LruCache


Initialization

    /**
     * @param maxSize for caches that do not override {@link #sizeOf}, this is
     *     the maximum number of entries in the cache. For all other caches,
     *     this is the maximum sum of the sizes of the entries in this cache.
     */
    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }

The constructor records that the cache may never hold more than maxSize, and initializes a LinkedHashMap. As we know, a LinkedHashMap is a HashMap that additionally maintains a linked list across its entries to record their order. When the third constructor argument (accessOrder) is set to true, iteration follows access order rather than insertion order, so the least recently used entry always comes first. That is exactly the property an LRU cache needs.
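A minimal standalone sketch (a demo of my own, not part of the framework source) of what accessOrder = true does:

    import java.util.LinkedHashMap;

    public class AccessOrderDemo {
        public static void main(String[] args) {
            // The third constructor argument switches iteration from
            // insertion order to access order: get() moves an entry to the tail.
            LinkedHashMap<String, String> map = new LinkedHashMap<>(0, 0.75f, true);
            map.put("a", "1");
            map.put("b", "2");
            map.put("c", "3");
            map.get("a"); // "a" becomes the most recently used entry
            System.out.println(map.keySet()); // [b, c, a] -- the LRU entry comes first
        }
    }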

There is another method you will usually want to override:

    /**
     * Returns the size of the entry for {@code key} and {@code value} in
     * user-defined units. The default implementation returns 1 so that size
     * is the number of entries and max size is the maximum number of entries.
     *
     * <p>An entry's size must not change while it is in the cache.
     */
    protected int sizeOf(K key, V value) {
        return 1;
    }

This method measures the size of an entry. If you don't override it, the default returns 1, so size counts entries and maxSize is a maximum entry count. If your maxSize is measured in bytes or kilobytes instead, overriding sizeOf is mandatory.
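For instance, the well-known Bitmap-cache pattern (a sketch; the kilobyte sizing and the 1/8-of-heap budget are assumptions of this example) overrides sizeOf so that maxSize means kilobytes rather than entry count:

    // Sketch: a Bitmap cache sized in kilobytes, using 1/8 of the app's max heap.
    int cacheSize = (int) (Runtime.getRuntime().maxMemory() / 1024 / 8);
    LruCache<String, Bitmap> bitmapCache = new LruCache<String, Bitmap>(cacheSize) {
        @Override
        protected int sizeOf(String key, Bitmap value) {
            // Report each entry's size in KB, matching the units of maxSize.
            return value.getByteCount() / 1024;
        }
    };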


Putting into the cache

    public final V put(K key, V value) {
        if (key == null || value == null) {
            throw new NullPointerException("key == null || value == null");
        }

        V previous;
        synchronized (this) {
            putCount++;
            size += safeSizeOf(key, value);
            previous = map.put(key, value);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, value);
        }

        trimToSize(maxSize);
        return previous;
    }

If the key or the value is null, put() throws a NullPointerException. Otherwise, the put counter is incremented and the size counter grows by the size of the new entry. If the map already contained a value for this key, our put is an overwrite, so we subtract the size of the previous value, which would otherwise be double-counted. Next, it calls:

    if (previous != null) {
        entryRemoved(false, key, previous, value);
    }

Interestingly, this method is an empty stub left for us to override; it effectively acts as a callback:

    protected void entryRemoved(boolean evicted, K key, V oldValue, V newValue) {}

In other words, when we put an entry and the map already holds the same key, an entry has effectively been erased from memory, and the key, oldValue and newValue are all handed to us. Note that evicted is false here; keep that in mind and read on. Next, trimToSize() is called:

    public void trimToSize(int maxSize) {
        while (true) {
            K key;
            V value;
            synchronized (this) {
                if (size < 0 || (map.isEmpty() && size != 0)) {
                    throw new IllegalStateException(getClass().getName()
                            + ".sizeOf() is reporting inconsistent results!");
                }

                if (size <= maxSize || map.isEmpty()) {
                    break;
                }

                Map.Entry<K, V> toEvict = map.entrySet().iterator().next();
                key = toEvict.getKey();
                value = toEvict.getValue();
                map.remove(key);
                size -= safeSizeOf(key, value);
                evictionCount++;
            }

            entryRemoved(true, key, value, null);
        }
    }

This method simply loops, evicting the eldest entries, until the cache's total size is no greater than the maxSize we passed at construction, or until the internal LinkedHashMap is empty. Interestingly, every eviction calls entryRemoved() again, this time with the arguments true, key, value, null.
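Here is a sketch of how one might override this hook (maxKb and the Bitmap value type are illustrative); the evicted flag distinguishes the two call sites we just saw:

    LruCache<String, Bitmap> cache = new LruCache<String, Bitmap>(maxKb) {
        @Override
        protected void entryRemoved(boolean evicted, String key,
                                    Bitmap oldValue, Bitmap newValue) {
            if (evicted) {
                // true: trimToSize() pushed this entry out to make room;
                // e.g. demote oldValue to a disk cache here.
            } else {
                // false: put() replaced it (newValue != null)
                // or remove() deleted it (newValue == null).
            }
        }
    };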


Getting from the cache

    public final V get(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V mapValue;
        synchronized (this) {
            mapValue = map.get(key);
            if (mapValue != null) {
                hitCount++;
                return mapValue;
            }
            missCount++;
        }

        /*
         * Attempt to create a value. This may take a long time, and the map
         * may be different when create() returns. If a conflicting value was
         * added to the map while create() was working, we leave that value in
         * the map and release the created value.
         */
        V createdValue = create(key);
        if (createdValue == null) {
            return null;
        }

        synchronized (this) {
            createCount++;
            mapValue = map.put(key, createdValue);
            if (mapValue != null) {
                // There was a conflict so undo that last put
                map.put(key, mapValue);
            } else {
                size += safeSizeOf(key, createdValue);
            }
        }

        if (mapValue != null) {
            entryRemoved(false, key, createdValue, mapValue);
            return mapValue;
        } else {
            trimToSize(maxSize);
            return createdValue;
        }
    }

Step one: if the key passed in is null, throw an exception. Step two: if the value is found in the cache, increment the hit counter and return the cached value; otherwise increment the miss counter and continue. Step three is more interesting:

    V createdValue = create(key);
    if (createdValue == null) {
        return null;
    }

If the LruCache holds no entry for the key, one is created. Let's step into create():

    protected V create(K key) {
        return null;
    }

So by default, step three just hands us back null. Isn't that stringing us along? Let's check the Javadoc:

If a value for key exists in the cache when this method
returns, the created value will be released with entryRemoved and discarded.

It turns out this method is meant to be used together with the entryRemoved() we analyzed earlier, for example to bridge to a disk cache. Now you can see why both the removed value and the new value are handed back through the callback.
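A sketch of a create() override that computes or loads a value on a miss; loadFromDisk is a hypothetical helper standing in for a disk-cache lookup:

    LruCache<String, byte[]> cache = new LruCache<String, byte[]>(1024) {
        @Override
        protected byte[] create(String key) {
            // Called by get() on a miss, outside the lock, so it may be slow.
            // Return null to keep the default "miss returns null" behavior.
            return loadFromDisk(key); // hypothetical helper
        }
    };

Now back to get(). Step four is the following synchronized block: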

    synchronized (this) {
        createCount++;
        mapValue = map.put(key, createdValue);
        if (mapValue != null) {
            // There was a conflict so undo that last put
            map.put(key, mapValue);
        } else {
            size += safeSizeOf(key, createdValue);
        }
    }

We put the newly created value into the LinkedHashMap. If mapValue is not null, the map already contained a value for that key, meaning a conflicting value was added (for instance by another thread) while create() was running; our put was therefore wrong, and we undo it by putting mapValue back. If mapValue is null, the put stands, and we add the new value's size to the size counter. Continuing:

    if (mapValue != null) {
        entryRemoved(false, key, createdValue, mapValue);
        return mapValue;
    } else {
        trimToSize(maxSize);
        return createdValue;
    }

The final step is simple: based on mapValue, it either fires the entryRemoved() callback and returns the existing value, or trims the cache and returns the value we created.


Clearing the cache

Removing a single entry needs little explanation: the entry is simply removed from the LinkedHashMap by its key.

    public final V remove(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V previous;
        synchronized (this) {
            previous = map.remove(key);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, null);
        }

        return previous;
    }

Now let's look at the method that clears the whole cache:

    public final void evictAll() {
        trimToSize(-1); // -1 will evict 0-sized elements
    }

It passes -1 to trimToSize(). Stepping into it:

    public void trimToSize(int maxSize) {
        while (true) {
            K key;
            V value;
            synchronized (this) {
                if (size < 0 || (map.isEmpty() && size != 0)) {
                    throw new IllegalStateException(getClass().getName()
                            + ".sizeOf() is reporting inconsistent results!");
                }

                if (size <= maxSize || map.isEmpty()) {
                    break;
                }

                Map.Entry<K, V> toEvict = map.entrySet().iterator().next();
                key = toEvict.getKey();
                value = toEvict.getValue();
                map.remove(key);
                size -= safeSizeOf(key, value);
                evictionCount++;
            }

            entryRemoved(true, key, value, null);
        }
    }

With maxSize now -1, the loop keeps evicting entries until the LinkedHashMap is empty and size drops to 0.
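Putting the pieces together, here is a tiny demonstration (a sketch using the default entry-count sizing) of the eviction behavior analyzed above:

    LruCache<String, String> cache = new LruCache<>(2); // at most 2 entries
    cache.put("a", "1");
    cache.put("b", "2");
    cache.get("a");                     // "a" is now the most recently used
    cache.put("c", "3");                // over maxSize: trimToSize evicts "b"
    System.out.println(cache.get("b")); // null -- evicted as least recently used
    System.out.println(cache.get("a")); // "1"  -- survived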


DiskLruCache


Initialization

    private DiskLruCache(File directory, int appVersion, int valueCount, long maxSize) {
        this.directory = directory;
        this.appVersion = appVersion;
        this.journalFile = new File(directory, JOURNAL_FILE);
        this.journalFileTmp = new File(directory, JOURNAL_FILE_TMP);
        this.valueCount = valueCount;
        this.maxSize = maxSize;
    }

DiskLruCache's constructor is private, which means we cannot new one up directly from outside; we have to go through open() to initialize it:

    public static DiskLruCache open(File directory, int appVersion, int valueCount, long maxSize)
            throws IOException {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        if (valueCount <= 0) {
            throw new IllegalArgumentException("valueCount <= 0");
        }

        // prefer to pick up where we left off
        DiskLruCache cache = new DiskLruCache(directory, appVersion, valueCount, maxSize);
        if (cache.journalFile.exists()) {
            try {
                cache.readJournal();
                cache.processJournal();
                cache.journalWriter = new BufferedWriter(new FileWriter(cache.journalFile, true),
                        IO_BUFFER_SIZE);
                return cache;
            } catch (IOException journalIsCorrupt) {
                // System.logW("DiskLruCache " + directory + " is corrupt: "
                //         + journalIsCorrupt.getMessage() + ", removing");
                cache.delete();
            }
        }

        // create a new empty cache
        directory.mkdirs();
        cache = new DiskLruCache(directory, appVersion, valueCount, maxSize);
        cache.rebuildJournal();
        return cache;
    }

Let's see what it actually does. First, it constructs a DiskLruCache object; the parameters it takes are well documented:

    * @param directory a writable directory
    * @param appVersion
    * @param valueCount the number of values per cache entry. Must be positive.
    * @param maxSize the maximum number of bytes this cache should use to store

Moving on: if the cache's journal file already exists, it first reads the journal, then processes it, then creates a BufferedWriter for appending to the journal, and finally returns the DiskLruCache object. If the journal does not exist (or is corrupt), a fresh cache is created and rebuildJournal() is called; nothing tricky there.
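A minimal usage sketch (the directory name and sizes are illustrative, and context is an assumed android.content.Context):

    File dir = new File(context.getCacheDir(), "objects");
    int appVersion = 1;              // bump this to invalidate every existing entry
    int valueCount = 1;              // one file stored per key
    long maxSize = 10 * 1024 * 1024; // 10 MiB on disk
    DiskLruCache cache = DiskLruCache.open(dir, appVersion, valueCount, maxSize);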


The journal file

    * This cache uses a journal file named "journal". A typical journal file
    * looks like this:
    *     libcore.io.DiskLruCache
    *     1
    *     100
    *     2
    *
    *     CLEAN 3400330d1dfc7f3f7f4b8d4d803dfcf6 832 21054
    *     DIRTY 335c4c6028171cfddfbaae1a9c313c52
    *     CLEAN 335c4c6028171cfddfbaae1a9c313c52 3934 2342
    *     REMOVE 335c4c6028171cfddfbaae1a9c313c52
    *     DIRTY 1ab96a171faeeee38496d8b330771a7a
    *     CLEAN 1ab96a171faeeee38496d8b330771a7a 1600 234
    *     READ 335c4c6028171cfddfbaae1a9c313c52
    *     READ 3400330d1dfc7f3f7f4b8d4d803dfcf6

We will analyze the operations on the journal with this format as our reference.

1. Creating the journal file

    private synchronized void rebuildJournal() throws IOException {
        if (journalWriter != null) {
            journalWriter.close();
        }

        Writer writer = new BufferedWriter(new FileWriter(journalFileTmp), IO_BUFFER_SIZE);
        writer.write(MAGIC);
        writer.write("\n");
        writer.write(VERSION_1);
        writer.write("\n");
        writer.write(Integer.toString(appVersion));
        writer.write("\n");
        writer.write(Integer.toString(valueCount));
        writer.write("\n");
        writer.write("\n");

        for (Entry entry : lruEntries.values()) {
            if (entry.currentEditor != null) {
                writer.write(DIRTY + ' ' + entry.key + '\n');
            } else {
                writer.write(CLEAN + ' ' + entry.key + entry.getLengths() + '\n');
            }
        }

        writer.close();
        journalFileTmp.renameTo(journalFile);
        journalWriter = new BufferedWriter(new FileWriter(journalFile, true), IO_BUFFER_SIZE);
    }

It first writes the header, then writes the state of the in-memory LinkedHashMap into the journal, covering both the "dirty" and the "clean" entries. Note that it writes to a temporary file and only then renames it over the real journal. Finally it creates a Writer for the journal file.

2. Reading the journal file

    private void readJournal() throws IOException {
        InputStream in = new BufferedInputStream(new FileInputStream(journalFile), IO_BUFFER_SIZE);
        try {
            String magic = readAsciiLine(in);
            String version = readAsciiLine(in);
            String appVersionString = readAsciiLine(in);
            String valueCountString = readAsciiLine(in);
            String blank = readAsciiLine(in);
            if (!MAGIC.equals(magic)
                    || !VERSION_1.equals(version)
                    || !Integer.toString(appVersion).equals(appVersionString)
                    || !Integer.toString(valueCount).equals(valueCountString)
                    || !"".equals(blank)) {
                throw new IOException("unexpected journal header: ["
                        + magic + ", " + version + ", " + valueCountString + ", " + blank + "]");
            }

            while (true) {
                try {
                    readJournalLine(readAsciiLine(in));
                } catch (EOFException endOfJournal) {
                    break;
                }
            }
        } finally {
            closeQuietly(in);
        }
    }

It first reads the header; if it doesn't match what we expect of our journal, an IOException is thrown. If it does, the body is then read line by line:

    private void readJournalLine(String line) throws IOException {
        String[] parts = line.split(" ");
        if (parts.length < 2) {
            throw new IOException("unexpected journal line: " + line);
        }

        String key = parts[1];
        if (parts[0].equals(REMOVE) && parts.length == 2) {
            lruEntries.remove(key);
            return;
        }

        Entry entry = lruEntries.get(key);
        if (entry == null) {
            entry = new Entry(key);
            lruEntries.put(key, entry);
        }

        if (parts[0].equals(CLEAN) && parts.length == 2 + valueCount) {
            entry.readable = true;
            entry.currentEditor = null;
            entry.setLengths(copyOfRange(parts, 2, parts.length));
        } else if (parts[0].equals(DIRTY) && parts.length == 2) {
            entry.currentEditor = new Editor(entry);
        } else if (parts[0].equals(READ) && parts.length == 2) {
            // this work was already done by calling lruEntries.get()
        } else {
            throw new IOException("unexpected journal line: " + line);
        }
    }

Fields within a line are separated by spaces, so the parts array must have at least two elements; a shorter line is malformed and rejected. If the opcode is REMOVE, the record is deleted from the LinkedHashMap. Otherwise, the entry for the key is looked up in memory (and created if absent). Next:

    if (parts[0].equals(CLEAN) && parts.length == 2 + valueCount) {
        entry.readable = true;
        entry.currentEditor = null;
        entry.setLengths(copyOfRange(parts, 2, parts.length));
    } else if (parts[0].equals(DIRTY) && parts.length == 2) {
        entry.currentEditor = new Editor(entry);
    } else if (parts[0].equals(READ) && parts.length == 2) {
        // this work was already done by calling lruEntries.get()
    } else {
        throw new IOException("unexpected journal line: " + line);
    }

The "clean data" branch is easy to follow; to understand the DIRTY and READ records, we need to look further below.

3. Processing the journal file

    private void processJournal() throws IOException {
        deleteIfExists(journalFileTmp);
        for (Iterator<Entry> i = lruEntries.values().iterator(); i.hasNext(); ) {
            Entry entry = i.next();
            if (entry.currentEditor == null) {
                for (int t = 0; t < valueCount; t++) {
                    size += entry.lengths[t];
                }
            } else {
                entry.currentEditor = null;
                for (int t = 0; t < valueCount; t++) {
                    deleteIfExists(entry.getCleanFile(t));
                    deleteIfExists(entry.getDirtyFile(t));
                }
                i.remove();
            }
        }
    }

As we can see, reading the journal built an in-memory mirror of it, and this code's job is now clear: entry.currentEditor decides whether the data is usable. If currentEditor is null, the entry is consistent, so its lengths are added to size; if it is not null, the entry was left mid-edit, so its files are deleted and the entry is dropped.


Writing cache files

    public Editor edit(String key) throws IOException {
        return edit(key, ANY_SEQUENCE_NUMBER);
    }

    private synchronized Editor edit(String key, long expectedSequenceNumber) throws IOException {
        checkNotClosed();
        validateKey(key);
        Entry entry = lruEntries.get(key);
        if (expectedSequenceNumber != ANY_SEQUENCE_NUMBER
                && (entry == null || entry.sequenceNumber != expectedSequenceNumber)) {
            return null; // snapshot is stale
        }
        if (entry == null) {
            entry = new Entry(key);
            lruEntries.put(key, entry);
        } else if (entry.currentEditor != null) {
            return null; // another edit is in progress
        }

        Editor editor = new Editor(entry);
        entry.currentEditor = editor;

        // flush the journal before creating files to prevent file leaks
        journalWriter.write(DIRTY + ' ' + key + '\n');
        journalWriter.flush();
        return editor;
    }

Before we can actually write a file, we need an Editor for it, and an Editor operates on a single Entry: the entry looked up by the file's key gets its currentEditor set to the freshly created editor, and a DIRTY record for this key is written to the journal. Next, inside the Editor, an OutputStream is created for writing the file:

    public OutputStream newOutputStream(int index) throws IOException {
        synchronized (DiskLruCache.this) {
            if (entry.currentEditor != this) {
                throw new IllegalStateException();
            }
            return new FaultHidingOutputStream(new FileOutputStream(entry.getDirtyFile(index)));
        }
    }

As for index, we usually pass 0. An Entry can hold several values internally, i.e. one key can map to several files; that is controlled by the valueCount property set at initialization.
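A sketch of a complete write against index 0 (assuming the cache was opened with valueCount == 1, and data is an illustrative byte array):

    DiskLruCache.Editor editor = cache.edit("mykey");
    if (editor != null) { // null means another edit of this key is in progress
        OutputStream out = editor.newOutputStream(0);
        try {
            out.write(data); // writes go to the entry's dirty file
        } finally {
            out.close();
        }
        editor.commit(); // on success the dirty file becomes the clean file
    }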

When the write finishes, or fails with an exception, one of these two methods is called:

    /**
     * Commits this edit so it is visible to readers. This releases the
     * edit lock so another edit may be started on the same key.
     */
    public void commit() throws IOException {
        if (hasErrors) {
            completeEdit(this, false);
            remove(entry.key); // the previous entry is stale
        } else {
            completeEdit(this, true);
        }
    }

    /**
     * Aborts this edit. This releases the edit lock so another edit may be
     * started on the same key.
     */
    public void abort() throws IOException {
        completeEdit(this, false);
    }

hasErrors is set to true when an exception occurs during writing. Either way, success or failure, completeEdit() is called:

    private synchronized void completeEdit(Editor editor, boolean success) throws IOException {
        Entry entry = editor.entry;
        if (entry.currentEditor != editor) {
            throw new IllegalStateException();
        }

        // if this edit is creating the entry for the first time, every index must have a value
        if (success && !entry.readable) {
            for (int i = 0; i < valueCount; i++) {
                if (!entry.getDirtyFile(i).exists()) {
                    editor.abort();
                    throw new IllegalStateException("edit didn't create file " + i);
                }
            }
        }

        for (int i = 0; i < valueCount; i++) {
            File dirty = entry.getDirtyFile(i);
            if (success) {
                if (dirty.exists()) {
                    File clean = entry.getCleanFile(i);
                    dirty.renameTo(clean);
                    long oldLength = entry.lengths[i];
                    long newLength = clean.length();
                    entry.lengths[i] = newLength;
                    size = size - oldLength + newLength;
                }
            } else {
                deleteIfExists(dirty);
            }
        }

        redundantOpCount++;
        entry.currentEditor = null;
        if (entry.readable | success) {
            entry.readable = true;
            journalWriter.write(CLEAN + ' ' + entry.key + entry.getLengths() + '\n');
            if (success) {
                entry.sequenceNumber = nextSequenceNumber++;
            }
        } else {
            lruEntries.remove(entry.key);
            journalWriter.write(REMOVE + ' ' + entry.key + '\n');
        }

        if (size > maxSize || journalRebuildRequired()) {
            executorService.submit(cleanupCallable);
        }
    }

Step one: the editor must be the current Entry's currentEditor, otherwise the state is inconsistent. Step two: what does the condition success && !entry.readable mean? Recall that when we open the DiskLruCache, readable is set to true from the journal, so readable == true means at least one earlier write already succeeded. If it is not true, this is the first commit for this entry, and every index must then have a dirty file, because the output stream we hand out writes to the dirtyFile:

    public OutputStream newOutputStream(int index) throws IOException {
        synchronized (DiskLruCache.this) {
            if (entry.currentEditor != this) {
                throw new IllegalStateException();
            }
            return new FaultHidingOutputStream(new FileOutputStream(entry.getDirtyFile(index)));
        }
    }

Now it should be clear at a glance. Step three: on success, the dirtyFile is renamed to the cleanFile, which is what readers see, and the cache size is updated accordingly; on failure, the dirtyFile is deleted. Step four: readable is set to true (this matters for the first write) and a CLEAN record is written to the journal, or a REMOVE record if the edit failed. Step five is the housekeeping done on the cache:

    private final Callable<Void> cleanupCallable = new Callable<Void>() {
        @Override public Void call() throws Exception {
            synchronized (DiskLruCache.this) {
                if (journalWriter == null) {
                    return null; // closed
                }
                trimToSize();
                if (journalRebuildRequired()) {
                    rebuildJournal();
                    redundantOpCount = 0;
                }
            }
            return null;
        }
    };

As for when the journal is rebuilt, see journalRebuildRequired():

    private boolean journalRebuildRequired() {
        final int REDUNDANT_OP_COMPACT_THRESHOLD = 2000;
        return redundantOpCount >= REDUNDANT_OP_COMPACT_THRESHOLD
                && redundantOpCount >= lruEntries.size();
    }

This is a strategy to keep the journal from growing without bound: redundantOpCount counts the operations appended to the journal, and once it passes the threshold (and outnumbers the entries), the journal gets compacted.


Reading cache files

    public synchronized Snapshot get(String key) throws IOException {
        checkNotClosed();
        validateKey(key);
        Entry entry = lruEntries.get(key);
        if (entry == null) {
            return null;
        }

        if (!entry.readable) {
            return null;
        }

        /*
         * Open all streams eagerly to guarantee that we see a single published
         * snapshot. If we opened streams lazily then the streams could come
         * from different edits.
         */
        InputStream[] ins = new InputStream[valueCount];
        try {
            for (int i = 0; i < valueCount; i++) {
                ins[i] = new FileInputStream(entry.getCleanFile(i));
            }
        } catch (FileNotFoundException e) {
            // a file must have been deleted manually!
            return null;
        }

        redundantOpCount++;
        journalWriter.append(READ + ' ' + key + '\n');
        if (journalRebuildRequired()) {
            executorService.submit(cleanupCallable);
        }

        return new Snapshot(key, entry.sequenceNumber, ins);
    }

Compared with writing, reading is straightforward. The first half is readability checks; then the entry's cleanFiles are opened and their input streams are wrapped in a Snapshot for the caller to read from.
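A sketch of reading the entry back (same key and index as in the write sketch earlier):

    DiskLruCache.Snapshot snapshot = cache.get("mykey");
    if (snapshot != null) {
        InputStream in = snapshot.getInputStream(0);
        try {
            // consume the cached bytes from 'in'
        } finally {
            snapshot.close(); // closes every stream held by the snapshot
        }
    }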


Removing cache files

    public synchronized boolean remove(String key) throws IOException {
        checkNotClosed();
        validateKey(key);
        Entry entry = lruEntries.get(key);
        if (entry == null || entry.currentEditor != null) {
            return false;
        }

        for (int i = 0; i < valueCount; i++) {
            File file = entry.getCleanFile(i);
            if (!file.delete()) {
                throw new IOException("failed to delete " + file);
            }
            size -= entry.lengths[i];
            entry.lengths[i] = 0;
        }

        redundantOpCount++;
        journalWriter.append(REMOVE + ' ' + key + '\n');
        lruEntries.remove(key);

        if (journalRebuildRequired()) {
            executorService.submit(cleanupCallable);
        }

        return true;
    }

Having analyzed the write path, this one is trivial: delete the entry's cleanFiles, append a REMOVE record to the journal, and remove the entry from the LinkedHashMap.

Summary

1. Both LruCache and DiskLruCache maintain a LinkedHashMap internally, and the LinkedHashMap in turn maintains a linked list recording insertion/access order; thanks to this property, the least recently used entries can be evicted first. The key difference is that in LruCache the value is the cached object itself, whereas in DiskLruCache the value mirrors a journal record: from that value we map to the actual entry on disk, against which the file operations, such as writing and reading, are performed.

2. One thing worth noting is how DiskLruCache knows the state of a write from the journal: via the opcodes such as DIRTY, CLEAN and REMOVE. When we first write to an Entry, the bytes go to its dirtyFile; only after the edit succeeds is the Entry's readable flag set to true and the dirtyFile renamed to the cleanFile. Reads check readable, and only when it is true do they open the cleanFile.
