Image Memory Cache Implementation

The Image-Loader library ships with a fairly complete memory cache implementation. Users can pick one of the ready-made strategies or tailor a strategy to their own project's needs.

The memory cache code lives in two packages, memory and memory.impl: the former is the specification view, the latter the implementation view. The memory package contains the MemoryCacheAware interface and the two abstract classes BaseMemoryCache and LimitedMemoryCache, while concrete implementations such as WeakMemoryCache live in the memory.impl package.

MemoryCacheAware

The MemoryCacheAware interface uses generics to define the basic contract a cache must fulfil: writing to the cache, reading from it, removing entries, and iterating over the keys:

/**
 * Interface for memory cache
 *
 * @author Sergey Tarasevich (nostra13[at]gmail[dot]com)
 * @since 1.0.0
 */
public interface MemoryCacheAware<K, V> {
    /**
     * Puts value into cache by key
     *
     * @return <b>true</b> - if value was put into cache successfully, <b>false</b> - if value was <b>not</b> put into
     *         cache
     */
    boolean put(K key, V value);

    /** Returns value by key. If there is no value for key then null will be returned. */
    V get(K key);

    /** Removes item by key */
    void remove(K key);

    /** Returns all keys of cache */
    Collection<K> keys();

    /** Remove all items from cache */
    void clear();
}

The memory cache itself is still backed by a hash table:

/** Stores not strong references to objects */
private final Map<K, Reference<V>> softMap = Collections.synchronizedMap(new HashMap<K, Reference<V>>());

BaseMemoryCache implements MemoryCacheAware

/**
 * Base memory cache. Implements common functionality for memory cache. Provides object references (
 * {@linkplain Reference not strong}) storing.
 *
 * @author Sergey Tarasevich (nostra13[at]gmail[dot]com)
 * @since 1.0.0
 */
public abstract class BaseMemoryCache<K, V> implements MemoryCacheAware<K, V> {

    /** Stores not strong references to objects */
    private final Map<K, Reference<V>> softMap = Collections.synchronizedMap(new HashMap<K, Reference<V>>());

    @Override
    public V get(K key) {
        V result = null;
        Reference<V> reference = softMap.get(key);
        if (reference != null) {
            result = reference.get();
        }
        return result;
    }

    @Override
    public boolean put(K key, V value) {
        softMap.put(key, createReference(value));
        return true;
    }

    @Override
    public void remove(K key) {
        softMap.remove(key);
    }

    @Override
    public Collection<K> keys() {
        synchronized (softMap) {
            return new HashSet<K>(softMap.keySet());
        }
    }

    @Override
    public void clear() {
        softMap.clear();
    }

    /** Creates {@linkplain Reference not strong} reference of value */
    protected abstract Reference<V> createReference(V value);
}

These are essentially plain HashMap operations. Which concrete Reference type to use depends on the kind of reference a subclass wants, so createReference is declared abstract and left for BaseMemoryCache's subclasses to implement.

WeakMemoryCache extends BaseMemoryCache

The simplest subclass of BaseMemoryCache is WeakMemoryCache, which only implements the abstract createReference method and returns a weak reference to the Bitmap:

public class WeakMemoryCache extends BaseMemoryCache<String, Bitmap> {
    @Override
    protected Reference<Bitmap> createReference(Bitmap value) {
        return new WeakReference<Bitmap>(value);
    }
}

LimitedMemoryCache extends BaseMemoryCache

The other subclass of BaseMemoryCache is LimitedMemoryCache. It is also abstract and defines the common machinery for size-limited memory caches. To enforce a size limit, the class first needs a default maximum cache size, and it must track how much of the cache is currently in use. Since that counter is updated from multiple threads, its increments and decrements must be atomic, so it is declared as an AtomicInteger:

private static final int MAX_NORMAL_CACHE_SIZE_IN_MB = 16;
private static final int MAX_NORMAL_CACHE_SIZE = MAX_NORMAL_CACHE_SIZE_IN_MB * 1024 * 1024;

private final int sizeLimit;

private final AtomicInteger cacheSize;
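The snippet above does not show how these fields are initialized. A minimal sketch, assuming the limit is passed in by the caller and exposed through a getSizeLimit() accessor (which the put() method below relies on):

public LimitedMemoryCache(int sizeLimit) {
    this.sizeLimit = sizeLimit;      // maximum cache size in bytes, chosen by the caller
    cacheSize = new AtomicInteger(); // currently used size, starts at 0
}

protected int getSizeLimit() {
    return sizeLimit;
}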

To measure how much data has been added to the cache, the class also needs a structure that holds strong references to the cached values. The softMap from BaseMemoryCache is not suitable for this bookkeeping, because it stores Reference objects whose referents may be reclaimed by the GC at any time, making the size accounting inaccurate. A LinkedList holds the strong references, again wrapped in its thread-safe (synchronized) form:

/**
 * Contains strong references to stored objects. Each next object is added last. If hard cache size will exceed
 * limit then first object is deleted (but it continue exist at {@link #softMap} and can be collected by GC at any
 * time)
 */
private final List<V> hardCache = Collections.synchronizedList(new LinkedList<V>());

When data is added to a LimitedMemoryCache, put() first computes the size of the new value (getSize()), adds it to the current cache usage, and compares the result against the size limit. If the limit would be exceeded, older entries are evicted according to the cache's eviction policy (removeNext()) until the new value fits. Both getSize() and removeNext() must be customized by subclasses, so they are declared abstract. This is a textbook application of the template method pattern: define the skeleton of an algorithm in one operation and defer certain steps to subclasses, which can then redefine those steps without changing the algorithm's overall structure. The put() method below is the algorithm skeleton, while getSize() and removeNext() are the steps subclasses redefine:

@Override
public boolean put(K key, V value) {
    boolean putSuccessfully = false;
    // Try to add value to hard cache
    int valueSize = getSize(value);
    int sizeLimit = getSizeLimit();
    int curCacheSize = cacheSize.get();
    if (valueSize < sizeLimit) {
        while (curCacheSize + valueSize > sizeLimit) {
            V removedValue = removeNext();
            if (hardCache.remove(removedValue)) {
                curCacheSize = cacheSize.addAndGet(-getSize(removedValue));
            }
        }
        hardCache.add(value);
        cacheSize.addAndGet(valueSize);

        putSuccessfully = true;
    }
    // Add value to soft cache
    super.put(key, value);
    return putSuccessfully;
}

protected abstract int getSize(V value);

protected abstract V removeNext();
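For the cacheSize counter to stay accurate, removals have to be accounted for as well. The class presumably overrides remove() and clear() along these lines (a sketch of the expected behavior, not quoted from the article):

@Override
public void remove(K key) {
    V value = super.get(key);
    // Subtract the value's size only if it was actually held in the hard cache
    if (value != null && hardCache.remove(value)) {
        cacheSize.addAndGet(-getSize(value));
    }
    super.remove(key);
}

@Override
public void clear() {
    hardCache.clear();
    cacheSize.set(0);
    super.clear();
}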

FIFOLimitedMemoryCache extends LimitedMemoryCache

FIFO eviction means that when the cache exceeds its limit, the oldest entries are removed first. A queue is a natural fit for this policy, and in Java a LinkedList can serve as one:

private final List<Bitmap> queue = Collections.synchronizedList(new LinkedList<Bitmap>());

The parent class LimitedMemoryCache already maintains a similar structure, hardCache, but it is private and used solely for size accounting, so subclasses cannot reuse it. FIFOLimitedMemoryCache therefore carries two near-identical structures and uses more memory than strictly necessary; in principle, queue could simply be the parent's hardCache. The author most likely kept them separate to preserve a clean class hierarchy: hardCache exists only for size bookkeeping, subclasses implement eviction in many different ways, and most of them have no need to touch hardCache directly, so trading a little memory in FIFOLimitedMemoryCache for a clearer structure is understandable. The write and removal paths look like this:

@Override
public boolean put(String key, Bitmap value) {
    if (super.put(key, value)) {
        queue.add(value);
        return true;
    } else {
        return false;
    }
}

@Override
public void remove(String key) {
    Bitmap value = super.get(key);
    if (value != null) {
        queue.remove(value);
    }
    super.remove(key);
}

@Override
public void clear() {
    queue.clear();
    super.clear();
}

As mentioned earlier, the parent classes leave a total of three abstract methods for subclasses to implement:

@Override
protected int getSize(Bitmap value) {
    return value.getRowBytes() * value.getHeight();
}

@Override
protected Bitmap removeNext() {
    return queue.remove(0);
}

@Override
protected Reference<Bitmap> createReference(Bitmap value) {
    return new WeakReference<Bitmap>(value);
}

getSize() shows the typical way to compute a Bitmap's memory footprint in bytes (bytes per row times number of rows). removeNext() removes the head of the queue, which is the entry that entered the queue first. The Bitmap reference here is a WeakReference. Compared with a soft reference, a weakly referenced object has a shorter lifetime: during a GC pass, any object reachable only through weak references is reclaimed regardless of whether memory is currently scarce, whereas softly referenced objects are reclaimed only when memory runs low. Using weak references therefore gives better protection against OutOfMemoryError.
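If longer-lived cache entries mattered more than OOM safety, a hypothetical variant could return a soft reference instead. This is only an illustration of the trade-off discussed above, not code from the library:

// Hypothetical alternative: the GC clears SoftReferences only under memory pressure,
// so cached bitmaps survive longer, but OutOfMemoryError becomes more likely.
@Override
protected Reference<Bitmap> createReference(Bitmap value) {
    return new SoftReference<Bitmap>(value);
}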

LargestLimitedMemoryCache extends LimitedMemoryCache

When this cache exceeds its limit, the eviction policy removes the entry that occupies the most space. That requires a mapping from each cached Bitmap to its size, again implemented with a HashMap:

/**
 * Contains strong references to stored objects (keys) and their sizes in bytes. If hard cache size exceeds the
 * limit then the largest object is deleted (but it continues to exist in {@link #softMap} and can be collected by
 * GC at any time)
 */
private final Map<Bitmap, Integer> valueSizes = Collections.synchronizedMap(new HashMap<Bitmap, Integer>());
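The article only shows removeNext() for this class; for the size map to be populated at all, put() and remove() presumably keep valueSizes in sync, roughly like this (a sketch modelled on the FIFO variant above, not quoted from the source):

@Override
public boolean put(String key, Bitmap value) {
    if (super.put(key, value)) {
        valueSizes.put(value, getSize(value)); // remember how much space this bitmap occupies
        return true;
    } else {
        return false;
    }
}

@Override
public void remove(String key) {
    Bitmap value = super.get(key);
    if (value != null) {
        valueSizes.remove(value);
    }
    super.remove(key);
}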

As before, the interesting code is in the implementations of the parent's abstract methods. getSize() and createReference() are identical to those in FIFOLimitedMemoryCache; removeNext() looks like this:

@Override
protected Bitmap removeNext() {
    Integer maxSize = null;
    Bitmap largestValue = null;
    Set<Entry<Bitmap, Integer>> entries = valueSizes.entrySet();
    synchronized (valueSizes) {
        for (Entry<Bitmap, Integer> entry : entries) {
            if (largestValue == null) {
                largestValue = entry.getKey();
                maxSize = entry.getValue();
            } else {
                Integer size = entry.getValue();
                if (size > maxSize) {
                    maxSize = size;
                    largestValue = entry.getKey();
                }
            }
        }
    }
    valueSizes.remove(largestValue);
    return largestValue;
}

The idea is simply to iterate over the valueSizes map, find the Bitmap that occupies the most space, and remove it. Note that the traversal must be wrapped in a synchronized block on the map: Collections.synchronizedMap only synchronizes individual calls, not iteration.

LRULimitedMemoryCache extends LimitedMemoryCache

LRU stands for Least Recently Used: the eviction policy removes the Bitmap that has gone unused for the longest time. Java happens to provide a data structure that implements exactly this bookkeeping: LinkedHashMap.

private static final int INITIAL_CAPACITY = 10;
private static final float LOAD_FACTOR = 1.1f;

/** Cache providing Least-Recently-Used logic */
private final Map<String, Bitmap> lruCache = Collections.synchronizedMap(new LinkedHashMap<String, Bitmap>(INITIAL_CAPACITY, LOAD_FACTOR, true));

LinkedHashMap is a subclass of HashMap. Note the third constructor argument, accessOrder, which sets the map's ordering mode: when true, entries are ordered by access (most recently accessed last); when false, by insertion order. To implement LRU, accessOrder must be true, and reads cannot simply go through BaseMemoryCache.get() as FIFOLimitedMemoryCache and LargestLimitedMemoryCache do; the method has to be overridden so that every read also touches the LinkedHashMap:

@Override
public Bitmap get(String key) {
    lruCache.get(key); // call "get" for LRU logic
    return super.get(key);
}

With accessOrder set to true, the LinkedHashMap maintains its entries in access order, so an Iterator over it yields the least recently used entry first, and the LRU policy falls out naturally:

@Override
protected Bitmap removeNext() {
    Bitmap mostLongUsedValue = null;
    synchronized (lruCache) {
        Iterator<Entry<String, Bitmap>> it = lruCache.entrySet().iterator();
        if (it.hasNext()) {
            Entry<String, Bitmap> entry = it.next();
            mostLongUsedValue = entry.getValue();
            it.remove();
        }
    }
    return mostLongUsedValue;
}
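A tiny standalone demo (not part of the library) makes the accessOrder behavior concrete: after an entry is read, it moves to the tail of the iteration order, so the head of the map is always the least recently used entry.

import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // Third constructor argument = accessOrder; true means iteration order follows access order
        Map<String, String> lru = new LinkedHashMap<String, String>(10, 1.1f, true);
        lru.put("a", "1");
        lru.put("b", "2");
        lru.put("c", "3");
        lru.get("a");                     // reading "a" moves it to the tail
        System.out.println(lru.keySet()); // prints [b, c, a] -> "b" is the LRU candidate
    }
}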

UsingFreqLimitedMemoryCache extends LimitedMemoryCache

The implementation is similar to LargestLimitedMemoryCache, except that instead of evicting the entry occupying the most space, it evicts the entry accessed the fewest times, so the supporting data structure is analogous:

/**
 * Contains strong references to stored objects (keys) and their usage counts. If hard cache size exceeds the
 * limit then the least frequently used object is deleted (but it continues to exist in {@link #softMap} and can
 * be collected by GC at any time)
 */
private final Map<Bitmap, Integer> usingCounts = Collections.synchronizedMap(new HashMap<Bitmap, Integer>());

Because access counts have to be recorded, the parent's get() method is overridden so that every read increments the counter for that value:

@Override
public Bitmap get(String key) {
    Bitmap value = super.get(key);
    // Increment usage count for value if value is contained in hardCache
    if (value != null) {
        Integer usageCount = usingCounts.get(value);
        if (usageCount != null) {
            usingCounts.put(value, usageCount + 1);
        }
    }
    return value;
}
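For usingCounts.get(value) above to ever return a non-null count, put() presumably seeds the counter when a value enters the cache; a sketch of that step (not quoted from the source):

@Override
public boolean put(String key, Bitmap value) {
    if (super.put(key, value)) {
        usingCounts.put(value, 0); // start counting accesses for the new value
        return true;
    } else {
        return false;
    }
}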

removeNext() iterates over the usingCounts map, compares the access counts, and removes the entry with the fewest accesses. As in LargestLimitedMemoryCache, the traversal has to be protected with a synchronized block:

@Override
protected Bitmap removeNext() {
    Integer minUsageCount = null;
    Bitmap leastUsedValue = null;
    Set<Entry<Bitmap, Integer>> entries = usingCounts.entrySet();
    synchronized (usingCounts) {
        for (Entry<Bitmap, Integer> entry : entries) {
            if (leastUsedValue == null) {
                leastUsedValue = entry.getKey();
                minUsageCount = entry.getValue();
            } else {
                Integer lastValueUsage = entry.getValue();
                if (lastValueUsage < minUsageCount) {
                    minUsageCount = lastValueUsage;
                    leastUsedValue = entry.getKey();
                }
            }
        }
    }
    usingCounts.remove(leastUsedValue);
    return leastUsedValue;
}

FuzzyKeyMemoryCache implements MemoryCacheAware

The cache mappings so far have been one key to one value. This decorator adds a different behavior: several distinct keys can be treated as equal, and a newly inserted key-value pair evicts any existing pair whose key is "equal" to the new one. The class is mainly for the Image-Loader library's internal use. FuzzyKeyMemoryCache uses a Comparator to decide whether two keys should be treated as equal; the core logic is in put():

/**
 * Decorator for {@link MemoryCacheAware}. Provides special feature for cache: some different keys are considered as
 * equals (using {@link Comparator comparator}). And when you try to put some value into cache by key so entries with
 * "equals" keys will be removed from cache before.<br />
 * <b>NOTE:</b> Used for internal needs. Normally you don't need to use this class.
 *
 * @author Sergey Tarasevich (nostra13[at]gmail[dot]com)
 * @since 1.0.0
 */
public class FuzzyKeyMemoryCache<K, V> implements MemoryCacheAware<K, V> {

    private final MemoryCacheAware<K, V> cache;
    private final Comparator<K> keyComparator;

    public FuzzyKeyMemoryCache(MemoryCacheAware<K, V> cache, Comparator<K> keyComparator) {
        this.cache = cache;
        this.keyComparator = keyComparator;
    }

    @Override
    public boolean put(K key, V value) {
        // Search equal key and remove this entry
        synchronized (cache) {
            K keyToRemove = null;
            for (K cacheKey : cache.keys()) {
                if (keyComparator.compare(key, cacheKey) == 0) {
                    keyToRemove = cacheKey;
                    break;
                }
            }
            if (keyToRemove != null) {
                cache.remove(keyToRemove);
            }
        }
        return cache.put(key, value);
    }

    @Override
    public V get(K key) {
        return cache.get(key);
    }

    @Override
    public void remove(K key) {
        cache.remove(key);
    }

    @Override
    public void clear() {
        cache.clear();
    }

    @Override
    public Collection<K> keys() {
        return cache.keys();
    }
}
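A hypothetical usage sketch (the key format and comparator below are made up for illustration and are not taken from the library): suppose cache keys are image URIs with a size suffix, and any two keys for the same URI should be treated as equal.

// Assumes keys always look like "<uri>_<width>x<height>", e.g. "http://example.com/a.png_100x100"
Comparator<String> keyComparator = new Comparator<String>() {
    @Override
    public int compare(String key1, String key2) {
        String uri1 = key1.substring(0, key1.lastIndexOf('_'));
        String uri2 = key2.substring(0, key2.lastIndexOf('_'));
        return uri1.compareTo(uri2);
    }
};

MemoryCacheAware<String, Bitmap> cache =
        new FuzzyKeyMemoryCache<String, Bitmap>(new WeakMemoryCache(), keyComparator);

// smallBitmap / bigBitmap stand for previously decoded Bitmap instances
cache.put("http://example.com/a.png_100x100", smallBitmap);
cache.put("http://example.com/a.png_200x200", bigBitmap); // evicts the "_100x100" entry before storing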

LimitedAgeMemoryCache implements MemoryCacheAware

Sometimes an extra behavior is wanted on top of the MemoryCacheAware implementations described above: entries that have been cached longer than a given age should be dropped. The decorator LimitedAgeMemoryCache provides exactly that. It keeps a map from each key to the time the value was stored; on every read it checks whether the entry has exceeded the maximum age and, if so, removes it first. The code is as follows:

/**
 * Decorator for {@link MemoryCacheAware}. Provides special feature for cache: if some cached object age exceeds defined
 * value then this object will be removed from cache.
 *
 * @author Sergey Tarasevich (nostra13[at]gmail[dot]com)
 * @see MemoryCacheAware
 * @since 1.3.1
 */
public class LimitedAgeMemoryCache<K, V> implements MemoryCacheAware<K, V> {

    private final MemoryCacheAware<K, V> cache;

    private final long maxAge;
    private final Map<K, Long> loadingDates = Collections.synchronizedMap(new HashMap<K, Long>());

    /**
     * @param cache  Wrapped memory cache
     * @param maxAge Max object age <b>(in seconds)</b>. If object age will exceed this value then it'll be removed from
     *               cache on next treatment (and therefore be reloaded).
     */
    public LimitedAgeMemoryCache(MemoryCacheAware<K, V> cache, long maxAge) {
        this.cache = cache;
        this.maxAge = maxAge * 1000; // to milliseconds
    }

    @Override
    public boolean put(K key, V value) {
        boolean putSuccesfully = cache.put(key, value);
        if (putSuccesfully) {
            loadingDates.put(key, System.currentTimeMillis());
        }
        return putSuccesfully;
    }

    @Override
    public V get(K key) {
        Long loadingDate = loadingDates.get(key);
        if (loadingDate != null && System.currentTimeMillis() - loadingDate > maxAge) {
            cache.remove(key);
            loadingDates.remove(key);
        }

        return cache.get(key);
    }

    @Override
    public void remove(K key) {
        cache.remove(key);
        loadingDates.remove(key);
    }

    @Override
    public Collection<K> keys() {
        return cache.keys();
    }

    @Override
    public void clear() {
        cache.clear();
        loadingDates.clear();
    }
}
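Because the decorator depends only on the MemoryCacheAware interface, it can wrap any of the implementations above. A minimal usage sketch (the 60-second age and the wrapped WeakMemoryCache are arbitrary choices for illustration):

// Wrap a simple weak-reference cache so that entries expire after 60 seconds
MemoryCacheAware<String, Bitmap> cache =
        new LimitedAgeMemoryCache<String, Bitmap>(new WeakMemoryCache(), 60);

cache.put("http://example.com/a.png", bitmap);          // bitmap: some previously decoded Bitmap
Bitmap cached = cache.get("http://example.com/a.png");  // null once older than 60 s (or reclaimed by GC)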


Reference: http://blog.csdn.net/asce1885/article/details/10306623
