The earlier article Android Anonymous Shared Memory (Ashmem) analyzed anonymous shared memory. Its most important use is View drawing: Android renders views to the screen frame by frame, and every frame occupies a certain amount of memory. Through the Ashmem mechanism, the APP shares drawing data with SurfaceFlinger, which improves graphics performance. This article looks at how Android allocates this drawing memory and draws with it on top of Ashmem:

Allocation of View Drawing Memory

The earlier article on the window-adding flow described that when a window is added, WMS allocates a WindowState for the APP to identify the current window and to manage it, and at the same time asks SurfaceFlinger to allocate a Layer, the abstract drawing layer. When SurfaceFlinger allocates the Layer, it creates two key Binder objects that are used to fill in the Surface on the WMS side. One is sp<IBinder> handle: a per-window handle so that later communication between WMS and SurfaceFlinger can locate the corresponding layer. The other is sp<IGraphicBufferProducer> gbp: the key object for shared-memory allocation, which also acts as a Binder proxy used to pass the handle of the shared memory. Note that at this point only the objects are created; no per-frame memory has actually been allocated. The memory is not requested until drawing really happens. Let's first look at the allocation flow from three angles:

  • When allocation happens: the timing
  • How allocation is done: the mechanism
  • How it crosses processes: the transfer

A Surface is abstracted as a canvas: whoever holds a Surface can draw, and the underlying reason is that the Surface holds a piece of memory that can be drawn into. This memory is requested on demand by the APP side from SurfaceFlinger through sp<IGraphicBufferProducer> gbp. So let's first see how the APP side obtains this sp<IGraphicBufferProducer> proxy, and then how it uses it to request memory. When WMS asks SurfaceFlinger to fill in the Surface, SurfaceFlinger is requested to create this producer and hand its handle back:

sp<SurfaceControl> SurfaceComposerClient::createSurface(
        const String8& name, uint32_t w, uint32_t h, PixelFormat format, uint32_t flags)
{
    sp<SurfaceControl> sur;
    ...
    if (mStatus == NO_ERROR) {
        sp<IBinder> handle;
        sp<IGraphicBufferProducer> gbp;
        // key point 1: ask SurfaceFlinger to create the surface and fill handle/gbp
        status_t err = mClient->createSurface(name, w, h, format, flags,
                &handle, &gbp);
        if (err == NO_ERROR) {
            sur = new SurfaceControl(this, handle, gbp);
        }
    }
    return sur;
}

At key point 1, all that happens is that an sp<IGraphicBufferProducer> gbp container is created and SurfaceFlinger is asked to fill it in. On receiving the request, SurfaceFlinger creates, for WMS, the Layer corresponding to the APP-side window, allocates its sp<IGraphicBufferProducer>, fills it into the Surface and returns it to the APP:

status_t SurfaceFlinger::createNormalLayer(const sp<Client>& client,
        const String8& name, uint32_t w, uint32_t h, uint32_t flags, PixelFormat& format,
        sp<IBinder>* handle, sp<IGraphicBufferProducer>* gbp, sp<Layer>* outLayer)
{
    ...
    *outLayer = new Layer(this, client, name, w, h, flags);
    status_t err = (*outLayer)->setBuffers(w, h, format, flags);
    if (err == NO_ERROR) {
        *handle = (*outLayer)->getHandle();
        *gbp = (*outLayer)->getProducer();
    }
    return err;
}

void Layer::onFirstRef() {
    // Creates a custom BufferQueue for SurfaceFlingerConsumer to use
    sp<IGraphicBufferProducer> producer;
    sp<IGraphicBufferConsumer> consumer;
    BufferQueue::createBufferQueue(&producer, &consumer);
    mProducer = new MonitoredProducer(producer, mFlinger);
    mSurfaceFlingerConsumer = new SurfaceFlingerConsumer(consumer, mTextureName);
    mSurfaceFlingerConsumer->setConsumerUsageBits(getEffectiveUsage(0));
    mSurfaceFlingerConsumer->setContentsChangedListener(this);
    mSurfaceFlingerConsumer->setName(mName);

#ifdef TARGET_DISABLE_TRIPLE_BUFFERING
#warning "disabling triple buffering"
    mSurfaceFlingerConsumer->setDefaultMaxBufferCount(2);
#else
    mSurfaceFlingerConsumer->setDefaultMaxBufferCount(3);
#endif

    const sp<const DisplayDevice> hw(mFlinger->getDefaultDisplayDevice());
    updateTransformHint(hw);
}

TARGET_DISABLE_TRIPLE_BUFFERING decides whether double or triple buffering is used, by setting the default MaxBufferCount to 2 or 3.

void BufferQueue::createBufferQueue(sp<IGraphicBufferProducer>* outProducer,
        sp<IGraphicBufferConsumer>* outConsumer,
        const sp<IGraphicBufferAlloc>& allocator) {
    sp<BufferQueueCore> core(new BufferQueueCore(allocator));
    sp<IGraphicBufferProducer> producer(new BufferQueueProducer(core));
    sp<IGraphicBufferConsumer> consumer(new BufferQueueConsumer(core));
    *outProducer = producer;
    *outConsumer = consumer;
}

The two functions above show the Producer/Consumer model very clearly: every Layer has its own producer and consumer. The sp<IGraphicBufferProducer> gbp actually corresponds to a BufferQueueProducer, and BufferQueueProducer is a Binder object. On the server side it is:

class BufferQueueProducer : public BnGraphicBufferProducer,
                            private IBinder::DeathRecipient {
};

On the APP side it is:

class BpGraphicBufferProducer : public BpInterface<IGraphicBufferProducer> {
};

After the IGraphicBufferProducer Binder entity is created inside SurfaceFlinger, it is packed into a Surface object and passed to the APP side through Binder communication. The APP side restores it by deserialization, as follows:

status_t Surface::readFromParcel(const Parcel* parcel, bool nameAlreadyRead) {
    if (parcel == nullptr) return BAD_VALUE;

    status_t res = OK;
    if (!nameAlreadyRead) {
        name = readMaybeEmptyString16(parcel);
        // Discard this for now
        int isSingleBuffered;
        res = parcel->readInt32(&isSingleBuffered);
        if (res != OK) {
            return res;
        }
    }

    sp<IBinder> binder;
    res = parcel->readStrongBinder(&binder);
    if (res != OK) return res;

    graphicBufferProducer = interface_cast<IGraphicBufferProducer>(binder);
    return OK;
}

At this point the APP side holds BpGraphicBufferProducer, its handle for requesting memory. It really comes into play on the first draw. For ease of understanding, let's look at software drawing first: drawSoftware in ViewRootImpl.

private boolean drawSoftware(Surface surface, AttachInfo attachInfo, int xoff, int yoff,
        boolean scalingRequired, Rect dirty) {
    final Canvas canvas;
    try {
        final int left = dirty.left;
        final int top = dirty.top;
        final int right = dirty.right;
        final int bottom = dirty.bottom;
        // key point 1: locking the canvas is where the drawing memory is requested
        canvas = mSurface.lockCanvas(dirty);
        ...
    }
    try {
        try {
            ...
            mView.draw(canvas);
            ...
        }
        ...
    } finally {
        try {
            // after drawing, post the buffer back to SurfaceFlinger
            surface.unlockCanvasAndPost(canvas);
        } catch (IllegalArgumentException e) {
            ...
        }
    }
    ...
}

Look at key point 1 first: this is exactly where the memory gets allocated, and it drops straight into the native layer:

static jlong nativeLockCanvas(JNIEnv* env, jclass clazz,
        jlong nativeObject, jobject canvasObj, jobject dirtyRectObj) {
    sp<Surface> surface(reinterpret_cast<Surface*>(nativeObject));
    ...
    status_t err = surface->lock(&outBuffer, dirtyRectPtr);
    ...
    sp<Surface> lockedSurface(surface);
    lockedSurface->incStrong(&sRefBaseOwner);
    return (jlong) lockedSurface.get();
}

lock() in Surface.cpp in turn calls dequeueBuffer to request the buffer memory:

int Surface::dequeueBuffer(android_native_buffer_t** buffer, int* fenceFd) {
    ...
    int buf = -1;
    sp<Fence> fence;
    nsecs_t now = systemTime();
    status_t result = mGraphicBufferProducer->dequeueBuffer(&buf, &fence,
            reqWidth, reqHeight, reqFormat, reqUsage);
    ...
    if ((result & IGraphicBufferProducer::BUFFER_NEEDS_REALLOCATION) || gbuf == 0) {
        result = mGraphicBufferProducer->requestBuffer(buf, &gbuf);
        ...
    }
    ...
}

Eventually BpGraphicBufferProducer::dequeueBuffer asks the server side to allocate the memory. This is where anonymous shared memory comes in: in Linux everything is a file, and shared memory is also treated as a file. Once allocation succeeds, the file descriptor fd of the tmpfs-backed file has to be passed across processes. First, the request side:

class BpGraphicBufferProducer : public BpInterface<IGraphicBufferProducer> {

    virtual status_t dequeueBuffer(int *buf, sp<Fence>* fence, bool async,
            uint32_t w, uint32_t h, uint32_t format, uint32_t usage) {
        Parcel data, reply;
        data.writeInterfaceToken(IGraphicBufferProducer::getInterfaceDescriptor());
        data.writeInt32(async);
        data.writeInt32(w);
        data.writeInt32(h);
        data.writeInt32(format);
        data.writeInt32(usage);
        // Pack the parameters of the wanted buffer into data and send them
        // through BpBinder to the BBinder side
        status_t result = remote()->transact(DEQUEUE_BUFFER, data, &reply);
        if (result != NO_ERROR) {
            return result;
        }
        // What BBinder returns to BpBinder is only an int, not the buffer memory itself
        *buf = reply.readInt32();
        bool nonNull = reply.readInt32();
        if (nonNull) {
            *fence = new Fence();
            reply.read(**fence);
        }
        result = reply.readInt32();
        return result;
    }
};

On the client side, that is in BpGraphicBufferProducer, the core thing DEQUEUE_BUFFER returns is just *buf = reply.readInt32(): an index into the client's mSlots array. The BufferQueue on the server side keeps an mSlots array of its own, and the two correspond one to one; a simplified sketch of the idea follows, and then the server-side handling.
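To make the slot idea concrete, here is a hypothetical, heavily simplified sketch; these types are illustrative only and not the real framework classes. The point is that only the integer slot index travels across Binder, while the heavyweight GraphicBuffer with its ashmem fd is fetched at most once per slot via requestBuffer and then cached on the client:

static const int NUM_BUFFER_SLOTS = 64;

// Illustrative client-side bookkeeping: both processes keep arrays of the same
// size, indexed by the slot number returned from DEQUEUE_BUFFER.
struct ClientSlot {
    bool  hasBuffer;      // false until requestBuffer() has fetched this slot's GraphicBuffer
    void* mappedMemory;   // set once the buffer's fd has been mmap'ed locally
};

struct ClientSideSurface {
    ClientSlot mSlots[NUM_BUFFER_SLOTS];   // mirrors BufferQueueCore's mSlots by index

    // After dequeueBuffer() hands back a slot, the client only follows up with
    // requestBuffer() when that slot has no cached buffer yet (or the result
    // carries BUFFER_NEEDS_REALLOCATION).
    bool needsRequestBuffer(int slot) const {
        return !mSlots[slot].hasBuffer;
    }
};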

status_t BnGraphicBufferProducer::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    ...
    case DEQUEUE_BUFFER: {
        CHECK_INTERFACE(IGraphicBufferProducer, data, reply);
        bool async      = data.readInt32();
        uint32_t w      = data.readInt32();
        uint32_t h      = data.readInt32();
        uint32_t format = data.readInt32();
        uint32_t usage  = data.readInt32();
        int buf;
        sp<Fence> fence;
        // Call BufferQueue's dequeueBuffer, which likewise returns an int buf
        int result = dequeueBuffer(&buf, &fence, async, w, h, format, usage);
        // Write buf and fence into the parcel and send them back to the client over Binder
        reply->writeInt32(buf);
        reply->writeInt32(fence != NULL);
        if (fence != NULL) {
            reply->write(*fence);
        }
        reply->writeInt32(result);
        return NO_ERROR;
    }
    ...
}

As you can see, the BnGraphicBufferProducer side reads the width, height and format, and then uses BufferQueueProducer::dequeueBuffer to obtain a buffer. The memory may already have been allocated, or not yet; if not, new memory is allocated. Each Surface can own several buffers (64 slots in Android 6.0, it seems):

status_t BufferQueueProducer::dequeueBuffer(int *outSlot,
        sp<android::Fence> *outFence, uint32_t width, uint32_t height,
        PixelFormat format, uint32_t usage) {
    ...
    sp<GraphicBuffer> graphicBuffer(mCore->mAllocator->createGraphicBuffer(
            width, height, format, usage,
            {mConsumerName.string(), mConsumerName.size()}, &error));
    ...
}

mCore is the BufferQueueCore seen above, and mCore->mAllocator = new GraphicBufferAlloc(). In the end the GraphicBufferAlloc object is what allocates the shared memory:

sp<GraphicBuffer> GraphicBufferAlloc::createGraphicBuffer(uint32_t width,
        uint32_t height, PixelFormat format, uint32_t usage,
        std::string requestorName, status_t* error) {
    sp<GraphicBuffer> graphicBuffer(new GraphicBuffer(
            width, height, format, usage, std::move(requestorName)));
    status_t err = graphicBuffer->initCheck();
    ...
    return graphicBuffer;
}

As seen above, a GraphicBuffer is simply new-ed to create the image memory:

GraphicBuffer::GraphicBuffer(uint32_t inWidth, uint32_t inHeight,
        PixelFormat inFormat, uint32_t inUsage, std::string requestorName)
    : BASE(), mOwner(ownData), mBufferMapper(GraphicBufferMapper::get()),
      mInitCheck(NO_ERROR), mId(getUniqueId()), mGenerationNumber(0)
{
    ...
    handle = NULL;
    mInitCheck = initSize(inWidth, inHeight, inFormat, inUsage,
            std::move(requestorName));
}

status_t GraphicBuffer::initSize(uint32_t inWidth, uint32_t inHeight,
        PixelFormat inFormat, uint32_t inUsage, std::string requestorName)
{
    GraphicBufferAllocator& allocator = GraphicBufferAllocator::get();
    uint32_t outStride = 0;
    status_t err = allocator.allocate(inWidth, inHeight, inFormat, inUsage,
            &handle, &outStride, mId, std::move(requestorName));
    if (err == NO_ERROR) {
        width  = static_cast<int>(inWidth);
        height = static_cast<int>(inHeight);
        format = inFormat;
        usage  = static_cast<int>(inUsage);
        stride = static_cast<int>(outStride);
    }
    return err;
}

status_t GraphicBufferAllocator::allocate(uint32_t width, uint32_t height,
        PixelFormat format, uint32_t usage, buffer_handle_t* handle,
        uint32_t* stride, uint64_t graphicBufferId, std::string requestorName)
{
    ...
    auto descriptor = mDevice->createDescriptor();
    auto error = descriptor->setDimensions(width, height);
    error = descriptor->setFormat(static_cast<android_pixel_format_t>(format));
    error = descriptor->setProducerUsage(
            static_cast<gralloc1_producer_usage_t>(usage));
    error = descriptor->setConsumerUsage(
            static_cast<gralloc1_consumer_usage_t>(usage));
    error = mDevice->allocate(descriptor, graphicBufferId, handle);
    error = mDevice->getStride(*handle, stride);
    ...
    return NO_ERROR;
}

mDevice in the code above is the hardware abstraction layer device obtained through hw_get_module and gralloc1_open. hw_get_module loads the HAL module, that is the corresponding .so file such as gralloc.default.so, whose implementation lives in hardware/libhardware/modules/gralloc.cpp, and the device's function table is loaded from it. What we care about here is the allocate path. Looking first at the allocation of an ordinary graphics buffer, it ends up in gralloc_alloc_buffer(), which allocates out of anonymous shared memory. The earlier article Android Anonymous Shared Memory (Ashmem) analyzed how Android communicates through anonymous shared memory, and here it is simply put to use:

static int gralloc_alloc_buffer(alloc_device_t* dev,
        size_t size, int usage, buffer_handle_t* pHandle)
{
    int err = 0;
    int fd = -1;
    size = roundUpToPageSize(size);
    // Create the shared memory region, giving it a name and a size
    fd = ashmem_create_region("gralloc-buffer", size);
    if (err == 0) {
        private_handle_t* hnd = new private_handle_t(fd, size, 0);
        gralloc_module_t* module = reinterpret_cast<gralloc_module_t*>(
                dev->common.module);
        // mmap the region into the current process
        err = mapBuffer(module, hnd);
        if (err == 0) {
            *pHandle = hnd;
        }
    }
    return err;
}

mapBuffer then goes through the ashmem driver: the region created above is backed by a file in tmpfs, and mmap opens up the virtual memory mapping for it:

int mapBuffer(gralloc_module_t const* module,
        private_handle_t* hnd)
{
    void* vaddr;
    // vaddr is not actually used by the caller here
    return gralloc_map(module, hnd, &vaddr);
}

static int gralloc_map(gralloc_module_t const* module,
        buffer_handle_t handle,
        void** vaddr)
{
    private_handle_t* hnd = (private_handle_t*)handle;
    if (!(hnd->flags & private_handle_t::PRIV_FLAGS_FRAMEBUFFER)) {
        size_t size = hnd->size;
        void* mappedAddress = mmap(0, size,
                PROT_READ|PROT_WRITE, MAP_SHARED, hnd->fd, 0);
        if (mappedAddress == MAP_FAILED) {
            return -errno;
        }
        hnd->base = intptr_t(mappedAddress) + hnd->offset;
    }
    *vaddr = (void*)hnd->base;
    return 0;
}
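One more piece of background before moving on: how the gralloc module used above is located and opened in the first place. This is a minimal sketch, assuming the classic gralloc v0 entry points hw_get_module and gralloc_open; the GraphicBufferAllocator path shown earlier goes through a Gralloc1 adapter, but it ultimately lands in the same gralloc.default.so:

#include <hardware/hardware.h>
#include <hardware/gralloc.h>

// Minimal sketch: locate the gralloc HAL module and open its alloc device.
// hw_get_module() dlopen()s gralloc.<board>.so, falling back to gralloc.default.so;
// the returned device's alloc() is what eventually reaches gralloc_alloc_buffer().
static alloc_device_t* openGrallocAllocDevice() {
    const hw_module_t* module = nullptr;
    if (hw_get_module(GRALLOC_HARDWARE_MODULE_ID, &module) != 0) {
        return nullptr;
    }
    alloc_device_t* device = nullptr;
    if (gralloc_open(module, &device) != 0) {   // wraps module->methods->open()
        return nullptr;
    }
    return device;   // device->alloc(device, w, h, format, usage, &handle, &stride)
}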

Transfer of View Drawing Memory

Once the buffer has been allocated, BpGraphicBufferProducer::requestBuffer is used next to ask for the shared memory so that it can be mapped into the current process:

virtual status_t requestBuffer(int bufferIdx, sp<GraphicBuffer>* buf) {
    Parcel data, reply;
    data.writeInterfaceToken(IGraphicBufferProducer::getInterfaceDescriptor());
    data.writeInt32(bufferIdx);
    status_t result = remote()->transact(REQUEST_BUFFER, data, &reply);
    if (result != NO_ERROR) {
        return result;
    }
    bool nonNull = reply.readInt32();
    if (nonNull) {
        *buf = new GraphicBuffer();
        reply.read(**buf);
    }
    result = reply.readInt32();
    return result;
}
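On the server side, the matching REQUEST_BUFFER branch (shown here slightly simplified from BnGraphicBufferProducer::onTransact) serializes the whole GraphicBuffer, and with it the buffer's file descriptor, into the reply Parcel:

case REQUEST_BUFFER: {
    CHECK_INTERFACE(IGraphicBufferProducer, data, reply);
    int bufferIdx = data.readInt32();
    sp<GraphicBuffer> buffer;
    int result = requestBuffer(bufferIdx, &buffer);
    reply->writeInt32(buffer != 0);
    if (buffer != 0) {
        // GraphicBuffer::flatten() runs here; the ashmem fd inside its
        // native_handle is written into the Parcel as a file descriptor
        reply->write(*buffer);
    }
    reply->writeInt32(result);
    return NO_ERROR;
}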

The private_handle_t object abstracts a graphics buffer, and it stores the fd of the tmpfs file that backs the shared memory. The GraphicBuffer object is serialized, and this fd is carried to the APP process over Binder. Once the APP side has the fd, it can mmap the shared memory into its own address space and then draw into it. When the APP side deserializes the GraphicBuffer, the shared memory is mmap'ed into the current process:

status_t Parcel::read(Flattenable& val) const
{
    // size
    const size_t len = this->readInt32();
    const size_t fd_count = this->readInt32();

    // payload
    void const* buf = this->readInplace(PAD_SIZE(len));
    if (buf == NULL)
        return BAD_VALUE;

    int* fds = NULL;
    if (fd_count) {
        fds = new int[fd_count];
    }

    status_t err = NO_ERROR;
    for (size_t i = 0; i < fd_count && err == NO_ERROR; i++) {
        // each fd written on the server side is received (and dup'ed) here
        fds[i] = dup(this->readFileDescriptor());
        if (fds[i] < 0) err = BAD_VALUE;
    }

    if (err == NO_ERROR) {
        err = val.unflatten(buf, len, fds, fd_count);
    }

    if (fd_count) {
        delete [] fds;
    }

    return err;
}

This in turn calls GraphicBuffer::unflatten:

status_t GraphicBuffer::unflatten(void const* buffer, size_t size,
        int fds[], size_t count)
{
    ...
    mOwner = ownHandle;
    if (handle != 0) {
        status_t err = mBufferMapper.registerBuffer(handle);
        ...
    }
    return NO_ERROR;
}

The mBufferMapper.registerBuffer function corresponds to gralloc_register_buffer:

struct private_module_t HAL_MODULE_INFO_SYM = {
    .base = {
        .common = {
            .tag = HARDWARE_MODULE_TAG,
            .version_major = 1,
            .version_minor = 0,
            .id = GRALLOC_HARDWARE_MODULE_ID,
            .name = "Graphics Memory Allocator Module",
            .author = "The Android Open Source Project",
            .methods = &gralloc_module_methods
        },
        .registerBuffer = gralloc_register_buffer,
        .unregisterBuffer = gralloc_unregister_buffer,
        .lock = gralloc_lock,
        .unlock = gralloc_unlock,
    },
    .framebuffer = 0,
    .flags = 0,
    .numBuffers = 0,
    .bufferMask = 0,
    .lock = PTHREAD_MUTEX_INITIALIZER,
    .currentBuffer = 0,
};

In the end gralloc_register_buffer is called, and mmap genuinely maps the tmpfs file into the process address space:

static int gralloc_register_buffer(gralloc_module_t const* module,
                                   buffer_handle_t handle)
{
    ...
    if (cb->ashmemSize > 0 && cb->mappedPid != getpid()) {
        void *vaddr;
        int err = map_buffer(cb, &vaddr);
        cb->mappedPid = getpid();
    }
    return 0;
}

Here, finally, the descriptor of the tmpfs file, cb->fd, is put to use:

static int map_buffer(cb_handle_t *cb, void **vaddr)
{
    if (cb->fd < 0 || cb->ashmemSize <= 0) {
        return -EINVAL;
    }
    void *addr = mmap(0, cb->ashmemSize, PROT_READ | PROT_WRITE,
                      MAP_SHARED, cb->fd, 0);
    cb->ashmemBase = intptr_t(addr);
    cb->ashmemBasePid = getpid();
    *vaddr = addr;
    return 0;
}

With that, the memory has been handed over, and the App side can use this block of memory for drawing.

Use of View Drawing Memory

As for how the memory is used, go back to the Surface lock function seen earlier. After the memory has been deserialized and its address is in hand, it is wrapped into an ANativeWindow_Buffer and returned to the upper layers:

status_t Surface::lock(
        ANativeWindow_Buffer* outBuffer, ARect* inOutDirtyBounds)
{
    ...
        void* vaddr;
        status_t res = backBuffer->lock(
                GRALLOC_USAGE_SW_READ_OFTEN | GRALLOC_USAGE_SW_WRITE_OFTEN,
                newDirtyRegion.bounds(), &vaddr);
        if (res != 0) {
            err = INVALID_OPERATION;
        } else {
            mLockedBuffer = backBuffer;
            outBuffer->width  = backBuffer->width;
            outBuffer->height = backBuffer->height;
            outBuffer->stride = backBuffer->stride;
            outBuffer->format = backBuffer->format;
            outBuffer->bits   = vaddr;
        }
    }
    return err;
}

The ANativeWindow_Buffer data structure is shown below; its bits field holds the virtual memory address:

typedef struct ANativeWindow_Buffer {
    // The number of pixels that are shown horizontally.
    int32_t width;

    // The number of pixels that are shown vertically.
    int32_t height;

    // The number of *pixels* that a line in the buffer takes in
    // memory.  This may be >= width.
    int32_t stride;

    // The format of the buffer.  One of WINDOW_FORMAT_*
    int32_t format;

    // The actual bits.
    void* bits;

    // Do not touch.
    uint32_t reserved[6];
} ANativeWindow_Buffer;
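As a hypothetical illustration of how bits and stride relate (this helper is not part of the framework, just a sketch): a software renderer walks the mapped buffer row by row, where stride, counted in pixels, may be larger than width.

#include <android/native_window.h>
#include <stdint.h>

// Hypothetical helper: fill a locked RGBA_8888 buffer with one solid color.
// bits points at the start of the mapped shared memory; each row is
// 'stride' pixels long even though only 'width' of them are visible.
static void fillSolidColor(const ANativeWindow_Buffer& buffer, uint32_t rgba) {
    uint32_t* pixels = static_cast<uint32_t*>(buffer.bits);
    for (int32_t y = 0; y < buffer.height; ++y) {
        uint32_t* row = pixels + y * buffer.stride;   // advance by stride, not width
        for (int32_t x = 0; x < buffer.width; ++x) {
            row[x] = rgba;
        }
    }
}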

To see how the framework itself uses it, look at how the Canvas used for drawing is set up:

static void nativeLockCanvas(JNIEnv* env, jclass clazz,
        jint nativeObject, jobject canvasObj, jobject dirtyRectObj) {
    sp<Surface> surface(reinterpret_cast<Surface*>(nativeObject));
    ...
    status_t err = surface->lock(&outBuffer, &dirtyBounds);
    ...
    SkBitmap bitmap;
    ssize_t bpr = outBuffer.stride * bytesPerPixel(outBuffer.format);
    bitmap.setConfig(convertPixelFormat(outBuffer.format), outBuffer.width, outBuffer.height, bpr);
    if (outBuffer.format == PIXEL_FORMAT_RGBX_8888) {
        bitmap.setIsOpaque(true);
    }
    if (outBuffer.width > 0 && outBuffer.height > 0) {
        bitmap.setPixels(outBuffer.bits);
    } else {
        // be safe with an empty bitmap.
        bitmap.setPixels(NULL);
    }
    SkCanvas* nativeCanvas = SkNEW_ARGS(SkCanvas, (bitmap));
    swapCanvasPtr(env, canvasObj, nativeCanvas);
    ...
}

For 2D drawing, the Skia library fills the shared memory backing the Bitmap, which completes the drawing. This article does not go into the Skia library; analyze it on your own if you are interested. Once drawing is done, unlockCanvasAndPost notifies the SurfaceFlinger service to composite the layers.
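The unlock side is worth a quick look as well. Roughly, and simplified from Surface::unlockAndPost in Surface.cpp, it unlocks the back buffer and queues it back to the BufferQueue, which is what wakes SurfaceFlinger up to consume and composite it; the just-posted buffer is also remembered as mPostedBuffer for the copy-back discussed in the next section:

status_t Surface::unlockAndPost() {
    if (mLockedBuffer == 0) {
        ALOGE("Surface::unlockAndPost failed, no locked buffer");
        return INVALID_OPERATION;
    }
    int fd = -1;
    status_t err = mLockedBuffer->unlockAsync(&fd);   // release the CPU mapping lock
    ALOGE_IF(err, "failed unlocking buffer (%p)", mLockedBuffer->handle);
    // Hand the filled buffer back to the BufferQueue; the consumer side
    // (SurfaceFlingerConsumer) is notified and will composite it
    err = queueBuffer(mLockedBuffer.get(), fd);
    mPostedBuffer = mLockedBuffer;   // remembered so the next lock() can copy back from it
    mLockedBuffer = 0;
    return err;
}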

How Android View Partial Redraw Works

Take TextView as an example: when its content changes, a redraw is triggered. If the current view hierarchy also contains other Views, often only the TextView and its parent chain are redrawn, while the other Views are not. Why is that, and how can the UI data handed to SurfaceFlinger still be complete? The answer is that lockCanvas, by default, performs one extra data copy: the previously drawn UI data is copied into the newly requested buffer, and the new drawing starts after that copy, that is, only the dirty region is redrawn on top of the previous frame:

status_t Surface::lock(
        ANativeWindow_Buffer* outBuffer, ARect* inOutDirtyBounds)
{
    ...
    status_t err = dequeueBuffer(&out, &fenceFd);
    ALOGE_IF(err, "dequeueBuffer failed (%s)", strerror(-err));
    if (err == NO_ERROR) {
        sp<GraphicBuffer> backBuffer(GraphicBuffer::getSelf(out));
        const Rect bounds(backBuffer->width, backBuffer->height);
        ...
        const sp<GraphicBuffer>& frontBuffer(mPostedBuffer);
        const bool canCopyBack = (frontBuffer != 0 &&
                backBuffer->width  == frontBuffer->width &&
                backBuffer->height == frontBuffer->height &&
                backBuffer->format == frontBuffer->format);
        // Can the previous content be copied into this backBuffer? Only if the
        // two buffers have exactly the same geometry and format
        if (canCopyBack) {
            // copy the area that is invalid and not repainted this round
            const Region copyback(mDirtyRegion.subtract(newDirtyRegion));
            if (!copyback.isEmpty()) {
                // copy the old content over
                copyBlt(backBuffer, frontBuffer, copyback, &fenceFd);
            }
        } else {
            // If copying is not possible, the whole buffer is treated as dirty and redrawn
            newDirtyRegion.set(bounds);
            mDirtyRegion.clear();
            Mutex::Autolock lock(mMutex);
            for (size_t i = 0; i < NUM_BUFFER_SLOTS; i++) {
                mSlots[i].dirtyRegion.clear();
            }
        }
        ...

So the memory obtained through lockCanvas is either pre-filled with the UI data of the previous frame, or the whole buffer is redrawn. If it was pre-filled, only the views touching the dirty region need to be drawn this time. That is the principle behind Android's partial redraw.
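The copy itself, copyBlt, is essentially a per-rectangle memcpy between the two mapped buffers. Roughly, simplified from Surface.cpp with error handling and fence handling omitted:

static void copyBlt(const sp<GraphicBuffer>& dst, const sp<GraphicBuffer>& src,
        const Region& reg) {
    // Lock both buffers for CPU access; src and dst are assumed to have the
    // same width, height and format.
    uint8_t* src_bits = NULL;
    src->lock(GRALLOC_USAGE_SW_READ_OFTEN, reg.bounds(),
            reinterpret_cast<void**>(&src_bits));
    uint8_t* dst_bits = NULL;
    dst->lock(GRALLOC_USAGE_SW_WRITE_OFTEN, reg.bounds(),
            reinterpret_cast<void**>(&dst_bits));

    const size_t bpp  = bytesPerPixel(src->format);
    const size_t dbpr = dst->stride * bpp;   // destination bytes per row
    const size_t sbpr = src->stride * bpp;   // source bytes per row

    // Copy every rectangle of the not-redrawn-this-round region, row by row
    for (Region::const_iterator it = reg.begin(); it != reg.end(); ++it) {
        const Rect& r(*it);
        const size_t rowBytes = r.width() * bpp;
        uint8_t const* s = src_bits + (r.left + src->stride * r.top) * bpp;
        uint8_t*       d = dst_bits + (r.left + dst->stride * r.top) * bpp;
        for (int y = 0; y < r.height(); ++y) {
            memcpy(d, s, rowBytes);
            d += dbpr;
            s += sbpr;
        }
    }

    src->unlock();
    dst->unlock();
}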

Summary

Android View drawing is built on top of anonymous shared memory. By sharing memory between the APP side and SurfaceFlinger, copying of the view data is avoided, which improves the system's ability to process views.

Author: 看书的小蜗牛
Original link: Android窗口管理分析(4):Android View绘制内存的分配、传递、使用
For reference only; corrections are welcome
