Android Binder: A Native-Layer Analysis
1 Introduction
Binder is an inter-process communication (IPC) mechanism provided by Android. Since Android is built on the Linux kernel, several other IPC mechanisms exist besides Binder; for those, see my earlier article, "Linux already has several IPC mechanisms — why did Google pick Binder as Android's primary IPC?".
For many of us developers, Binder is probably both the hardest part of Android to master and the one we most want to. The entire Android system can be viewed as a Binder-based C/S (client/server) architecture: Binder connects the various parts of the system together, which shows just how important it is.
As a C/S-style IPC mechanism, Binder involves not only a Client and a Server but also a global ServiceManager that coordinates everything: it acts as a housekeeper managing the various services in the system. The relationship among the three can be depicted by the following diagram:
From the diagram above we can draw the following conclusions:
- A Service must first register itself with the ServiceManager (SM), so the Service is a client of SM, and SM is the server.
- A Client that wants to use a service must first obtain it from SM, so the Client is also a client of SM, and SM is again the server (a minimal lookup sketch follows this list).
- With the Service information obtained from SM, the Client can communicate with the process hosting the Service, so the Client is also a client of the Service.
- Finally, and most importantly, the communication among all three is itself based on Binder, so analyzing any one of these paths will reveal how Binder works.
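To make the Client/SM relationship concrete, here is a minimal sketch, assuming the AOSP libbinder headers, of how a native client fetches a service from the ServiceManager. It looks up "media.player", which is exactly the name mediaserver registers later in this article:

```cpp
#include <binder/IServiceManager.h>
#include <binder/IBinder.h>
#include <utils/String16.h>

using namespace android;

int main() {
    // Ask the ServiceManager (the global "housekeeper") for a service by name.
    sp<IServiceManager> sm = defaultServiceManager();
    // checkService() is the non-blocking lookup declared in IServiceManager.h.
    sp<IBinder> binder = sm->checkService(String16("media.player"));
    // The returned IBinder is a proxy (a BpBinder, as analyzed below); a real
    // client would wrap it with interface_cast<IMediaPlayerService>(binder).
    return binder != nullptr ? 0 : 1;
}
```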
2 Registering mediaserver
frameworks/av/media/mediaserver/main_mediaserver.cpp
The entry point for registering mediaserver is the main function in main_mediaserver.cpp. Let's look at its implementation:
```cpp
int main(int argc __unused, char **argv __unused)
{
    signal(SIGPIPE, SIG_IGN);

    sp<ProcessState> proc(ProcessState::self());     // 1
    sp<IServiceManager> sm(defaultServiceManager()); // 2
    ALOGI("ServiceManager: %p", sm.get());
    ...
    MediaPlayerService::instantiate();               // 3
    ...
    ProcessState::self()->startThreadPool();         // 4
    IPCThreadState::self()->joinThreadPool();        // 5
}
```
- Comment 1: create the ProcessState object
- Comment 2: obtain the IServiceManager
- Comment 3: register MediaPlayerService with the ServiceManager — the main entry point of our analysis
- Comment 4: start the thread pool
- Comment 5: join the thread pool
2.1 ProcessState
frameworks/native/libs/binder/ProcessState.cpp
```cpp
sp<ProcessState> ProcessState::self()
{
    Mutex::Autolock _l(gProcessMutex);
    if (gProcess != NULL) {
        return gProcess;
    }
    gProcess = new ProcessState("/dev/binder");
    return gProcess;
}
```
This returns the ProcessState object. self() uses the singleton pattern, which guarantees that each process has exactly one ProcessState object. Now let's see what the ProcessState constructor does:
```cpp
ProcessState::ProcessState(const char *driver)
    : mDriverName(String8(driver))
    , mDriverFD(open_driver(driver)) // 1
    , mVMStart(MAP_FAILED)           // 2
    , mThreadCountLock(PTHREAD_MUTEX_INITIALIZER)
    , mThreadCountDecrement(PTHREAD_COND_INITIALIZER)
    , mExecutingThreadsCount(0)
    , mMaxThreads(DEFAULT_MAX_BINDER_THREADS)
    , mStarvationStartTimeMs(0)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0); // 3
        if (mVMStart == MAP_FAILED) {
            // *sigh*
            ALOGE("Using %s failed: unable to mmap transaction memory.\n", mDriverName.c_str());
            close(mDriverFD);
            mDriverFD = -1;
            mDriverName.clear();
        }
    }

    LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver could not be opened. Terminating.");
}
```
- Comment 1: open the /dev/binder device
- Comment 2: initialize mVMStart, the address where binder will be mapped into memory (MAP_FAILED until the mmap succeeds)
- Comment 3: via mmap, allocate a block of memory for receiving data by mapping /dev/binder into the process's address space; the call returns the start address of the mapping (a stripped-down sketch of these syscalls follows)
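Stripped of the ProcessState machinery, the driver setup boils down to three syscalls: open, ioctl, and mmap. Below is a minimal sketch, assuming a Linux build with the binder UAPI header available; the mapping size mirrors libbinder's BINDER_VM_SIZE of 1 MB minus two pages:

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/android/binder.h> // binder UAPI: BINDER_VERSION, binder_version, ...
#include <cstdio>

int main() {
    // 1. Open the driver, as open_driver() does.
    int fd = open("/dev/binder", O_RDWR | O_CLOEXEC);
    if (fd < 0) { perror("open /dev/binder"); return 1; }

    // 2. Check the protocol version, mirroring open_driver().
    struct binder_version vers {};
    if (ioctl(fd, BINDER_VERSION, &vers) < 0 ||
        vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION) {
        fprintf(stderr, "binder version mismatch\n");
        close(fd);
        return 1;
    }

    // 3. Map the receive buffer, as the ProcessState constructor does.
    //    libbinder defines BINDER_VM_SIZE as (1 MB - 2 pages).
    size_t vmSize = 1 * 1024 * 1024 - sysconf(_SC_PAGE_SIZE) * 2;
    void* vmStart = mmap(nullptr, vmSize, PROT_READ,
                         MAP_PRIVATE | MAP_NORESERVE, fd, 0);
    if (vmStart == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    printf("binder mapped at %p (protocol %d)\n", vmStart, vers.protocol_version);
    munmap(vmStart, vmSize);
    close(fd);
    return 0;
}
```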
Let's see what open_driver does:
```cpp
#define DEFAULT_MAX_BINDER_THREADS 15

static int open_driver(const char *driver)
{
    int fd = open(driver, O_RDWR | O_CLOEXEC); // 1
    if (fd >= 0) {
        int vers = 0;
        status_t result = ioctl(fd, BINDER_VERSION, &vers);
        if (result == -1) {
            ALOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
            close(fd);
            fd = -1;
        }
        if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
            ALOGE("Binder driver protocol(%d) does not match user space protocol(%d)! ioctl() return value: %d",
                  vers, BINDER_CURRENT_PROTOCOL_VERSION, result);
            close(fd);
            fd = -1;
        }
        size_t maxThreads = DEFAULT_MAX_BINDER_THREADS;
        result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads); // 2
        if (result == -1) {
            ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
        }
    } else {
        ALOGW("Opening '%s' failed: %s\n", driver, strerror(errno));
    }
    return fd;
}
```
- Comment 1: open the binder device with the open function
- Comment 2: via ioctl, tell the binder driver that this process supports at most DEFAULT_MAX_BINDER_THREADS (15) binder threads
This completes the analysis of ProcessState::self; its main work is to:
- Open /dev/binder, establishing a channel for interacting with the kernel
- Have the binder driver allocate, via mmap, a block of memory for receiving data
- Use the singleton pattern so that each process opens the device only once
With ProcessState analyzed, let's turn to defaultServiceManager.
2.2 defaultServiceManager
frameworks/native/libs/binder/IServiceManager.cpp
```cpp
sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        while (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
            if (gDefaultServiceManager == NULL)
                sleep(1);
        }
    }

    return gDefaultServiceManager;
}
```
As defaultServiceManager shows, the singleton pattern is used again. The creation of gDefaultServiceManager boils down to `interface_cast<IServiceManager>(ProcessState::self()->getContextObject(NULL))`. Let's break that down. From the earlier analysis we know that ProcessState::self() returns the ProcessState object, so the part to examine is getContextObject(NULL) — note that its argument is NULL:
```cpp
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    return getStrongProxyForHandle(0);
}

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;
    int count = 0;
    ...
    handle_entry* e = lookupHandleLocked(handle); // 1

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one. See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {
                // Special case for context manager...
                // The context manager is the only object for which we create
                // a BpBinder proxy without already holding a reference.
                // Perform a dummy transaction to ensure the context manager
                // is registered before we create the first local reference
                // to it (which will occur when creating the BpBinder).
                // If a local reference is created for the BpBinder when the
                // context manager is not present, the driver will fail to
                // provide a reference to the context manager, but the
                // driver API does not return status.
                //
                // Note that this is not race-free if the context manager
                // dies while this code runs.
                //
                // TODO: add a driver API to wait for context manager, or
                // stop special casing handle 0 for context manager and add
                // a driver API to get a handle to the context manager with
                // proper reference counting.
                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0);
                if (status == DEAD_OBJECT) {
                    mLock.unlock();
                    return NULL;
                }
            }

            b = BpBinder::create(handle); // 2
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    mLock.unlock();
    return result;
}
```
- Comment 1: look up the resource entry (handle_entry) corresponding to the handle
- Comment 2: create the BpBinder
What is BpBinder? It is the proxy class the client uses to interact with the server, and it inherits from IBinder.
At this point we can conclude that `interface_cast<IServiceManager>(ProcessState::self()->getContextObject(NULL))` is equivalent to `interface_cast<IServiceManager>(BpBinder(0))`. Next, what does interface_cast do?
frameworks/native/libs/binder/include/binder/IInterface.h
```cpp
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}
```
This is just a template function; in our case it is equivalent to the following code:
```cpp
inline sp<IServiceManager> interface_cast(const sp<IBinder>& obj)
{
    return IServiceManager::asInterface(obj);
}
```
So the code now leads us into IServiceManager.h:
```cpp
class IServiceManager : public IInterface
{
public:
    DECLARE_META_INTERFACE(ServiceManager) // the key macro
    ...
    /**
     * Retrieve an existing service, blocking for a few seconds
     * if it doesn't yet exist.
     */
    virtual sp<IBinder> getService(const String16& name) const = 0;

    /**
     * Retrieve an existing service, non-blocking.
     */
    virtual sp<IBinder> checkService(const String16& name) const = 0;

    /**
     * Register a service.
     */
    virtual status_t addService(const String16& name, const sp<IBinder>& service,
                                bool allowIsolated = false,
                                int dumpsysFlags = DUMP_FLAG_PRIORITY_DEFAULT) = 0;

    /**
     * Return list of all existing services.
     */
    virtual Vector<String16> listServices(int dumpsysFlags = DUMP_FLAG_PRIORITY_ALL) = 0;
    ...
};
```
Let's see what this macro expands to:
```cpp
#define DECLARE_META_INTERFACE(INTERFACE)                               \
    static const ::android::String16 descriptor;                        \
    static ::android::sp<I##INTERFACE> asInterface(                     \
            const ::android::sp<::android::IBinder>& obj);              \
    virtual const ::android::String16& getInterfaceDescriptor() const;  \
    I##INTERFACE();                                                     \
    virtual ~I##INTERFACE();                                            \
```

With INTERFACE = ServiceManager, this is equivalent to:

```cpp
static const ::android::String16 descriptor;
static ::android::sp<IServiceManager> asInterface(
        const ::android::sp<::android::IBinder>& obj);
virtual const ::android::String16& getInterfaceDescriptor() const;
IServiceManager();
virtual ~IServiceManager();
```
DECLARE_META_INTERFACE only declares functions and variables, so we still need their definitions. In IServiceManager.cpp we find the IMPLEMENT_META_INTERFACE macro — another substitution. Here it is:
```cpp
#define IMPLEMENT_META_INTERFACE(INTERFACE, NAME)                       \
    const ::android::String16 I##INTERFACE::descriptor(NAME);           \
    const ::android::String16&                                          \
            I##INTERFACE::getInterfaceDescriptor() const {              \
        return I##INTERFACE::descriptor;                                \
    }                                                                   \
    ::android::sp<I##INTERFACE> I##INTERFACE::asInterface(              \
            const ::android::sp<::android::IBinder>& obj)               \
    {                                                                   \
        ::android::sp<I##INTERFACE> intr;                               \
        if (obj != NULL) {                                              \
            intr = static_cast<I##INTERFACE*>(                          \
                obj->queryLocalInterface(                               \
                        I##INTERFACE::descriptor).get());               \
            if (intr == NULL) {                                         \
                intr = new Bp##INTERFACE(obj);                          \
            }                                                           \
        }                                                               \
        return intr;                                                    \
    }                                                                   \
    I##INTERFACE::I##INTERFACE() { }                                    \
    I##INTERFACE::~I##INTERFACE() { }                                   \
```

With INTERFACE = ServiceManager and NAME = "android.os.IServiceManager", this expands to:

```cpp
const ::android::String16 IServiceManager::descriptor("android.os.IServiceManager");

const ::android::String16& IServiceManager::getInterfaceDescriptor() const {
    return IServiceManager::descriptor;
}

::android::sp<IServiceManager> IServiceManager::asInterface(
        const ::android::sp<::android::IBinder>& obj)
{
    ::android::sp<IServiceManager> intr;
    if (obj != NULL) {
        intr = static_cast<IServiceManager*>(
            obj->queryLocalInterface(IServiceManager::descriptor).get());
        if (intr == NULL) {
            intr = new BpServiceManager(obj);
        }
    }
    return intr;
}

IServiceManager::IServiceManager() { }
IServiceManager::~IServiceManager() { }
```
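To see the macro pair in context, here is a hedged sketch of a hypothetical custom interface built on the same machinery; IMyService, BpMyService, and the descriptor string are invented names, not part of AOSP:

```cpp
#include <binder/IInterface.h>
#include <binder/Parcel.h>

using namespace android;

// IMyService.h -- a hypothetical interface using the same macro.
class IMyService : public IInterface {
public:
    DECLARE_META_INTERFACE(MyService) // declares descriptor, asInterface(), ...
    virtual status_t hello() = 0;
};

// The client-side proxy that asInterface() instantiates when the
// IBinder it receives is a remote BpBinder.
class BpMyService : public BpInterface<IMyService> {
public:
    explicit BpMyService(const sp<IBinder>& impl) : BpInterface<IMyService>(impl) {}
    status_t hello() override {
        Parcel data, reply;
        data.writeInterfaceToken(IMyService::getInterfaceDescriptor());
        return remote()->transact(IBinder::FIRST_CALL_TRANSACTION, data, &reply);
    }
};

// IMyService.cpp -- generates descriptor, getInterfaceDescriptor() and
// asInterface() for IMyService, referencing the BpMyService declared above.
IMPLEMENT_META_INTERFACE(MyService, "com.example.IMyService")
```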
After these layers of transformation we can conclude that `gDefaultServiceManager = interface_cast<IServiceManager>(ProcessState::self()->getContextObject(NULL))` is equivalent to `gDefaultServiceManager = new BpServiceManager(new BpBinder(0))`. So defaultServiceManager yields a BpServiceManager. Let's look at BpServiceManager's constructor:
```cpp
// @IServiceManager.cpp (BpServiceManager)
explicit BpServiceManager(const sp<IBinder>& impl)
    : BpInterface<IServiceManager>(impl) // impl is the BpBinder
{
}

// @IInterface.h
template<typename INTERFACE>
inline BpInterface<INTERFACE>::BpInterface(const sp<IBinder>& remote)
    : BpRefBase(remote)
{
}

// @Binder.cpp
BpRefBase::BpRefBase(const sp<IBinder>& o)
    : mRemote(o.get()), mRefs(NULL), mState(0)
{
    extendObjectLifetime(OBJECT_LIFETIME_WEAK);

    if (mRemote) {
        mRemote->incStrong(this);           // Removed on first IncStrong().
        mRefs = mRemote->createWeak(this);  // Held for our entire lifetime.
    }
}
```
Through this chain of calls we can see that BpBinder is a subclass of IBinder, and that mRemote holds exactly the BpBinder(0) created earlier.
So far we have only opened the binder device — we have not yet seen anyone actually talk to it. Let's keep going and look at the registration of MediaPlayerService.
2.3 MediaPlayerService::instantiate()
frameworks/av/media/libmediaplayerservice/MediaPlayerService.cpp
```cpp
void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.player"), new MediaPlayerService());
}
```
From the earlier analysis, defaultServiceManager() effectively returns a BpServiceManager, so we can go straight to its addService method:
```cpp
virtual status_t addService(const String16& name, const sp<IBinder>& service,
                            bool allowIsolated, int dumpsysPriority) {
    Parcel data, reply; // a Parcel is essentially a data packet
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    data.writeStrongBinder(service);
    data.writeInt32(allowIsolated ? 1 : 0);
    data.writeInt32(dumpsysPriority);
    // remote() returns mRemote, i.e. the BpBinder
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}
```
Now for BpBinder's transact. As noted earlier, BpBinder itself contains no interaction with the binder driver — so how does it communicate? The transact function makes this clear:
```cpp
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}
```
So BpBinder is merely a front: it hands the transact work over to IPCThreadState. Let's see what IPCThreadState actually does:
```cpp
IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS) { // false on first entry
restart:
        const pthread_key_t k = gTLS;
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
        if (st) return st;
        return new IPCThreadState; // initialize the IPCThreadState
    }

    if (gShutdown) {
        ALOGW("Calling IPCThreadState::self() during shutdown is dangerous, expect a crash.\n");
        return NULL;
    }

    pthread_mutex_lock(&gTLSMutex);
    if (!gHaveTLS) { // gHaveTLS is false on first entry
        int key_create_value = pthread_key_create(&gTLS, threadDestructor); // create the thread's TLS key
        if (key_create_value != 0) {
            pthread_mutex_unlock(&gTLSMutex);
            ALOGW("IPCThreadState::self() unable to create TLS key, expect a crash: %s\n",
                    strerror(key_create_value));
            return NULL;
        }
        gHaveTLS = true;
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}
```
TLS stands for Thread Local Storage. Each thread owns its own private TLS area that is not shared with other threads; pthread_getspecific/pthread_setspecific read and write its contents. Here, the IPCThreadState object for the current thread is stored in and retrieved from TLS.
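As a standalone illustration of this TLS pattern (a hedged sketch independent of Binder; PerThreadState is a made-up stand-in for IPCThreadState):

```cpp
#include <pthread.h>
#include <cstdio>

// One key shared by all threads; each thread has its own private slot under it.
static pthread_key_t gKey;
static pthread_once_t gOnce = PTHREAD_ONCE_INIT;

struct PerThreadState { int callCount = 0; };

static void destroyState(void* p) { delete static_cast<PerThreadState*>(p); }
static void makeKey() { pthread_key_create(&gKey, destroyState); }

// Mirrors the shape of IPCThreadState::self(): return the calling
// thread's object, creating it lazily on first use.
PerThreadState* self() {
    pthread_once(&gOnce, makeKey);
    auto* st = static_cast<PerThreadState*>(pthread_getspecific(gKey));
    if (st == nullptr) {
        st = new PerThreadState();
        pthread_setspecific(gKey, st);
    }
    return st;
}

int main() {
    self()->callCount++;
    printf("callCount = %d\n", self()->callCount); // prints 1
    return 0;
}
```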
Next, the IPCThreadState constructor:
```cpp
IPCThreadState::IPCThreadState()
    : mProcess(ProcessState::self()), // 1
      mStrictModePolicy(0),
      mLastTransactionBinderFlags(0)
{
    pthread_setspecific(gTLS, this); // 2
    clearCaller();
    mIn.setDataCapacity(256);  // 3
    mOut.setDataCapacity(256); // 4
}
```
- Comment 1: store the ProcessState object; there is only one per process
- Comment 2: store this IPCThreadState in the thread's TLS; it is not shared between threads
- Comment 3: set the capacity of the receive buffer (mIn) to 256 bytes
- Comment 4: set the capacity of the send buffer (mOut) to 256 bytes
Now the transact function:
```cpp
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err;

    flags |= TF_ACCEPT_FDS;

    err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL); // 1

    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }

    if ((flags & TF_ONE_WAY) == 0) {
        if (reply) {
            err = waitForResponse(reply); // 2
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        #if 0
        if (code == 4) { // relayout
            ALOGI("<<<<<< RETURNING transaction 4");
        } else {
            ALOGI("<<<<<< RETURNING transaction %d", code);
        }
        #endif
    } else {
        err = waitForResponse(NULL, NULL); // 3
    }

    return err;
}
```
- Comment 1: package up the data to be sent
- Comment 2: wait for the result
- Comment 3: the TF_ONE_WAY case, where no reply is expected (see the sketch below)
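As a small illustration of the oneway case, here is a hedged sketch using the public IBinder API (MY_CODE is a hypothetical transaction code):

```cpp
#include <binder/IBinder.h>
#include <binder/Parcel.h>

using namespace android;

// FLAG_ONEWAY at this level becomes TF_ONE_WAY in the driver protocol:
// the caller returns without blocking for a reply.
const uint32_t MY_CODE = IBinder::FIRST_CALL_TRANSACTION + 1; // hypothetical

void sendOneway(const sp<IBinder>& binder) {
    Parcel data;
    binder->transact(MY_CODE, data, nullptr, IBinder::FLAG_ONEWAY);
}
```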
First, the implementation of writeTransactionData:
```cpp
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr; // the data structure of binder communication

    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle; // handle identifies the destination
    tr.code = code;            // transaction code; here code = ADD_SERVICE_TRANSACTION
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    } else {
        return (mLastError = err);
    }

    mOut.writeInt32(cmd);        // cmd = BC_TRANSACTION
    mOut.write(&tr, sizeof(tr)); // write the binder_transaction_data into mOut
    return NO_ERROR;
}
```
As the function shows, writeTransactionData simply writes the command plus the binder_transaction_data structure into mOut (a Parcel). The actual sending and receiving happens in waitForResponse:
```cpp
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break; // 1
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = (uint32_t)mIn.readInt32();

        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;
        ...
        default:
            err = executeCommand(cmd); // 2
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}
```
The function at comment 1, talkWithDriver, is about as direct as a name gets — talk with the driver. Let's see what it actually does:
```cpp
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }

    binder_write_read bwr;

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();

    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        // Set up the receive buffer; any incoming data lands in mIn.
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    // Return immediately if there is nothing to do:
    // both the read and write buffers are empty.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
#if defined(__ANDROID__)
        // Talk to the binder driver directly via ioctl.
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    } while (err == -EINTR);

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else {
                mOut.setDataSize(0);
                processPostWriteDerefs();
            }
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        return NO_ERROR;
    }

    return err;
}
```
The analysis of talkWithDriver shows that data is exchanged with the binder driver directly through ioctl; mProcess->mDriverFD is the binder file descriptor opened earlier in ProcessState.
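To make this concrete at the syscall level, here is a hedged sketch of a single BINDER_WRITE_READ round trip, assuming the Linux binder UAPI header and a running servicemanager. It pings handle 0 (the context manager) — the same dummy transaction getStrongProxyForHandle performs; PING_TRANSACTION is spelled out here because it lives in IBinder.h rather than the UAPI header:

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/android/binder.h>
#include <cstdio>
#include <cstdint>

int main() {
    int fd = open("/dev/binder", O_RDWR | O_CLOEXEC);
    if (fd < 0) { perror("open"); return 1; }
    // The driver needs the receive mapping before it can deliver replies.
    void* vm = mmap(nullptr, 128 * 1024, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, fd, 0);
    if (vm == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    // Write stream layout, exactly as writeTransactionData() builds it:
    // a 32-bit BC_TRANSACTION command followed by a binder_transaction_data.
    struct __attribute__((packed)) {
        uint32_t cmd;
        struct binder_transaction_data tr;
    } writebuf{};
    writebuf.cmd = BC_TRANSACTION;
    writebuf.tr.target.handle = 0; // handle 0 = the context manager (ServiceManager)
    writebuf.tr.code = B_PACK_CHARS('_', 'P', 'N', 'G'); // PING_TRANSACTION from IBinder.h

    struct binder_write_read bwr{};
    bwr.write_size   = sizeof(writebuf);
    bwr.write_buffer = (binder_uintptr_t)&writebuf;

    // Room for the driver's return commands (BR_NOOP, BR_TRANSACTION_COMPLETE,
    // BR_REPLY, ...), which talkWithDriver() would leave in mIn for parsing.
    uint32_t readbuf[32];
    bwr.read_size   = sizeof(readbuf);
    bwr.read_buffer = (binder_uintptr_t)readbuf;

    if (ioctl(fd, BINDER_WRITE_READ, &bwr) < 0) perror("BINDER_WRITE_READ");
    printf("consumed: write %llu, read %llu bytes\n",
           (unsigned long long)bwr.write_consumed,
           (unsigned long long)bwr.read_consumed);
    close(fd);
    return 0;
}
```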
Back to comment 2 in waitForResponse. talkWithDriver has already sent the data out; suppose a reply comes back right away — how is it handled? Look at executeCommand:
```cpp
status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;

    switch ((uint32_t)cmd) {
    case BR_ERROR:
        result = mIn.readInt32();
        break;
    ...
    case BR_TRANSACTION:
        {
            binder_transaction_data tr;
            result = mIn.read(&tr, sizeof(tr));
            ALOG_ASSERT(result == NO_ERROR,
                "Not enough command data for brTRANSACTION");
            if (result != NO_ERROR) break;

            Parcel buffer;
            buffer.ipcSetDataReference(
                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                tr.data_size,
                reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                tr.offsets_size/sizeof(binder_size_t), freeBuffer, this);

            const pid_t origPid = mCallingPid;
            const uid_t origUid = mCallingUid;
            const int32_t origStrictModePolicy = mStrictModePolicy;
            const int32_t origTransactionBinderFlags = mLastTransactionBinderFlags;

            mCallingPid = tr.sender_pid;
            mCallingUid = tr.sender_euid;
            mLastTransactionBinderFlags = tr.flags;

            //ALOGI(">>>> TRANSACT from pid %d uid %d\n", mCallingPid, mCallingUid);

            Parcel reply;
            status_t error;
            if (tr.target.ptr) {
                // We only have a weak reference on the target object, so we must first try to
                // safely acquire a strong reference before doing anything else with it.
                if (reinterpret_cast<RefBase::weakref_type*>(
                        tr.target.ptr)->attemptIncStrong(this)) {
                    error = reinterpret_cast<BBinder*>(tr.cookie)->transact(tr.code, buffer,
                            &reply, tr.flags); // 1
                    reinterpret_cast<BBinder*>(tr.cookie)->decStrong(this);
                } else {
                    error = UNKNOWN_TRANSACTION;
                }
            } else {
                error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
            }

            if ((tr.flags & TF_ONE_WAY) == 0) {
                LOG_ONEWAY("Sending reply to %d!", mCallingPid);
                if (error < NO_ERROR) reply.setError(error);
                sendReply(reply, 0);
            } else {
                LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
            }

            mCallingPid = origPid;
            mCallingUid = origUid;
            mStrictModePolicy = origStrictModePolicy;
            mLastTransactionBinderFlags = origTransactionBinderFlags;
        }
        break;

    case BR_DEAD_BINDER:
        {
            BpBinder *proxy = (BpBinder*)mIn.readPointer();
            proxy->sendObituary();
            mOut.writeInt32(BC_DEAD_BINDER_DONE);
            mOut.writePointer((uintptr_t)proxy);
        }
        break;
    ....

    return result;
}
```
At comment 1 we meet BBinder, the parent class of all the BnXXX classes in the system. When the binder driver delivers an incoming transaction, control arrives here, and BBinder::transact in turn calls the concrete BnXXX's onTransact to complete the inter-process call — this is the server side (a sketch follows).
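For illustration, here is a hedged sketch of what such a server side might look like, continuing the made-up IMyService from section 2.2 (BnMyService and its transaction code are hypothetical, not AOSP code):

```cpp
#include <binder/IInterface.h>
#include <binder/Parcel.h>

using namespace android;

// Assumes the IMyService declaration from the section 2.2 sketch.
// BBinder::transact() (reached at comment 1 above) calls onTransact(),
// which dispatches on the transaction code.
class BnMyService : public BnInterface<IMyService> {
public:
    status_t onTransact(uint32_t code, const Parcel& data,
                        Parcel* reply, uint32_t flags) override {
        switch (code) {
        case IBinder::FIRST_CALL_TRANSACTION:         // matches BpMyService::hello()
            CHECK_INTERFACE(IMyService, data, reply); // verify the interface token
            return hello();
        default:
            // Unknown codes fall back to BBinder's default handling.
            return BBinder::onTransact(code, data, reply, flags);
        }
    }
    status_t hello() override { return NO_ERROR; }
};
```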
2.4 ProcessState::self()->startThreadPool()
frameworks\native\libs\binder\ProcessState.cpp
```cpp
void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) { // once the pool has been started, a second call is a no-op
        mThreadPoolStarted = true;
        spawnPooledThread(true);
    }
}

void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        String8 name = makeBinderThreadName();
        ALOGV("Spawning new pooled thread, name=%s\n", name.string());
        sp<Thread> t = new PoolThread(isMain); // 1
        t->run(name.string()); // 2
    }
}
```
- Comment 1: create a PoolThread
- Comment 2: start the thread running
```cpp
class PoolThread : public Thread
{
public:
    explicit PoolThread(bool isMain)
        : mIsMain(isMain)
    {
    }

protected:
    virtual bool threadLoop()
    {
        IPCThreadState* ipc = IPCThreadState::self();
        if (ipc) ipc->joinThreadPool(mIsMain); // 1
        return false;
    }

    const bool mIsMain;
};
```
This eventually calls joinThreadPool — the same function invoked at the end of main_mediaserver.cpp's main. Let's see what it does:
```cpp
void IPCThreadState::joinThreadPool(bool isMain)
{
    LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n", (void*)pthread_self(), getpid());

    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);

    status_t result;
    do {
        processPendingDerefs();
        // now get the next command to be processed, waiting if necessary
        result = getAndExecuteCommand(); // 1

        if (result < NO_ERROR && result != TIMED_OUT && result != -ECONNREFUSED && result != -EBADF) {
            ALOGE("getAndExecuteCommand(fd=%d) returned unexpected error %d, aborting",
                  mProcess->mDriverFD, result);
            abort();
        }

        // Let this thread exit the thread pool if it is no longer
        // needed and it is not the main process thread.
        if(result == TIMED_OUT && !isMain) {
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF);

    mOut.writeInt32(BC_EXIT_LOOPER);
    talkWithDriver(false);
}
```

Through the call at comment 1, execution arrives at getAndExecuteCommand:

```cpp
status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

    result = talkWithDriver(); // 1
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t)) return result;
        cmd = mIn.readInt32();
        IF_LOG_COMMANDS() {
            alog << "Processing top-level Command: "
                 << getReturnString(cmd) << endl;
        }

        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount++;
        if (mProcess->mExecutingThreadsCount >= mProcess->mMaxThreads &&
                mProcess->mStarvationStartTimeMs == 0) {
            mProcess->mStarvationStartTimeMs = uptimeMillis();
        }
        pthread_mutex_unlock(&mProcess->mThreadCountLock);

        result = executeCommand(cmd); // 2

        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount--;
        if (mProcess->mExecutingThreadsCount < mProcess->mMaxThreads &&
                mProcess->mStarvationStartTimeMs != 0) {
            int64_t starvationTimeMs = uptimeMillis() - mProcess->mStarvationStartTimeMs;
            if (starvationTimeMs > 100) {
                ALOGE("binder thread pool (%zu threads) starved for %" PRId64 " ms",
                      mProcess->mMaxThreads, starvationTimeMs);
            }
            mProcess->mStarvationStartTimeMs = 0;
        }
        pthread_cond_broadcast(&mProcess->mThreadCountDecrement);
        pthread_mutex_unlock(&mProcess->mThreadCountLock);
    }

    return result;
}
```
Comments 1 and 2 in getAndExecuteCommand show that joinThreadPool simply keeps talking to the binder driver in a loop, looking for work to do. At this point the entire IPC flow has been analyzed — does it all make sense now?
3 Summary
Through the preceding analysis, we now have a reasonable picture of the native-layer binder architecture. First, ProcessState's initialization opens the /dev/binder device and maps it into memory via mmap. Then, as the analysis of defaultServiceManager showed, it ultimately resolves to a BpServiceManager wrapping a BpBinder(0); BpBinder's transact forwards into IPCThreadState::self()->transact, which finally talks to the binder driver directly via ioctl — and that completes the inter-process communication. The whole flow can be illustrated by the following diagram: