In this chapter we analyze how a client obtains MediaPlayerService and then calls its methods, taking the WifiDisplay code analyzed earlier as the starting point.


Getting the Native Service

This is the code we saw earlier in RemoteDisplay:
static jint nativeListen(JNIEnv* env, jobject remoteDisplayObj, jstring ifaceStr) {
    ScopedUtfChars iface(env, ifaceStr);

    sp<IServiceManager> sm = defaultServiceManager();
    sp<IMediaPlayerService> service = interface_cast<IMediaPlayerService>(
            sm->getService(String16("media.player")));
    if (service == NULL) {
        ALOGE("Could not obtain IMediaPlayerService from service manager");
        return 0;
    }

    sp<NativeRemoteDisplayClient> client(new NativeRemoteDisplayClient(env, remoteDisplayObj));
    sp<IRemoteDisplay> display = service->listenForRemoteDisplay(
            client, String8(iface.c_str()));

Here defaultServiceManager() is called first to obtain a BpServiceManager object, then its getService method is called, and the return value is used to construct a BpMediaPlayerService. We analyzed interface_cast before: it calls the asInterface() function, which expands to the following:
android::sp<IMediaPlayerService> IMediaPlayerService::asInterface(
        const android::sp<android::IBinder>& obj)
{
    android::sp<IMediaPlayerService> intr;
    if (obj != NULL) {
        intr = static_cast<IMediaPlayerService*>(
            obj->queryLocalInterface(
                IMediaPlayerService::descriptor).get());
        if (intr == NULL) {
            intr = new BpMediaPlayerService(obj);
        }
    }
    return intr;
}

The parameter obj is the return value of the sm->getService(String16("media.player")) call above, so let's look at getService first:
    virtual sp<IBinder> getService(const String16& name) const
    {
        unsigned n;
        for (n = 0; n < 5; n++){
            sp<IBinder> svc = checkService(name);
            if (svc != NULL) return svc;
            ALOGI("Waiting for service %s...\n", String8(name).string());
            sleep(1);
        }
        return NULL;
    }

    virtual sp<IBinder> checkService( const String16& name) const
    {
        Parcel data, reply;
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        data.writeString16(name);
        remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
        return reply.readStrongBinder();
    }

getService loops up to five times calling checkService to obtain MediaPlayerService, in case the service has not been registered yet at the moment the client asks for it. checkService() first writes the strict-mode policy and the interface token "android.os.IServiceManager" into the Parcel, then writes "media.player", and finally calls remote()->transact to send the data. As analyzed before, remote() here is BpBinder(0), so the call ends up in IPCThreadState's transact method, which copies the contents of the data Parcel into mOut and then calls talkWithDriver() to hand them to the binder driver. The data sent to the driver looks like this (a standalone sketch of this buffer layout follows the table):
cmd                      BC_TRANSACTION
binder_transaction_data  target (handle)   0
                         cookie            0
                         code              CHECK_SERVICE_TRANSACTION
                         flags             0
                         sender_pid        0
                         sender_euid       0
                         data_size         (size of the buffer below)
                         offsets_size      0
                         buffer            strict mode policy   0
                                           interface            "android.os.IServiceManager"
                                           name                 "media.player"
                         offsets           (empty, no binder objects in the buffer)
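To make the layout above concrete, here is a small standalone sketch of this buffer (an assumption-laden model, not the real Parcel class): a strict-mode int32 is written first, then each string is written as an int32 length, little-endian UTF-16 data, a NUL terminator and 4-byte padding, which is roughly what Parcel's writeInterfaceToken/writeString16 produce. It only prints the resulting data_size.

// Standalone model of the CHECK_SERVICE_TRANSACTION data buffer (assumptions:
// little-endian layout, Parcel-style int32 length prefix, NUL terminator and
// 4-byte padding for string16 values).
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

static void writeInt32(std::vector<uint8_t>& buf, int32_t v) {
    const uint8_t* p = reinterpret_cast<const uint8_t*>(&v);
    buf.insert(buf.end(), p, p + sizeof(v));
}

static void writeString16(std::vector<uint8_t>& buf, const std::u16string& s) {
    writeInt32(buf, static_cast<int32_t>(s.size()));   // length in char16_t units
    for (char16_t c : s) {                             // UTF-16 payload
        buf.push_back(static_cast<uint8_t>(c & 0xff));
        buf.push_back(static_cast<uint8_t>(c >> 8));
    }
    buf.push_back(0);                                  // NUL terminator (2 bytes)
    buf.push_back(0);
    while (buf.size() % 4)                             // pad to a 4-byte boundary
        buf.push_back(0);
}

int main() {
    std::vector<uint8_t> data;
    writeInt32(data, 0);                                  // strict mode policy
    writeString16(data, u"android.os.IServiceManager");   // interface token
    writeString16(data, u"media.player");                 // requested service name
    std::printf("data_size = %zu bytes, offsets_size = 0\n", data.size());
    return 0;
}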

The binder driver first calls binder_thread_write to handle the write request; inside binder_thread_write, handling BC_TRANSACTION leads to the binder_transaction function:
static void binder_transaction(struct binder_proc *proc,
                               struct binder_thread *thread,
                               struct binder_transaction_data *tr, int reply)
{
        struct binder_transaction *t;
        struct binder_work *tcomplete;
        size_t *offp, *off_end;
        struct binder_proc *target_proc;
        struct binder_thread *target_thread = NULL;
        struct binder_node *target_node = NULL;
        struct list_head *target_list;
        wait_queue_head_t *target_wait;
        struct binder_transaction *in_reply_to = NULL;
        struct binder_transaction_log_entry *e;
        uint32_t return_error;

        e = binder_transaction_log_add(&binder_transaction_log);
        e->call_type = reply ? 2 : !!(tr->flags & TF_ONE_WAY);
        e->from_proc = proc->pid;
        e->from_thread = thread->pid;
        e->target_handle = tr->target.handle;
        e->data_size = tr->data_size;
        e->offsets_size = tr->offsets_size;

        if (reply) {
        } else {
                if (tr->target.handle) {
                } else {
                        target_node = binder_context_mgr_node;
                        if (target_node == NULL) {
                        }
                }
                e->to_node = target_node->debug_id;
                target_proc = target_node->proc;
                if (target_proc == NULL) {
                }
                if (security_binder_transaction(proc->tsk, target_proc->tsk) < 0) {
                        return_error = BR_FAILED_REPLY;
                        goto err_invalid_target_handle;
                }
                if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) {
                }
        }
        if (target_thread) {
        } else {
                target_list = &target_proc->todo;
                target_wait = &target_proc->wait;
        }
        e->to_proc = target_proc->pid;

        /* TODO: reuse incoming transaction for reply */
        t = kzalloc(sizeof(*t), GFP_KERNEL);
        if (t == NULL) {
                return_error = BR_FAILED_REPLY;
                goto err_alloc_t_failed;
        }
        binder_stats_created(BINDER_STAT_TRANSACTION);

        tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
        if (tcomplete == NULL) {
                return_error = BR_FAILED_REPLY;
                goto err_alloc_tcomplete_failed;
        }
        binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);

        t->debug_id = ++binder_last_id;
        e->debug_id = t->debug_id;

        if (!reply && !(tr->flags & TF_ONE_WAY))
                t->from = thread;
        else
                t->from = NULL;
        t->sender_euid = proc->tsk->cred->euid;
        t->to_proc = target_proc;
        t->to_thread = target_thread;
        t->code = tr->code;
        t->flags = tr->flags;
        t->priority = task_nice(current);

        trace_binder_transaction(reply, t, target_node);

        t->buffer = binder_alloc_buf(target_proc, tr->data_size,
                tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
        if (t->buffer == NULL) {
                return_error = BR_FAILED_REPLY;
                goto err_binder_alloc_buf_failed;
        }
        t->buffer->allow_user_free = 0;
        t->buffer->debug_id = t->debug_id;
        t->buffer->transaction = t;
        t->buffer->target_node = target_node;
        trace_binder_transaction_alloc_buf(t->buffer);
        if (target_node)
                binder_inc_node(target_node, 1, 0, NULL);

        offp = (size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));

        if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) {
                binder_user_error("binder: %d:%d got transaction with invalid "
                        "data ptr\n", proc->pid, thread->pid);
                return_error = BR_FAILED_REPLY;
                goto err_copy_data_failed;
        }
        if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) {
                binder_user_error("binder: %d:%d got transaction with invalid "
                        "offsets ptr\n", proc->pid, thread->pid);
                return_error = BR_FAILED_REPLY;
                goto err_copy_data_failed;
        }
        if (!IS_ALIGNED(tr->offsets_size, sizeof(size_t))) {
                binder_user_error("binder: %d:%d got transaction with "
                        "invalid offsets size, %zd\n",
                        proc->pid, thread->pid, tr->offsets_size);
                return_error = BR_FAILED_REPLY;
                goto err_bad_offset;
        }
        off_end = (void *)offp + tr->offsets_size;

        if (reply) {
        } else if (!(t->flags & TF_ONE_WAY)) {
                BUG_ON(t->buffer->async_transaction != 0);
                t->need_reply = 1;
                t->from_parent = thread->transaction_stack;
                thread->transaction_stack = t;
        } else {
        }
        t->work.type = BINDER_WORK_TRANSACTION;
        list_add_tail(&t->work.entry, target_list);
        tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
        list_add_tail(&tcomplete->entry, &thread->todo);
        if (target_wait)
                wake_up_interruptible(target_wait);
        return;

Here target_node, target_proc, target_list and target_wait all refer to the process ServiceManager lives in. A new binder_transaction structure t is allocated; its to_proc and to_thread point at ServiceManager and t->code is CHECK_SERVICE_TRANSACTION. A binder_buffer is then allocated and the data referenced by the buffer pointer in tr is copied into it. t->from_parent is set to the current thread's transaction_stack and t is pushed onto that stack. Finally t is appended to the todo list of ServiceManager's thread, and a tcomplete work item is appended to the current thread's todo list; as analyzed earlier, handling tcomplete simply copies the BR_NOOP and BR_TRANSACTION_COMPLETE commands back to user space. Now let's follow how ServiceManager handles CHECK_SERVICE_TRANSACTION: binder_parse first parses the command and then calls svcmgr_handler to process it:
int svcmgr_handler(struct binder_state *bs,
                   struct binder_txn *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    unsigned len;
    void *ptr;
    uint32_t strict_policy;
    int allow_isolated;

    if (txn->target != svcmgr_handle)
        return -1;

    strict_policy = bio_get_uint32(msg);
    s = bio_get_string16(msg, &len);

    if ((len != (sizeof(svcmgr_id) / 2)) ||
        memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
        fprintf(stderr,"invalid id %s\n", str8(s));
        return -1;
    }

    switch(txn->code) {
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);
        ptr = do_find_service(bs, s, len, txn->sender_euid);
        if (!ptr)
            break;
        bio_put_ref(reply, ptr);
        return 0;

Here the RPC header is checked first (the strict-mode policy and the "android.os.IServiceManager" token), and then do_find_service is called to look up the corresponding svcinfo:
void *do_find_service(struct binder_state *bs, uint16_t *s, unsigned len, unsigned uid)
{
    struct svcinfo *si;
    si = find_svc(s, len);

    if (si && si->ptr) {
        if (!si->allow_isolated) {
            unsigned appid = uid % AID_USER;
            if (appid >= AID_ISOLATED_START && appid <= AID_ISOLATED_END) {
                return 0;
            }
        }
        return si->ptr;
    } else {
        return 0;
    }
}

find_svc looks up the svcinfo for "media.player" and its ptr field is returned. From the chapter on registering a service we know that this ptr is in fact the handle id recorded when the service was registered (a standalone model of find_svc follows the bio_put_ref code below). bio_put_ref(reply, ptr) is then called to write the obtained ptr into the reply:
void bio_put_ref(struct binder_io *bio, void *ptr)
{
    struct binder_object *obj;

    if (ptr)
        obj = bio_alloc_obj(bio);
    else
        obj = bio_alloc(bio, sizeof(*obj));

    if (!obj)
        return;

    obj->flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    obj->type = BINDER_TYPE_HANDLE;
    obj->pointer = ptr;
    obj->cookie = 0;
}
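find_svc itself is not shown in the excerpt above. As a rough standalone model (not servicemanager's actual code) of what it does: servicemanager keeps a list of svcinfo entries, and find_svc walks it comparing the requested name; the ptr field of the matching entry is the handle id saved when the service was registered.

// Standalone model of servicemanager's service lookup (assumption: a simple
// list walk comparing the UTF-16 name, as service_manager.c does).
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

// One registered service entry; ptr holds the handle id recorded when the
// service was added with SVC_MGR_ADD_SERVICE.
struct svcinfo {
    std::u16string name;
    uint32_t ptr;
    bool allow_isolated;
};

static std::vector<svcinfo> svclist;   // stand-in for the global linked list

static const svcinfo* find_svc(const std::u16string& name) {
    for (const svcinfo& si : svclist)
        if (si.name == name)           // the real code compares len + memcmp
            return &si;
    return nullptr;
}

int main() {
    svclist.push_back({u"media.player", 7 /* example handle id */, false});
    const svcinfo* si = find_svc(u"media.player");
    std::printf("found=%d handle=%u\n", si != nullptr, si ? si->ptr : 0u);
    return 0;
}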

Back in binder_parse, binder_send_reply is called to return the result to the binder driver; it sends the BC_FREE_BUFFER and BC_REPLY commands. We have already analyzed both; the difference this time is that when BC_REPLY is processed, t->buffer->data contains the binder_object filled in above:
static void binder_transaction(struct binder_proc *proc,
                               struct binder_thread *thread,
                               struct binder_transaction_data *tr, int reply)
{
        struct binder_transaction *t;
        struct binder_work *tcomplete;
        size_t *offp, *off_end;
        struct binder_proc *target_proc;
        struct binder_thread *target_thread = NULL;
        struct binder_node *target_node = NULL;
        struct list_head *target_list;
        wait_queue_head_t *target_wait;
        struct binder_transaction *in_reply_to = NULL;
        struct binder_transaction_log_entry *e;
        uint32_t return_error;

        e = binder_transaction_log_add(&binder_transaction_log);
        e->call_type = reply ? 2 : !!(tr->flags & TF_ONE_WAY);
        e->from_proc = proc->pid;
        e->from_thread = thread->pid;
        e->target_handle = tr->target.handle;
        e->data_size = tr->data_size;
        e->offsets_size = tr->offsets_size;

        if (reply) {
                in_reply_to = thread->transaction_stack;
                if (in_reply_to == NULL) {
                }
                binder_set_nice(in_reply_to->saved_priority);
                if (in_reply_to->to_thread != thread) {
                }
                thread->transaction_stack = in_reply_to->to_parent;
                target_thread = in_reply_to->from;
                if (target_thread == NULL) {
                        return_error = BR_DEAD_REPLY;
                        goto err_dead_binder;
                }
                if (target_thread->transaction_stack != in_reply_to) {
                }
                target_proc = target_thread->proc;
        } else {
        }
        if (target_thread) {
                e->to_thread = target_thread->pid;
                target_list = &target_thread->todo;
                target_wait = &target_thread->wait;
        } else {
        }
        e->to_proc = target_proc->pid;

        /* TODO: reuse incoming transaction for reply */
        t = kzalloc(sizeof(*t), GFP_KERNEL);
        if (t == NULL) {
                return_error = BR_FAILED_REPLY;
                goto err_alloc_t_failed;
        }
        binder_stats_created(BINDER_STAT_TRANSACTION);

        tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
        if (tcomplete == NULL) {
                return_error = BR_FAILED_REPLY;
                goto err_alloc_tcomplete_failed;
        }
        binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);

        t->debug_id = ++binder_last_id;
        e->debug_id = t->debug_id;

        if (!reply && !(tr->flags & TF_ONE_WAY))
                t->from = thread;
        else
                t->from = NULL;
        t->sender_euid = proc->tsk->cred->euid;
        t->to_proc = target_proc;
        t->to_thread = target_thread;
        t->code = tr->code;
        t->flags = tr->flags;
        t->priority = task_nice(current);

        trace_binder_transaction(reply, t, target_node);

        t->buffer = binder_alloc_buf(target_proc, tr->data_size,
                tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
        if (t->buffer == NULL) {
                return_error = BR_FAILED_REPLY;
                goto err_binder_alloc_buf_failed;
        }
        t->buffer->allow_user_free = 0;
        t->buffer->debug_id = t->debug_id;
        t->buffer->transaction = t;
        t->buffer->target_node = target_node;
        trace_binder_transaction_alloc_buf(t->buffer);
        if (target_node)
                binder_inc_node(target_node, 1, 0, NULL);

        offp = (size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));

        if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) {
                binder_user_error("binder: %d:%d got transaction with invalid "
                        "data ptr\n", proc->pid, thread->pid);
                return_error = BR_FAILED_REPLY;
                goto err_copy_data_failed;
        }
        if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) {
                binder_user_error("binder: %d:%d got transaction with invalid "
                        "offsets ptr\n", proc->pid, thread->pid);
                return_error = BR_FAILED_REPLY;
                goto err_copy_data_failed;
        }
        off_end = (void *)offp + tr->offsets_size;
        for (; offp < off_end; offp++) {
                struct flat_binder_object *fp;
                if (*offp > t->buffer->data_size - sizeof(*fp) ||
                    t->buffer->data_size < sizeof(*fp) ||
                    !IS_ALIGNED(*offp, sizeof(void *))) {
                }
                fp = (struct flat_binder_object *)(t->buffer->data + *offp);
                switch (fp->type) {
                case BINDER_TYPE_HANDLE:
                case BINDER_TYPE_WEAK_HANDLE: {
                        struct binder_ref *ref = binder_get_ref(proc, fp->handle);
                        if (ref == NULL) {
                                binder_user_error("binder: %d:%d got "
                                        "transaction with invalid "
                                        "handle, %ld\n", proc->pid,
                                        thread->pid, fp->handle);
                                return_error = BR_FAILED_REPLY;
                                goto err_binder_get_ref_failed;
                        }
                        if (security_binder_transfer_binder(proc->tsk, target_proc->tsk)) {
                                return_error = BR_FAILED_REPLY;
                                goto err_binder_get_ref_failed;
                        }
                        if (ref->node->proc == target_proc) {
                                if (fp->type == BINDER_TYPE_HANDLE)
                                        fp->type = BINDER_TYPE_BINDER;
                                else
                                        fp->type = BINDER_TYPE_WEAK_BINDER;
                                fp->binder = ref->node->ptr;
                                fp->cookie = ref->node->cookie;
                                binder_inc_node(ref->node, fp->type == BINDER_TYPE_BINDER, 0, NULL);
                                trace_binder_transaction_ref_to_node(t, ref);
                                binder_debug(BINDER_DEBUG_TRANSACTION,
                                             "        ref %d desc %d -> node %d u%p\n",
                                             ref->debug_id, ref->desc, ref->node->debug_id,
                                             ref->node->ptr);
                        } else {
                                struct binder_ref *new_ref;
                                new_ref = binder_get_ref_for_node(target_proc, ref->node);
                                if (new_ref == NULL) {
                                        return_error = BR_FAILED_REPLY;
                                        goto err_binder_get_ref_for_node_failed;
                                }
                                fp->handle = new_ref->desc;
                                binder_inc_ref(new_ref, fp->type == BINDER_TYPE_HANDLE, NULL);
                                trace_binder_transaction_ref_to_ref(t, ref,
                                                                    new_ref);
                                binder_debug(BINDER_DEBUG_TRANSACTION,
                                             "        ref %d desc %d -> ref %d desc %d (node %d)\n",
                                             ref->debug_id, ref->desc, new_ref->debug_id,
                                             new_ref->desc, ref->node->debug_id);
                        }
                } break;
                }
        }
        if (reply) {
                BUG_ON(t->buffer->async_transaction != 0);
                binder_pop_transaction(target_thread, in_reply_to);
        } else if (!(t->flags & TF_ONE_WAY)) {
        } else {
        }
        t->work.type = BINDER_WORK_TRANSACTION;
        list_add_tail(&t->work.entry, target_list);
        tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
        list_add_tail(&tcomplete->entry, &thread->todo);
        if (target_wait)
                wake_up_interruptible(target_wait);
        return;

What differs from the earlier BC_REPLY handling is that offsets_size in binder_transaction_data is no longer 0, so the flat_binder_object structures inside the buffer have to be processed; the flat_binder_object here is exactly the binder_object written above. binder_get_ref first finds the binder_ref registered for this handle id, and its node field leads to the binder_node, which holds the MediaPlayerService object we registered earlier. The code then checks ref->node->proc == target_proc, i.e. whether the process that registered the service and the process now requesting MediaPlayerService are the same. If they are, fp->type is rewritten to BINDER_TYPE_BINDER, fp->binder is set to ref->node->ptr (the weak-reference pointer recorded at registration) and fp->cookie to ref->node->cookie (the BBinder itself). If they are different processes (the common case), binder_get_ref_for_node first allocates a new binder_ref for the requesting process; its desc may differ from the desc assigned at registration (descs grow independently per process), and fp->handle is set to this new desc. Execution then returns to user space: in waitForResponse, Parcel's ipcSetDataReference builds the corresponding data structures, making the Parcel's mData point at the binder_buffer above. Back in BpServiceManager's checkService method, Parcel's readStrongBinder is called and returns a new BpBinder.
sp<IBinder> Parcel::readStrongBinder() const
{
    sp<IBinder> val;
    unflatten_binder(ProcessState::self(), *this, &val);
    return val;
}

status_t unflatten_binder(const sp<ProcessState>& proc,
    const Parcel& in, sp<IBinder>* out)
{
    const flat_binder_object* flat = in.readObject(false);

    if (flat) {
        switch (flat->type) {
            case BINDER_TYPE_BINDER:
                *out = static_cast<IBinder*>(flat->cookie);
                return finish_unflatten_binder(NULL, *flat, in);
            case BINDER_TYPE_HANDLE:
                *out = proc->getStrongProxyForHandle(flat->handle);
                return finish_unflatten_binder(
                    static_cast<BpBinder*>(out->get()), *flat, in);
        }
    }
    return BAD_TYPE;
}

When the two sides are in different processes, type is BINDER_TYPE_HANDLE; when they are in the same process, type is BINDER_TYPE_BINDER. Our case is a cross-process call, so let's look at ProcessState's getStrongProxyForHandle method first:
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {
                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0);
                if (status == DEAD_OBJECT)
                   return NULL;
            }
            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }
    return result;
}

Here the handle id is used to index the mHandleToObject array. If some MediaPlayerService method has been called before, a handle_entry already exists; if this is the first call, a new handle_entry is created with its binder field set to NULL and returned (a simplified model of this handle cache appears at the end of this section). Since the handle id here is certainly greater than 0, a BpBinder(handle) object is constructed and returned. Back to the IMediaPlayerService asInterface method we started from:
android::sp<IMediaPlayerService> IMediaPlayerService::asInterface(
        const android::sp<android::IBinder>& obj)
{
    android::sp<IMediaPlayerService> intr;
    if (obj != NULL) {
        intr = static_cast<IMediaPlayerService*>(
            obj->queryLocalInterface(
                IMediaPlayerService::descriptor).get());
        if (intr == NULL) {
            intr = new BpMediaPlayerService(obj);
        }
    }
    return intr;
}

Here obj is BpBinder(handle), so its queryLocalInterface method obviously returns NULL, and a BpMediaPlayerService(BpBinder(handle)) is constructed and returned. The service variable in nativeListen is therefore BpMediaPlayerService(BpBinder(handle)).
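lookupHandleLocked was not shown above, so here is a simplified standalone model (not the real ProcessState code) of the per-process proxy cache it implements: mHandleToObject grows on demand, and once a proxy has been created for a handle, the same object is returned for every later lookup. The weak-reference bookkeeping of the real code is omitted.

// Simplified standalone model of ProcessState's proxy cache.
#include <cstdio>
#include <vector>

struct FakeProxy { int handle; };                // stand-in for BpBinder

struct handle_entry { FakeProxy* binder = nullptr; };

static std::vector<handle_entry> mHandleToObject;

static handle_entry* lookupHandleLocked(int handle) {
    if (handle >= (int)mHandleToObject.size())
        mHandleToObject.resize(handle + 1);      // insert empty entries up to handle
    return &mHandleToObject[handle];
}

static FakeProxy* getStrongProxyForHandle(int handle) {
    handle_entry* e = lookupHandleLocked(handle);
    if (e->binder == nullptr)
        e->binder = new FakeProxy{handle};       // first use: create the proxy
    return e->binder;                            // later uses: return the cached one
}

int main() {
    FakeProxy* a = getStrongProxyForHandle(7);
    FakeProxy* b = getStrongProxyForHandle(7);
    std::printf("same proxy for handle 7: %d\n", a == b);
    return 0;
}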

Calling Methods on the Native Service

When service->listenForRemoteDisplay is called above, it is effectively BpMediaPlayerService(BpBinder(handle))->listenForRemoteDisplay(). Let's look at BpMediaPlayerService's listenForRemoteDisplay method first:
    virtual sp<IRemoteDisplay> listenForRemoteDisplay(const sp<IRemoteDisplayClient>& client,
            const String8& iface)
    {
        Parcel data, reply;
        data.writeInterfaceToken(IMediaPlayerService::getInterfaceDescriptor());
        data.writeStrongBinder(client->asBinder());
        data.writeString8(iface);
        remote()->transact(LISTEN_FOR_REMOTE_DISPLAY, data, &reply);
        return interface_cast<IRemoteDisplay>(reply.readStrongBinder());
    }

Here the strict-mode policy and the string "android.media.IMediaPlayerService" are written into the Parcel first, followed by the two parameters. Calling remote()->transact is really calling BpBinder(handle)->transact:
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}

Here mHandle is the handle id obtained above (greater than 0) and code is LISTEN_FOR_REMOTE_DISPLAY. IPCThreadState's transact method is called next; we won't trace that flow again because it is essentially the same as registering AudioFlinger earlier, and it eventually reaches binder_transaction:
static void binder_transaction(struct binder_proc *proc,
                               struct binder_thread *thread,
                               struct binder_transaction_data *tr, int reply)
{
        struct binder_transaction *t;
        struct binder_work *tcomplete;
        size_t *offp, *off_end;
        struct binder_proc *target_proc;
        struct binder_thread *target_thread = NULL;
        struct binder_node *target_node = NULL;
        struct list_head *target_list;
        wait_queue_head_t *target_wait;
        struct binder_transaction *in_reply_to = NULL;
        struct binder_transaction_log_entry *e;
        uint32_t return_error;

        e = binder_transaction_log_add(&binder_transaction_log);
        e->call_type = reply ? 2 : !!(tr->flags & TF_ONE_WAY);
        e->from_proc = proc->pid;
        e->from_thread = thread->pid;
        e->target_handle = tr->target.handle;
        e->data_size = tr->data_size;
        e->offsets_size = tr->offsets_size;

        if (reply) {
        } else {
                if (tr->target.handle) {
                        struct binder_ref *ref;
                        ref = binder_get_ref(proc, tr->target.handle);
                        if (ref == NULL) {
                                binder_user_error("binder: %d:%d got "
                                        "transaction to invalid handle\n",
                                        proc->pid, thread->pid);
                                return_error = BR_FAILED_REPLY;
                                goto err_invalid_target_handle;
                        }
                        target_node = ref->node;
                } else {
                }
                e->to_node = target_node->debug_id;
                target_proc = target_node->proc;
                if (target_proc == NULL) {
                }
                if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) {
                }
        }
        if (target_thread) {
        } else {
                target_list = &target_proc->todo;
                target_wait = &target_proc->wait;
        }

The code is largely the same as the service-registration flow; the difference is tr->target.handle: when registering a service the handle is 0, whereas when calling MediaPlayerService it is greater than 0. So the code above first calls binder_get_ref to find the binder_ref matching the handle id and then sets target_node = ref->node. The rest is the same as registration: a binder_node and a binder_ref are created for NativeRemoteDisplayClient, fp->type is changed to BINDER_TYPE_HANDLE and fp->handle to ref->desc (a standalone model of this translation follows the table below). Finally a binder_transaction is inserted into MediaPlayerService's todo list; when the thread in the MediaPlayerService process reads the pending work through binder_thread_read, the data looks like this:
cmd                      BR_TRANSACTION
binder_transaction_data  target (ptr)      MediaPlayerService node's ptr (weak reference)
                         cookie            MediaPlayerService BBinder (local object)
                         code              LISTEN_FOR_REMOTE_DISPLAY
                         flags             0
                         sender_pid        pid of the thread calling listenForRemoteDisplay
                         sender_euid       euid of the calling process
                         data_size         (size of the buffer below)
                         offsets_size      sizeof(size_t) (one binder object)
                         buffer            strict mode policy   0
                                           interface            "android.media.IMediaPlayerService"
                                           flat_binder_object   type     BINDER_TYPE_HANDLE
                                                                flags    0
                                                                handle   ref->desc
                                                                cookie   local (sender-side address)
                                           iface                "ip:port"
                         offsets           offset of the flat_binder_object in buffer
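Here is a small standalone model (a simplification, not the real driver code) of the rewrite described above: when a local binder object crosses to another process, the driver allocates (or reuses) a binder_ref in the receiving process and replaces the flat_binder_object's contents with a handle carrying that ref's desc. The node bookkeeping on the sending side is collapsed into a plain address.

// Standalone model of the BINDER_TYPE_BINDER -> BINDER_TYPE_HANDLE rewrite.
#include <cstdint>
#include <cstdio>
#include <map>

enum ObjType { TYPE_BINDER, TYPE_HANDLE };

struct FlatObj {
    ObjType   type;
    uintptr_t value;    // BBinder address for TYPE_BINDER, desc for TYPE_HANDLE
};

// Per-process table of binder_refs: node address -> desc, with descs handed out
// in increasing order (desc 0 is reserved for servicemanager).
struct Proc {
    std::map<uintptr_t, uint32_t> refByNode;
    uint32_t nextDesc = 1;
};

// Mimics binder_get_ref_for_node(): reuse an existing ref or allocate a new one.
static uint32_t getRefForNode(Proc& target, uintptr_t node) {
    auto it = target.refByNode.find(node);
    if (it != target.refByNode.end())
        return it->second;
    uint32_t desc = target.nextDesc++;
    target.refByNode[node] = desc;
    return desc;
}

// Mimics the BINDER_TYPE_BINDER branch of binder_transaction(): a local object
// leaving its owning process becomes a handle valid in the target process.
static void translateToHandle(FlatObj& fp, Proc& target) {
    if (fp.type == TYPE_BINDER) {
        fp.type = TYPE_HANDLE;
        fp.value = getRefForNode(target, fp.value);
    }
}

int main() {
    Proc mediaserver;                                   // receiving process
    FlatObj client1 = { TYPE_BINDER, 0x5000 };          // NativeRemoteDisplayClient
    FlatObj client2 = { TYPE_BINDER, 0x5000 };          // same object sent again
    FlatObj other   = { TYPE_BINDER, 0x6000 };          // a different object

    translateToHandle(client1, mediaserver);
    translateToHandle(client2, mediaserver);
    translateToHandle(other,   mediaserver);

    std::printf("client1 desc=%u client2 desc=%u other desc=%u\n",
                (unsigned)client1.value, (unsigned)client2.value, (unsigned)other.value);
    return 0;
}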

IPCThreadState reads the data above through talkWithDriver and handles the BR_TRANSACTION command in executeCommand:
    case BR_TRANSACTION:
        {
            binder_transaction_data tr;
            result = mIn.read(&tr, sizeof(tr));

            Parcel buffer;
            buffer.ipcSetDataReference(
                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                tr.data_size,
                reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                tr.offsets_size/sizeof(size_t), freeBuffer, this);

            const pid_t origPid = mCallingPid;
            const uid_t origUid = mCallingUid;

            mCallingPid = tr.sender_pid;
            mCallingUid = tr.sender_euid;

            int curPrio = getpriority(PRIO_PROCESS, mMyThreadId);
            if (gDisableBackgroundScheduling) {
                if (curPrio > ANDROID_PRIORITY_NORMAL) {
                    setpriority(PRIO_PROCESS, mMyThreadId, ANDROID_PRIORITY_NORMAL);
                }
            } else {
                if (curPrio >= ANDROID_PRIORITY_BACKGROUND) {
                    set_sched_policy(mMyThreadId, SP_BACKGROUND);
                }
            }

            Parcel reply;
            if (tr.target.ptr) {
                sp<BBinder> b((BBinder*)tr.cookie);
                const status_t error = b->transact(tr.code, buffer, &reply, tr.flags);
                if (error < NO_ERROR) reply.setError(error);
            } else {
                const status_t error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
                if (error < NO_ERROR) reply.setError(error);
            }

            //ALOGI("<<<< TRANSACT from pid %d restore pid %d uid %d\n",
            //     mCallingPid, origPid, origUid);

            if ((tr.flags & TF_ONE_WAY) == 0) {
                LOG_ONEWAY("Sending reply to %d!", mCallingPid);
                sendReply(reply, 0);
            } else {
                LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
            }

            mCallingPid = origPid;
            mCallingUid = origUid;
        }
        break;

Here a Parcel is first built from the binder_transaction_data's buffer. mCallingPid and mCallingUid are saved and then set to the incoming sender_pid and sender_euid. The MediaPlayerService object stored in tr.cookie is then retrieved and its transact method is called; transact is implemented in BBinder:
status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);

    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            err = onTransact(code, data, reply, flags);
            break;
    }

    if (reply != NULL) {
        reply->setDataPosition(0);
    }

    return err;
}

onTransact is implemented in BnMediaPlayerService:
status_t BnMediaPlayerService::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch (code) {
        case LISTEN_FOR_REMOTE_DISPLAY: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            sp<IRemoteDisplayClient> client(
                    interface_cast<IRemoteDisplayClient>(data.readStrongBinder()));
            String8 iface(data.readString8());
            sp<IRemoteDisplay> display(listenForRemoteDisplay(client, iface));
            reply->writeStrongBinder(display->asBinder());
            return NO_ERROR;
        } break;

The RemoteDisplayClient handle id recorded in data is read out and used to construct a BpRemoteDisplayClient object, and the iface value is read as well. MediaPlayerService's listenForRemoteDisplay is then called:
sp<IRemoteDisplay> MediaPlayerService::listenForRemoteDisplay(
        const sp<IRemoteDisplayClient>& client, const String8& iface) {
    if (!checkPermission("android.permission.CONTROL_WIFI_DISPLAY")) {
        return NULL;
    }

    return new RemoteDisplay(client, iface.string());
}

The RemoteDisplay object created above is written into the reply Parcel. Back in executeCommand, sendReply is called to return the RemoteDisplay object to the calling process:
status_t IPCThreadState::sendReply(const Parcel& reply, uint32_t flags)
{
    status_t err;
    status_t statusBuffer;
    err = writeTransactionData(BC_REPLY, flags, -1, 0, reply, &statusBuffer);
    if (err < NO_ERROR) return err;

    return waitForResponse(NULL, NULL);
}

Here writeTransactionData writes the contents of reply into mOut, and waitForResponse sends the data to the binder driver. Both arguments are NULL, meaning there is no need to wait for BR_REPLY; the method returns as soon as BR_TRANSACTION_COMPLETE is received. When the binder driver receives this data the flow is similar to registering a service: a binder_transaction structure is created with target_thread and target_proc set to the thread that called listenForRemoteDisplay, a binder_buffer is created and the RemoteDisplay object is copied into it, fp->type is changed to BINDER_TYPE_HANDLE and fp->handle to ref->desc, and the binder_transaction is placed on the todo list of the thread that called listenForRemoteDisplay. When that thread receives BR_REPLY in waitForResponse, it builds a reply Parcel from the buffer in binder_transaction_data, and interface_cast<IRemoteDisplay>(reply.readStrongBinder()) returns a BpRemoteDisplay(BpBinder(handle id)) object. RemoteDisplay is itself a binder interface, so calling its methods later works just like calling MediaPlayerService's methods: the call has to go through the binder driver to cross the process boundary.
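To tie the chapter together, the sketch below restates the client-side pattern traced above in one place. It is only an illustration and assumes it is built inside the Android source tree against libbinder and libmedia; it adds nothing new to the framework.

// Client-side sketch of the flow in this chapter (assumes the AOSP headers of
// this Android version are available in the build).
#include <binder/IServiceManager.h>
#include <media/IMediaPlayerService.h>
#include <media/IRemoteDisplay.h>
#include <media/IRemoteDisplayClient.h>
#include <utils/String16.h>
#include <utils/String8.h>

using namespace android;

sp<IRemoteDisplay> startListening(const sp<IRemoteDisplayClient>& client,
                                  const String8& iface) {
    // getService returns a BpBinder(handle); interface_cast wraps it in
    // BpMediaPlayerService, exactly as traced in the first half of the chapter.
    sp<IMediaPlayerService> service = interface_cast<IMediaPlayerService>(
            defaultServiceManager()->getService(String16("media.player")));
    if (service == NULL) {
        return NULL;
    }

    // This call marshals client and iface into a Parcel, goes through
    // BpBinder::transact and the binder driver, runs in the MediaPlayerService
    // process, and the reply is unflattened into a BpRemoteDisplay proxy.
    sp<IRemoteDisplay> display = service->listenForRemoteDisplay(client, iface);

    // Every later call on display is another binder transaction that targets
    // the RemoteDisplay object living in the MediaPlayerService process.
    return display;
}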
