1. What kinds of work can developers do on Android?
Application development: use the powerful SDK that Android provides to build all kinds of novel applications.
System development: Google implemented all of the hardware-independent code in Android, but the hardware abstraction layer, which is tightly coupled to the hardware, is not (and cannot be) provided by Google. The underlying hardware varies enormously across device vendors, so unified hardware drivers and interface implementations are impossible; only standard interfaces can be defined. Hardware vendors therefore have to write their own device drivers and implement the interfaces that the Android framework defines.
2. Source code analysis of the Camera system in the Android framework
Every Android phone ships with a Camera application that implements the photo-taking function. Hardware vendors may modify this application to match their own UI style; here we only analyze the stock Android Camera application and framework (Android 4.0).
The stock Camera application code lives in Camera.java (android4.0\packages\apps\camera\src\com\android\camera); this is the topmost layer of the Camera system, the application-layer implementation.
Below is part of the Camera class:

public class Camera extends ActivityBase implements FocusManager.Listener,
        View.OnTouchListener, ShutterButton.OnShutterButtonListener,
        SurfaceHolder.Callback, ModePicker.OnModeChangeListener,
        FaceDetectionListener, CameraPreference.OnPreferenceChangedListener,
        LocationManager.Listener, ShutterButton.OnShutterButtonLongPressListener
As the declaration shows, Camera implements many listener interfaces to react to various events (focus events, touch events, and so on). The application inherits from ActivityBase, so it can override hooks such as onCreate and onResume and perform its initialization there, essentially creating the various listener objects and fetching camera parameters.
The key part is the doOnResume function:

@Override
protected void doOnResume() {
    if (mOpenCameraFail || mCameraDisabled) return;

    mPausing = false;
    mJpegPictureCallbackTime = 0;
    mZoomValue = 0;

    // Start the preview if it is not started.
    if (mCameraState == PREVIEW_STOPPED) {
        try {
            mCameraDevice = Util.openCamera(this, mCameraId);
            initializeCapabilities();
            resetExposureCompensation();
            startPreview();
            if (mFirstTimeInitialized) startFaceDetection();
        } catch (CameraHardwareException e) {
            Util.showErrorAndFinish(this, R.string.cannot_connect_camera);
            return;
        } catch (CameraDisabledException e) {
            Util.showErrorAndFinish(this, R.string.camera_disabled);
            return;
        }
    }

    if (mSurfaceHolder != null) {
        // If first time initialization is not finished, put it in the
        // message queue.
        if (!mFirstTimeInitialized) {
            mHandler.sendEmptyMessage(FIRST_TIME_INIT);
        } else {
            initializeSecondTime();
        }
    }
    keepScreenOnAwhile();

    if (mCameraState == IDLE) {
        mOnResumeTime = SystemClock.uptimeMillis();
        mHandler.sendEmptyMessageDelayed(CHECK_DISPLAY_ROTATION, 100);
    }
}
In this function the underlying camera object is obtained via
mCameraDevice = Util.openCamera(this, mCameraId). The Util class is implemented in
Util.java (android4.0\packages\apps\camera\src\com\android\camera); locate the openCamera function there:

public static android.hardware.Camera openCamera(Activity activity, int cameraId)
        throws CameraHardwareException, CameraDisabledException {
    // Check if device policy has disabled the camera.
    DevicePolicyManager dpm = (DevicePolicyManager) activity.getSystemService(
            Context.DEVICE_POLICY_SERVICE);
    if (dpm.getCameraDisabled(null)) {
        throw new CameraDisabledException();
    }

    try {
        return CameraHolder.instance().open(cameraId);
    } catch (CameraHardwareException e) {
        // In eng build, we throw the exception so that test tool
        // can detect it and report it
        if ("eng".equals(Build.TYPE)) {
            throw new RuntimeException("openCamera failed", e);
        } else {
            throw e;
        }
    }
}
This function shows that the lower-level camera is managed through a singleton, CameraHolder, implemented in CameraHolder.java (android4.0\packages\apps\camera\src\com\android\camera). Calling its open function returns a camera hardware object. Because the camera is an exclusive device that cannot be held by two processes at once, while Android is a multi-process environment, some form of mutual exclusion between users is required.
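The exclusive, reference-counted holder idea described above can be sketched in plain Java (hypothetical names; the real CameraHolder wraps the actual hardware object and defers the release):

```java
// Minimal sketch of an exclusive-device holder: one shared instance,
// a user count, and synchronized open/release so only one client
// holds the device at a time.
class DeviceHolder {
    private static DeviceHolder sHolder;  // the singleton instance
    private int mUsers = 0;               // how many clients currently hold the device
    private Object mDevice;               // stands in for the hardware object

    private DeviceHolder() {}

    public static synchronized DeviceHolder instance() {
        if (sHolder == null) sHolder = new DeviceHolder();
        return sHolder;
    }

    public synchronized Object open() {
        if (mUsers != 0) throw new IllegalStateException("device already in use");
        if (mDevice == null) mDevice = new Object(); // "connect" to the device
        ++mUsers;
        return mDevice;
    }

    public synchronized void release() {
        --mUsers;  // the real holder delays the actual hardware release
    }
}
```

As with CameraHolder.open, a second open without an intervening release fails, which is how the framework enforces exclusive access.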
Locate the open function of this class:

public synchronized android.hardware.Camera open(int cameraId)
        throws CameraHardwareException {
    Assert(mUsers == 0);
    if (mCameraDevice != null && mCameraId != cameraId) {
        mCameraDevice.release();
        mCameraDevice = null;
        mCameraId = -1;
    }
    if (mCameraDevice == null) {
        try {
            Log.v(TAG, "open camera " + cameraId);
            mCameraDevice = android.hardware.Camera.open(cameraId);
            mCameraId = cameraId;
        } catch (RuntimeException e) {
            Log.e(TAG, "fail to connect Camera", e);
            throw new CameraHardwareException(e);
        }
        mParameters = mCameraDevice.getParameters();
    } else {
        try {
            mCameraDevice.reconnect();
        } catch (IOException e) {
            Log.e(TAG, "reconnect failed.");
            throw new CameraHardwareException(e);
        }
        mCameraDevice.setParameters(mParameters);
    }
    ++mUsers;
    mHandler.removeMessages(RELEASE_CAMERA);
    mKeepBeforeTime = 0;
    return mCameraDevice;
}
The call android.hardware.Camera.open(cameraId) enters the next layer of wrapping, the JNI layer. This is the lowest layer of Java code; it wraps the C++ Camera code below via JNI. The wrapper class is Camera.java (android4.0\frameworks\base\core\java\android\hardware). Part of that class follows; it defines quite a few callbacks:
public class Camera {
    private static final String TAG = "Camera";

    // These match the enums in frameworks/base/include/camera/Camera.h
    private static final int CAMERA_MSG_ERROR            = 0x001;
    private static final int CAMERA_MSG_SHUTTER          = 0x002;
    private static final int CAMERA_MSG_FOCUS            = 0x004;
    private static final int CAMERA_MSG_ZOOM             = 0x008;
    private static final int CAMERA_MSG_PREVIEW_FRAME    = 0x010;
    private static final int CAMERA_MSG_VIDEO_FRAME      = 0x020;
    private static final int CAMERA_MSG_POSTVIEW_FRAME   = 0x040;
    private static final int CAMERA_MSG_RAW_IMAGE        = 0x080;
    private static final int CAMERA_MSG_COMPRESSED_IMAGE = 0x100;
    private static final int CAMERA_MSG_RAW_IMAGE_NOTIFY = 0x200;
    private static final int CAMERA_MSG_PREVIEW_METADATA = 0x400;
    private static final int CAMERA_MSG_ALL_MSGS         = 0x4FF;

    private int mNativeContext; // accessed by native methods
    private EventHandler mEventHandler;
    private ShutterCallback mShutterCallback;
    private PictureCallback mRawImageCallback;
    private PictureCallback mJpegCallback;
    private PreviewCallback mPreviewCallback;
    private PictureCallback mPostviewCallback;
    private AutoFocusCallback mAutoFocusCallback;
    private OnZoomChangeListener mZoomListener;
    private FaceDetectionListener mFaceListener;
    private ErrorCallback mErrorCallback;
Locate the open function:
public static Camera open(int cameraId) {
    return new Camera(cameraId);
}
open is a static method that constructs a Camera object:

Camera(int cameraId) {
    mShutterCallback = null;
    mRawImageCallback = null;
    mJpegCallback = null;
    mPreviewCallback = null;
    mPostviewCallback = null;
    mZoomListener = null;

    Looper looper;
    if ((looper = Looper.myLooper()) != null) {
        mEventHandler = new EventHandler(this, looper);
    } else if ((looper = Looper.getMainLooper()) != null) {
        mEventHandler = new EventHandler(this, looper);
    } else {
        mEventHandler = null;
    }

    native_setup(new WeakReference<Camera>(this), cameraId);
}

The constructor calls the native_setup method, which corresponds to the C++ function android_hardware_Camera_native_setup, implemented in android_hardware_Camera.cpp (android4.0\frameworks\base\core\jni):

static void android_hardware_Camera_native_setup(JNIEnv *env, jobject thiz,
    jobject weak_this, jint cameraId)
{
    sp<Camera> camera = Camera::connect(cameraId);

    if (camera == NULL) {
        jniThrowRuntimeException(env, "Fail to connect to camera service");
        return;
    }

    // make sure camera hardware is alive
    if (camera->getStatus() != NO_ERROR) {
        jniThrowRuntimeException(env, "Camera initialization failed");
        return;
    }

    jclass clazz = env->GetObjectClass(thiz);
    if (clazz == NULL) {
        jniThrowRuntimeException(env, "Can't find android/hardware/Camera");
        return;
    }

    // We use a weak reference so the Camera object can be garbage collected.
    // The reference is only used as a proxy for callbacks.
    sp<JNICameraContext> context = new JNICameraContext(env, weak_this, clazz, camera);
    context->incStrong(thiz);
    camera->setListener(context);

    // save context in opaque field
    env->SetIntField(thiz, fields.context, (int)context.get());
}
android_hardware_Camera_native_setup calls the connect method of the C++ Camera class, declared in Camera.h (android4.0\frameworks\base\include\camera). Locate the connect method:
sp<Camera> Camera::connect(int cameraId)
{
    LOGV("connect");
    sp<Camera> c = new Camera();
    const sp<ICameraService>& cs = getCameraService();
    if (cs != 0) {
        c->mCamera = cs->connect(c, cameraId);
    }
    if (c->mCamera != 0) {
        c->mCamera->asBinder()->linkToDeath(c);
        c->mStatus = NO_ERROR;
    } else {
        c.clear();
    }
    return c;
}
The code from here on is the crucial part, as it involves the mechanism behind the Camera framework. The Camera system uses a client-server architecture: the server and client live in different processes and communicate through the Binder mechanism. The server actually implements the camera operations, while the client invokes them through Binder interfaces.
Continuing with the code: the function above calls getCameraService to obtain a reference to the CameraService. ICameraService has two subclasses, BnCameraService and BpCameraService, which also inherit the IBinder interface and implement the two ends of the Binder channel: BnXXX implements the actual ICameraService functionality, while BpXXX wraps the ICameraService methods over Binder communication. Concretely:

class ICameraService : public IInterface
{
public:
    enum {
        GET_NUMBER_OF_CAMERAS = IBinder::FIRST_CALL_TRANSACTION,
        GET_CAMERA_INFO,
        CONNECT
    };

public:
    DECLARE_META_INTERFACE(CameraService);

    virtual int32_t         getNumberOfCameras() = 0;
    virtual status_t        getCameraInfo(int cameraId,
                                          struct CameraInfo* cameraInfo) = 0;
    virtual sp<ICamera>     connect(const sp<ICameraClient>& cameraClient,
                                    int cameraId) = 0;
};

// ----------------------------------------------------------------------------

class BnCameraService: public BnInterface<ICameraService>
{
public:
    virtual status_t    onTransact( uint32_t code,
                                    const Parcel& data,
                                    Parcel* reply,
                                    uint32_t flags = 0);
};

}; // namespace android

class BpCameraService: public BpInterface<ICameraService>
{
public:
    BpCameraService(const sp<IBinder>& impl)
        : BpInterface<ICameraService>(impl)
    {
    }

    // get number of cameras available
    virtual int32_t getNumberOfCameras()
    {
        Parcel data, reply;
        data.writeInterfaceToken(ICameraService::getInterfaceDescriptor());
        remote()->transact(BnCameraService::GET_NUMBER_OF_CAMERAS, data, &reply);
        return reply.readInt32();
    }

    // get information about a camera
    virtual status_t getCameraInfo(int cameraId,
                                   struct CameraInfo* cameraInfo) {
        Parcel data, reply;
        data.writeInterfaceToken(ICameraService::getInterfaceDescriptor());
        data.writeInt32(cameraId);
        remote()->transact(BnCameraService::GET_CAMERA_INFO, data, &reply);
        cameraInfo->facing = reply.readInt32();
        cameraInfo->orientation = reply.readInt32();
        return reply.readInt32();
    }

    // connect to camera service
    virtual sp<ICamera> connect(const sp<ICameraClient>& cameraClient, int cameraId)
    {
        Parcel data, reply;
        data.writeInterfaceToken(ICameraService::getInterfaceDescriptor());
        data.writeStrongBinder(cameraClient->asBinder());
        data.writeInt32(cameraId);
        remote()->transact(BnCameraService::CONNECT, data, &reply);
        return interface_cast<ICamera>(reply.readStrongBinder());
    }
};

IMPLEMENT_META_INTERFACE(CameraService, "android.hardware.ICameraService");

// ----------------------------------------------------------------------

status_t BnCameraService::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch(code) {
        case GET_NUMBER_OF_CAMERAS: {
            CHECK_INTERFACE(ICameraService, data, reply);
            reply->writeInt32(getNumberOfCameras());
            return NO_ERROR;
        } break;
        case GET_CAMERA_INFO: {
            CHECK_INTERFACE(ICameraService, data, reply);
            CameraInfo cameraInfo;
            memset(&cameraInfo, 0, sizeof(cameraInfo));
            status_t result = getCameraInfo(data.readInt32(), &cameraInfo);
            reply->writeInt32(cameraInfo.facing);
            reply->writeInt32(cameraInfo.orientation);
            reply->writeInt32(result);
            return NO_ERROR;
        } break;
        case CONNECT: {
            CHECK_INTERFACE(ICameraService, data, reply);
            sp<ICameraClient> cameraClient = interface_cast<ICameraClient>(data.readStrongBinder());
            sp<ICamera> camera = connect(cameraClient, data.readInt32());
            reply->writeStrongBinder(camera->asBinder());
            return NO_ERROR;
        } break;
        default:
            return BBinder::onTransact(code, data, reply, flags);
    }
}

// ----------------------------------------------------------------------------

}; // namespace android
Back to sp<Camera> Camera::connect(int cameraId): locate the getCameraService method:

const sp<ICameraService>& Camera::getCameraService()
{
    Mutex::Autolock _l(mLock);
    if (mCameraService.get() == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16("media.camera"));
            if (binder != 0)
                break;
            LOGW("CameraService not published, waiting...");
            usleep(500000); // 0.5 s
        } while(true);
        if (mDeathNotifier == NULL) {
            mDeathNotifier = new DeathNotifier();
        }
        binder->linkToDeath(mDeathNotifier);
        mCameraService = interface_cast<ICameraService>(binder);
    }
    LOGE_IF(mCameraService==0, "no CameraService!?");
    return mCameraService;
}
Look at mCameraService = interface_cast<ICameraService>(binder): mCameraService has type ICameraService, or more concretely BpCameraService, since that is the class that implements the ICameraService methods on the client side.

To summarize the Binder usage above (considering only how Binder is used, without digging into its underlying implementation), the basic steps are:
1. Define the inter-process interface, here ICameraService;
2. Implement it in BnCameraService and BpCameraService, which also derive from BnInterface and BpInterface respectively;
3. The server registers its Binder with the ServiceManager; the client obtains the Binder from the ServiceManager;
4. Two-way inter-process communication is then possible.
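These steps can be illustrated with a minimal plain-Java sketch of the same proxy/stub split (hypothetical names; real Binder marshals arguments through Parcel and the kernel driver rather than making a direct method call):

```java
import java.util.HashMap;
import java.util.Map;

// Step 1: the shared interface, playing the role of ICameraService.
interface IMyService {
    int getNumberOfCameras();
}

// Step 2a: "Bn" side, implements the real functionality and handles transactions.
class BnMyService implements IMyService {
    static final int GET_NUMBER_OF_CAMERAS = 1;

    public int getNumberOfCameras() { return 2; }  // the actual implementation

    // onTransact: decode the code, call the real method, return the reply.
    int onTransact(int code) {
        if (code == GET_NUMBER_OF_CAMERAS) return getNumberOfCameras();
        throw new IllegalArgumentException("unknown code " + code);
    }
}

// Step 2b: "Bp" side, same interface, but forwards each call as a transaction.
class BpMyService implements IMyService {
    private final BnMyService mRemote;  // stands in for the Binder channel
    BpMyService(BnMyService remote) { mRemote = remote; }

    public int getNumberOfCameras() {
        return mRemote.onTransact(BnMyService.GET_NUMBER_OF_CAMERAS);
    }
}

// Step 3: ServiceManager analogue, a name-to-service registry.
class MyServiceManager {
    private static final Map<String, BnMyService> sServices = new HashMap<>();
    static void addService(String name, BnMyService s) { sServices.put(name, s); }
    static IMyService getService(String name) {
        return new BpMyService(sServices.get(name));  // hand out a proxy
    }
}
```

A server would call MyServiceManager.addService("media.camera", new BnMyService()); a client would then call getService("media.camera").getNumberOfCameras() and receive the answer through the proxy, never touching the implementation class directly.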

After getCameraService returns the ICameraService reference, its connect method is called to obtain an ICamera reference:


c->mCamera = cs->connect(c, cameraId);


Stepping into the connect method, this is its concrete implementation in BpCameraService:


virtual sp<ICamera> connect(const sp<ICameraClient>& cameraClient, int cameraId)
{
    Parcel data, reply;
    data.writeInterfaceToken(ICameraService::getInterfaceDescriptor());
    data.writeStrongBinder(cameraClient->asBinder());
    data.writeInt32(cameraId);
    remote()->transact(BnCameraService::CONNECT, data, &reply);
    return interface_cast<ICamera>(reply.readStrongBinder());
}
The ICamera object returned here is actually a BpCamera object, obtained through an anonymous Binder. Obtaining the CameraService earlier used a named Binder, which must be looked up through the ServiceManager, whereas an anonymous Binder is obtained over an already established communication channel (the named Binder). That covers the Camera framework itself; the methods that actually implement camera behavior belong to the ICamera interface, defined as follows:


class ICamera: public IInterface
{
public:
    DECLARE_META_INTERFACE(Camera);

    virtual void            disconnect() = 0;

    // connect new client with existing camera remote
    virtual status_t        connect(const sp<ICameraClient>& client) = 0;

    // prevent other processes from using this ICamera interface
    virtual status_t        lock() = 0;

    // allow other processes to use this ICamera interface
    virtual status_t        unlock() = 0;

    // pass the buffered Surface to the camera service
    virtual status_t        setPreviewDisplay(const sp<Surface>& surface) = 0;

    // pass the buffered ISurfaceTexture to the camera service
    virtual status_t        setPreviewTexture(
            const sp<ISurfaceTexture>& surfaceTexture) = 0;

    // set the preview callback flag to affect how the received frames from
    // preview are handled.
    virtual void            setPreviewCallbackFlag(int flag) = 0;

    // start preview mode, must call setPreviewDisplay first
    virtual status_t        startPreview() = 0;

    // stop preview mode
    virtual void            stopPreview() = 0;

    // get preview state
    virtual bool            previewEnabled() = 0;

    // start recording mode
    virtual status_t        startRecording() = 0;

    // stop recording mode
    virtual void            stopRecording() = 0;

    // get recording state
    virtual bool            recordingEnabled() = 0;

    // release a recording frame
    virtual void            releaseRecordingFrame(const sp<IMemory>& mem) = 0;

    // auto focus
    virtual status_t        autoFocus() = 0;

    // cancel auto focus
    virtual status_t        cancelAutoFocus() = 0;

    /*
     * take a picture.
     *
     * @param msgType the message type an application selectively turn on/off
     * on a photo-by-photo basis. The supported message types are:
     * CAMERA_MSG_SHUTTER, CAMERA_MSG_RAW_IMAGE, CAMERA_MSG_COMPRESSED_IMAGE,
     * and CAMERA_MSG_POSTVIEW_FRAME. Any other message types will be ignored.
     */
    virtual status_t        takePicture(int msgType) = 0;

    // set preview/capture parameters - key/value pairs
    virtual status_t        setParameters(const String8& params) = 0;

    // get preview/capture parameters - key/value pairs
    virtual String8         getParameters() const = 0;

    // send command to camera driver
    virtual status_t        sendCommand(int32_t cmd, int32_t arg1, int32_t arg2) = 0;

    // tell the camera hal to store meta data or real YUV data in video buffers.
    virtual status_t        storeMetaDataInBuffers(bool enabled) = 0;
};
ICamera has two subclasses, BnCamera and BpCamera, the two ends of the Binder channel: BpCamera provides the client-side call interface, while BnCamera wraps the concrete implementation. BnCamera does not actually implement the ICamera methods itself either; they are implemented in its subclass CameraService::Client, which in turn calls into the hardware abstraction layer to actually carry out the camera operations.
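The named-vs-anonymous Binder distinction used above can also be sketched in plain Java (hypothetical names; real Binder passes object references through Parcel and the kernel driver): only the first reference goes through the name registry, later ones ride over the channel it opened.

```java
import java.util.HashMap;
import java.util.Map;

// The secondary interface, playing the role of ICamera.
interface IDevice {
    boolean startPreview();
}

// The primary service, playing the role of ICameraService: its connect()
// hands back a reference to another remote object (the anonymous binder).
class DeviceService {
    IDevice connect(int id) {
        return new IDevice() {               // reference returned in the reply,
            public boolean startPreview() {  // never registered under a name
                return true;
            }
        };
    }
}

// Name registry, playing the role of the ServiceManager: only the
// primary service is published here under a well-known name.
class Registry {
    private static final Map<String, DeviceService> sServices = new HashMap<>();
    static void addService(String name, DeviceService s) { sServices.put(name, s); }
    static DeviceService getService(String name) { return sServices.get(name); }
}
```

The client resolves only "media.camera" by name; the IDevice it then uses never appears in the registry, just as BpCamera is created from the strong binder read out of the connect reply.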
