Android's architecture is divided into three layers:

  • At the bottom, the Linux kernel.
  • A middle layer implemented mostly in C++ (some 60% of the Android source code is C++).
  • An application layer of programs written mainly in Java.

  An application's execution path is roughly as follows: a Java application issues an operation (say, playing or stopping music), which crosses into the middle layer via JNI and executes C++ code. After the middle layer has processed the request, an operation that requires the hardware to act is passed on down to the Linux kernel, which invokes the appropriate driver to carry out the action or do further processing; an operation that does not involve the hardware may be handled entirely in the middle layer and return directly.
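
To make the Java-to-native hop concrete, here is a minimal, hypothetical JNI bridge; the class and method names are illustrative and not from the Android sources:

#include <jni.h>

// Hypothetical native method backing a Java declaration such as
//     private native void nativePlay();
// in an imaginary com.example.Player class. Middle-layer C++ code runs
// here; if the hardware needs to act, the call continues down into the
// kernel through a driver (e.g. via ioctl()).
extern "C" JNIEXPORT void JNICALL
Java_com_example_Player_nativePlay(JNIEnv* env, jobject thiz) {
    // ... middle-layer processing ...
}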

  One point to understand here: Android uses only the Linux kernel. Even commonly used libraries such as pthread were reimplemented by Android itself in C/C++/assembly.

  Because establishing the audio path involves Android IPC and the management of system services, here is a brief overview of those two topics first:

  ① Android IPC uses a client/server structure: the client (AudioRecord) calls methods on server-side objects (AudioFlinger, AudioFlinger::RecordThread, and so on) through an interface (IAudioRecord) and obtains the results. AudioRecord.cpp mainly implements the AudioRecord class, and AudioFlinger.cpp mainly implements the AudioFlinger class. In the low-level audio communication, AudioRecord can be regarded as the Android IPC client and AudioFlinger as the server. Once AudioRecord has obtained the server-side interface (mAudioRecord), it can call the server's (AudioFlinger's) methods as if they were its own.

  ② When Android boots, it creates a service manager process, and every service in the system must be registered with it. You can obtain the service manager interface via sp<IServiceManager> sm = defaultServiceManager(), and then register a service through its addService method: sm->addService(String16("media.audio_flinger"), new AudioFlinger()); Only a service that has been registered with the service manager can be used by other processes:

sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder = sm->getService(String16("media.audio_flinger"));
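
To call methods on the service, the raw binder is then cast to the strongly typed interface. A minimal sketch of this client-side step (the same pattern appears in AudioSystem.cpp later in this article):

// Turn the raw binder into the IAudioFlinger interface; from here the
// client can invoke AudioFlinger methods as if they were local calls.
sp<IAudioFlinger> audioFlinger = interface_cast<IAudioFlinger>(binder);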

When the Android audio system starts up, it creates two services: the AudioFlinger service from the example above, and the AudioPolicy service. Both are registered with the service manager, after which other processes can use the methods they provide.

Below, AudioFlingerService is abbreviated as AudioFlinger, and AudioPolicyService as AudioPolicy.

Core flow:

AudioSystem::getInput(…) -> aps->getInput(…) -> AudioPolicyService::getInput(…) -> mpPolicyManager->getInput(…) -> <AudioPolicyService> mpClientInterface->openInput(…) -> AudioFlinger::openInput(…)

Recording Flow Analysis

Application-layer recording

  The main job of the AudioRecord class is to let Java applications manage audio resources, so that they can record the sound collected by the platform's sound input hardware. This is done by "pulling" (synchronously reading) the sound data out of the AudioRecord object. During recording, all the application needs to do is fetch the AudioRecord object's recorded data in time through a read method. The AudioRecord class provides three methods for obtaining sound data: read(byte[], int, int), read(short[], int, int), and read(ByteBuffer, int). Whichever one you choose, the storage format that suits the caller's sound data must be set up in advance.

  When recording starts, an AudioRecord needs to initialize an associated sound buffer to hold the new sound data. The size of this buffer can be specified during object construction; it determines how long an AudioRecord object can record before its sound data has been read (synchronized), i.e. the amount of sound that can be captured at a time. Sound data is read out of the audio hardware in chunks no larger than the whole recording buffer (it can be read out over multiple calls); that is, each read fetches at most the buffer capacity set at initialization. In general, a simple implementation of recording goes like this:

  1. Create a data stream.
  2. Construct an AudioRecord object. The minimum recording buffer size it requires can be obtained via getMinBufferSize; if the buffer capacity is too small, object construction fails.
  3. Initialize a buffer that is at least as large as the buffer the AudioRecord object writes sound data into.
  4. Start recording.
  5. Read sound data from the AudioRecord into the buffer, and write the buffer's contents into the data stream.
  6. Stop recording.
  7. Close the data stream.

Example:

// Create a DataOutputStream to write the audio data into the saved file.
OutputStream os = new FileOutputStream(file);
BufferedOutputStream bos = new BufferedOutputStream(os);
DataOutputStream dos = new DataOutputStream(bos);

// Create a new AudioRecord object to record the audio.
int frequency = 11025;
int channelConfiguration = AudioFormat.CHANNEL_IN_MONO;
int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;
int bufferSize = AudioRecord.getMinBufferSize(frequency, channelConfiguration, audioEncoding);
AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
        frequency, channelConfiguration, audioEncoding, bufferSize);

short[] buffer = new short[bufferSize];
audioRecord.startRecording();
isRecording = true;
while (isRecording) {
    int bufferReadResult = audioRecord.read(buffer, 0, bufferSize);
    for (int i = 0; i < bufferReadResult; i++) {
        dos.writeShort(buffer[i]);
    }
}
audioRecord.stop();
dos.close();

1. getMinBufferSize

  The getMinBufferSize function was introduced earlier, so we won't go over it again. From the source you can see that its implementation calls the JNI function native_get_min_buff_size, which enters android_media_AudioRecord_get_min_buff_size in frameworks/base/core/jni/android_media_AudioRecord.cpp.

  The mapping from native_get_min_buff_size to android_media_AudioRecord_get_min_buff_size can be seen in the JNI method table in android_media_AudioRecord.cpp:

static JNINativeMethod gMethods[] = {
    // name,               signature,  funcPtr
    {"native_start",         "(II)I",    (void *)android_media_AudioRecord_start},
    {"native_stop",          "()V",      (void *)android_media_AudioRecord_stop},
    {"native_setup",         "(Ljava/lang/Object;IIIII[I)I", (void *)android_media_AudioRecord_setup},
    {"native_finalize",      "()V",      (void *)android_media_AudioRecord_finalize},
    {"native_release",       "()V",      (void *)android_media_AudioRecord_release},
    {"native_read_in_byte_array",   "([BII)I", (void *)android_media_AudioRecord_readInByteArray},
    {"native_read_in_short_array",  "([SII)I", (void *)android_media_AudioRecord_readInShortArray},
    {"native_read_in_direct_buffer","(Ljava/lang/Object;I)I", (void *)android_media_AudioRecord_readInDirectBuffer},
    {"native_set_marker_pos","(I)I",     (void *)android_media_AudioRecord_set_marker_pos},
    {"native_get_marker_pos","()I",      (void *)android_media_AudioRecord_get_marker_pos},
    {"native_set_pos_update_period", "(I)I", (void *)android_media_AudioRecord_set_pos_update_period},
    {"native_get_pos_update_period", "()I",  (void *)android_media_AudioRecord_get_pos_update_period},
    {"native_get_min_buff_size", "(III)I",   (void *)android_media_AudioRecord_get_min_buff_size},
};
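
For reference, a table like this only takes effect once it has been registered with the VM; in AOSP this is done through AndroidRuntime::registerNativeMethods, which wraps the standard JNI RegisterNatives call. A simplified sketch of that mechanism (error handling trimmed):

// Bind the gMethods table to android.media.AudioRecord so that the Java
// native_* declarations resolve to the C++ functions listed above.
static int register_android_media_AudioRecord(JNIEnv *env)
{
    jclass clazz = env->FindClass("android/media/AudioRecord");
    if (clazz == NULL) {
        return -1;
    }
    return env->RegisterNatives(clazz, gMethods,
                                sizeof(gMethods) / sizeof(gMethods[0]));
}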

  The code of android_media_AudioRecord_get_min_buff_size is as follows:

// ----------------------------------------------------------------------------
// returns the minimum required size for the successful creation of an AudioRecord instance.
// returns 0 if the parameter combination is not supported.
// return -1 if there was an error querying the buffer size.
static jint android_media_AudioRecord_get_min_buff_size(JNIEnv *env, jobject thiz,
    jint sampleRateInHertz, jint nbChannels, jint audioFormat) {

    ALOGV(">> android_media_AudioRecord_get_min_buff_size(%d, %d, %d)",
            sampleRateInHertz, nbChannels, audioFormat);

    size_t frameCount = 0;
    // frameCount is returned through the pointer argument.
    status_t result = AudioRecord::getMinFrameCount(&frameCount,
            sampleRateInHertz,
            (audioFormat == ENCODING_PCM_16BIT ?
                AUDIO_FORMAT_PCM_16_BIT : AUDIO_FORMAT_PCM_8_BIT),
            audio_channel_in_mask_from_count(nbChannels));

    if (result == BAD_VALUE) {
        return 0;
    }
    if (result != NO_ERROR) {
        return -1;
    }
    return frameCount * nbChannels * (audioFormat == ENCODING_PCM_16BIT ? 2 : 1);
}

  The minimum buffer size is computed from the minimum frame count. The most common unit in audio is the frame: one frame is the byte size of one sample point multiplied by the number of channels. Why introduce frames at all? Because for multi-channel audio the byte size of a single sample point doesn't tell the whole story: on playback, every channel's data has to be played out. So for convenience we talk about how many frames there are per second, which expresses the complete picture independently of the channel count. Once getMinBufferSize returns, we have a buffer size that satisfies the minimum requirement, which gives the user a basis for allocating buffers.
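
As a worked example of the return expression above (the frame count is illustrative, not a value from the sources):

// 44100 Hz, stereo, 16-bit PCM; assume getMinFrameCount() reported
// a minimum of 1024 frames (an illustrative value).
size_t frameCount     = 1024;  // from AudioRecord::getMinFrameCount()
int    nbChannels     = 2;     // stereo
int    bytesPerSample = 2;     // ENCODING_PCM_16BIT
size_t minBuffSize    = frameCount * nbChannels * bytesPerSample;  // 4096 bytes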

2. new AudioRecord

public AudioRecord(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat,
        int bufferSizeInBytes) throws IllegalArgumentException {
    mRecordingState = RECORDSTATE_STOPPED;

    // remember which looper is associated with the AudioRecord instantiation
    // Grab the current thread's Looper, falling back to the main Looper.
    // (See the separate write-up on Looper.)
    if ((mInitializationLooper = Looper.myLooper()) == null) {
        mInitializationLooper = Looper.getMainLooper();
    }

    audioParamCheck(audioSource, sampleRateInHz, channelConfig, audioFormat);
    audioBuffSizeCheck(bufferSizeInBytes);

    // native initialization
    int[] session = new int[1];
    session[0] = 0;
    //TODO: update native initialization when information about hardware init failure
    //      due to capture device already open is available.
    // Call the native layer's native_setup, passing in a WeakReference to this object.
    int initResult = native_setup(new WeakReference<AudioRecord>(this),
            mRecordSource, mSampleRate, mChannelMask, mAudioFormat,
            mNativeBufferSizeInBytes, session);
    if (initResult != SUCCESS) {
        loge("Error code " + initResult + " when initializing native AudioRecord object.");
        return; // with mState == STATE_UNINITIALIZED
    }
    mSessionId = session[0];
    mState = STATE_INITIALIZED;
}

  This implementation calls native_setup and thereby enters android_media_AudioRecord_setup in frameworks/base/core/jni/android_media_AudioRecord.cpp:

static int android_media_AudioRecord_setup(JNIEnv *env, jobject thiz, jobject weak_this,
        jint source, jint sampleRateInHertz, jint channelMask,
                // Java channel masks map directly to the native definition
        jint audioFormat, jint buffSizeInBytes, jintArray jSession)
{
    //ALOGV(">> Entering android_media_AudioRecord_setup");
    //ALOGV("sampleRate=%d, audioFormat=%d, channel mask=%x, buffSizeInBytes=%d",
    //     sampleRateInHertz, audioFormat, channelMask, buffSizeInBytes);

    if (!audio_is_input_channel(channelMask)) {
        ALOGE("Error creating AudioRecord: channel mask %#x is not valid.", channelMask);
        return AUDIORECORD_ERROR_SETUP_INVALIDCHANNELMASK;
    }
    // popcount counts how many bits of an integer are set to 1.
    uint32_t nbChannels = popcount(channelMask);

    // compare the format against the Java constants
    if ((audioFormat != ENCODING_PCM_16BIT) && (audioFormat != ENCODING_PCM_8BIT)) {
        ALOGE("Error creating AudioRecord: unsupported audio format.");
        return AUDIORECORD_ERROR_SETUP_INVALIDFORMAT;
    }

    int bytesPerSample = audioFormat == ENCODING_PCM_16BIT ? 2 : 1;
    audio_format_t format = audioFormat == ENCODING_PCM_16BIT ?
            AUDIO_FORMAT_PCM_16_BIT : AUDIO_FORMAT_PCM_8_BIT;

    if (buffSizeInBytes == 0) {
        ALOGE("Error creating AudioRecord: frameCount is 0.");
        return AUDIORECORD_ERROR_SETUP_ZEROFRAMECOUNT;
    }
    int frameSize = nbChannels * bytesPerSample;
    size_t frameCount = buffSizeInBytes / frameSize;

    if ((uint32_t(source) >= AUDIO_SOURCE_CNT) && (uint32_t(source) != AUDIO_SOURCE_HOTWORD)) {
        ALOGE("Error creating AudioRecord: unknown source.");
        return AUDIORECORD_ERROR_SETUP_INVALIDSOURCE;
    }

    jclass clazz = env->GetObjectClass(thiz);
    if (clazz == NULL) {
        ALOGE("Can't find %s when setting up callback.", kClassPathName);
        return AUDIORECORD_ERROR_SETUP_NATIVEINITFAILED;
    }

    if (jSession == NULL) {
        ALOGE("Error creating AudioRecord: invalid session ID pointer");
        return AUDIORECORD_ERROR;
    }

    jint* nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
    if (nSession == NULL) {
        ALOGE("Error creating AudioRecord: Error retrieving session id pointer");
        return AUDIORECORD_ERROR;
    }
    int sessionId = nSession[0];
    env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
    nSession = NULL;

    // create an uninitialized AudioRecord object
    sp<AudioRecord> lpRecorder = new AudioRecord();

    // create the callback information:
    // this data will be passed with every AudioRecord callback
    audiorecord_callback_cookie *lpCallbackData = new audiorecord_callback_cookie;
    lpCallbackData->audioRecord_class = (jclass)env->NewGlobalRef(clazz);
    // we use a weak reference so the AudioRecord object can be garbage collected.
    lpCallbackData->audioRecord_ref = env->NewGlobalRef(weak_this);
    lpCallbackData->busy = false;

    lpRecorder->set((audio_source_t) source,
        sampleRateInHertz,
        format,           // word length, PCM
        channelMask,
        frameCount,
        recorderCallback, // callback_t
        lpCallbackData,   // void* user
        0,                // notificationFrames,
        true,             // threadCanCallJava
        sessionId);

    if (lpRecorder->initCheck() != NO_ERROR) {
        ALOGE("Error creating AudioRecord instance: initialization check failed.");
        goto native_init_failure;
    }

    nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
    if (nSession == NULL) {
        ALOGE("Error creating AudioRecord: Error retrieving session id pointer");
        goto native_init_failure;
    }
    // read the audio session ID back from AudioRecord in case a new session was created during set()
    nSession[0] = lpRecorder->getSessionId();
    env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
    nSession = NULL;

    {   // scope for the lock
        Mutex::Autolock l(sLock);
        sAudioRecordCallBackCookies.add(lpCallbackData);
    }

    // save our newly created C++ AudioRecord in the "nativeRecorderInJavaObj" field of the Java object
    // Save the just-created AudioRecord in the Java object; it is fetched again later via getAudioRecord().
    setAudioRecord(env, thiz, lpRecorder);

    // save our newly created callback information in the "nativeCallbackCookie" field
    // of the Java object (in mNativeCallbackCookie) so we can free the memory in finalize()
    env->SetIntField(thiz, javaAudioRecordFields.nativeCallbackCookie, (int)lpCallbackData);

    return AUDIORECORD_SUCCESS;

    // failure:
native_init_failure:
    env->DeleteGlobalRef(lpCallbackData->audioRecord_class);
    env->DeleteGlobalRef(lpCallbackData->audioRecord_ref);
    delete lpCallbackData;
    env->SetIntField(thiz, javaAudioRecordFields.nativeCallbackCookie, 0);

    return AUDIORECORD_ERROR_SETUP_NATIVEINITFAILED;
}

The key call is lpRecorder->set; tracing its implementation:

status_t AudioRecord::set(
        audio_source_t inputSource,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        int frameCountInt,
        callback_t cbf,
        void* user,
        int notificationFrames,
        bool threadCanCallJava,
        int sessionId,
        transfer_type transferType,
        audio_input_flags_t flags)
{
    switch (transferType) {
    case TRANSFER_DEFAULT:
        if (cbf == NULL || threadCanCallJava) {
            transferType = TRANSFER_SYNC;
        } else {
            transferType = TRANSFER_CALLBACK;
        }
        break;
    case TRANSFER_CALLBACK:
        if (cbf == NULL) {
            ALOGE("Transfer type TRANSFER_CALLBACK but cbf == NULL");
            return BAD_VALUE;
        }
        break;
    case TRANSFER_OBTAIN:
    case TRANSFER_SYNC:
        break;
    default:
        ALOGE("Invalid transfer type %d", transferType);
        return BAD_VALUE;
    }
    mTransfer = transferType;

    // FIXME "int" here is legacy and will be replaced by size_t later
    if (frameCountInt < 0) {
        ALOGE("Invalid frame count %d", frameCountInt);
        return BAD_VALUE;
    }
    size_t frameCount = frameCountInt;

    ALOGV("set(): sampleRate %u, channelMask %#x, frameCount %u", sampleRate, channelMask,
            frameCount);

    AutoMutex lock(mLock);

    if (mAudioRecord != 0) {
        ALOGE("Track already in use");
        return INVALID_OPERATION;
    }

    if (inputSource == AUDIO_SOURCE_DEFAULT) {
        inputSource = AUDIO_SOURCE_MIC;
    }
    mInputSource = inputSource;

    if (sampleRate == 0) {
        ALOGE("Invalid sample rate %u", sampleRate);
        return BAD_VALUE;
    }
    mSampleRate = sampleRate;

    // these below should probably come from the audioFlinger too...
    if (format == AUDIO_FORMAT_DEFAULT) {
        format = AUDIO_FORMAT_PCM_16_BIT;
    }

    // validate parameters
    if (!audio_is_valid_format(format)) {
        ALOGE("Invalid format %d", format);
        return BAD_VALUE;
    }
    // Temporary restriction: AudioFlinger currently supports 16-bit PCM only
    if (format != AUDIO_FORMAT_PCM_16_BIT) {
        ALOGE("Format %d is not supported", format);
        return BAD_VALUE;
    }
    mFormat = format;

    if (!audio_is_input_channel(channelMask)) {
        ALOGE("Invalid channel mask %#x", channelMask);
        return BAD_VALUE;
    }
    mChannelMask = channelMask;
    uint32_t channelCount = popcount(channelMask);
    mChannelCount = channelCount;

    // Assumes audio_is_linear_pcm(format), else sizeof(uint8_t)
    mFrameSize = channelCount * audio_bytes_per_sample(format);

    // validate framecount
    size_t minFrameCount = 0;
    status_t status = AudioRecord::getMinFrameCount(&minFrameCount,
            sampleRate, format, channelMask);
    if (status != NO_ERROR) {
        ALOGE("getMinFrameCount() failed; status %d", status);
        return status;
    }
    ALOGV("AudioRecord::set() minFrameCount = %d", minFrameCount);

    if (frameCount == 0) {
        frameCount = minFrameCount;
    } else if (frameCount < minFrameCount) {
        ALOGE("frameCount %u < minFrameCount %u", frameCount, minFrameCount);
        return BAD_VALUE;
    }
    mFrameCount = frameCount;

    mNotificationFramesReq = notificationFrames;
    mNotificationFramesAct = 0;

    if (sessionId == 0 ) {
        mSessionId = AudioSystem::newAudioSessionId();
    } else {
        mSessionId = sessionId;
    }
    ALOGV("set(): mSessionId %d", mSessionId);

    mFlags = flags;

    // create the IAudioRecord
    status = openRecord_l(0 /*epoch*/);
    if (status) {
        return status;
    }

    if (cbf != NULL) {
        mAudioRecordThread = new AudioRecordThread(*this, threadCanCallJava);
        mAudioRecordThread->run("AudioRecord", ANDROID_PRIORITY_AUDIO);
    }

    mStatus = NO_ERROR;

    // Update buffer size in case it has been limited by AudioFlinger during track creation
    mFrameCount = mCblk->frameCount_;

    mActive = false;
    mCbf = cbf;
    mRefreshRemaining = true;
    mUserData = user;
    // TODO: add audio hardware input latency here
    mLatency = (1000*mFrameCount) / sampleRate;
    mMarkerPosition = 0;
    mMarkerReached = false;
    mNewPosition = 0;
    mUpdatePeriod = 0;
    AudioSystem::acquireAudioSessionId(mSessionId);
    mSequence = 1;
    mObservedSequence = mSequence;
    mInOverrun = false;

    return NO_ERROR;
}
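
For orientation, a hypothetical native-side caller of this API could look like the sketch below. It mirrors the set() signature above, relying on its trailing default arguments; exact signatures vary between Android versions, so treat this as a sketch rather than a drop-in snippet:

// Hypothetical native client of AudioRecord (not from the sources).
sp<AudioRecord> recorder = new AudioRecord();
status_t err = recorder->set(AUDIO_SOURCE_MIC,
                             44100,                    // sample rate
                             AUDIO_FORMAT_PCM_16_BIT,
                             AUDIO_CHANNEL_IN_MONO,
                             0,                        // frameCount: 0 -> use the minimum
                             NULL,                     // cbf: no callback -> TRANSFER_SYNC
                             NULL);                    // user data for the callback
if (err == NO_ERROR) {
    recorder->start();
    char pcm[4096];
    ssize_t bytesRead = recorder->read(pcm, sizeof(pcm)); // blocking read
    recorder->stop();
}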

Tracing openRecord_l:

// must be called with mLock held
status_t AudioRecord::openRecord_l(size_t epoch)
{
    status_t status;
    const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
    if (audioFlinger == 0) {
        ALOGE("Could not get audioflinger");
        return NO_INIT;
    }

    IAudioFlinger::track_flags_t trackFlags = IAudioFlinger::TRACK_DEFAULT;
    pid_t tid = -1;

    // Client can only express a preference for FAST.  Server will perform additional tests.
    // The only supported use case for FAST is callback transfer mode.
    if (mFlags & AUDIO_INPUT_FLAG_FAST) {
        if ((mTransfer != TRANSFER_CALLBACK) || (mAudioRecordThread == 0)) {
            ALOGW("AUDIO_INPUT_FLAG_FAST denied by client");
            // once denied, do not request again if IAudioRecord is re-created
            mFlags = (audio_input_flags_t) (mFlags & ~AUDIO_INPUT_FLAG_FAST);
        } else {
            trackFlags |= IAudioFlinger::TRACK_FAST;
            tid = mAudioRecordThread->getTid();
        }
    }

    mNotificationFramesAct = mNotificationFramesReq;
    if (!(mFlags & AUDIO_INPUT_FLAG_FAST)) {
        // Make sure that application is notified with sufficient margin before overrun
        if (mNotificationFramesAct == 0 || mNotificationFramesAct > mFrameCount/2) {
            mNotificationFramesAct = mFrameCount/2;
        }
    }

    audio_io_handle_t input = AudioSystem::getInput(mInputSource, mSampleRate, mFormat,
            mChannelMask, mSessionId);
    if (input == 0) {
        ALOGE("Could not get audio input for record source %d", mInputSource);
        return BAD_VALUE;
    }

    int originalSessionId = mSessionId;
    sp<IAudioRecord> record = audioFlinger->openRecord(input, mSampleRate, mFormat,
            mChannelMask, mFrameCount, &trackFlags, tid, &mSessionId, &status);
    ALOGE_IF(originalSessionId != 0 && mSessionId != originalSessionId,
            "session ID changed from %d to %d", originalSessionId, mSessionId);

    if (record == 0 || status != NO_ERROR) {
        ALOGE("AudioFlinger could not create record track, status: %d", status);
        AudioSystem::releaseInput(input);
        return status;
    }

    sp<IMemory> iMem = record->getCblk();
    if (iMem == 0) {
        ALOGE("Could not get control block");
        return NO_INIT;
    }
    void *iMemPointer = iMem->pointer();
    if (iMemPointer == NULL) {
        ALOGE("Could not get control block pointer");
        return NO_INIT;
    }

    if (mAudioRecord != 0) {
        mAudioRecord->asBinder()->unlinkToDeath(mDeathNotifier, this);
        mDeathNotifier.clear();
    }

    mInput = input;
    mAudioRecord = record;
    mCblkMemory = iMem;
    audio_track_cblk_t* cblk = static_cast<audio_track_cblk_t*>(iMemPointer);
    mCblk = cblk;

    // FIXME missing fast track frameCount logic
    mAwaitBoost = false;
    if (mFlags & AUDIO_INPUT_FLAG_FAST) {
        if (trackFlags & IAudioFlinger::TRACK_FAST) {
            ALOGV("AUDIO_INPUT_FLAG_FAST successful; frameCount %u", mFrameCount);
            mAwaitBoost = true;
            // double-buffering is not required for fast tracks, due to tighter scheduling
            if (mNotificationFramesAct == 0 || mNotificationFramesAct > mFrameCount) {
                mNotificationFramesAct = mFrameCount;
            }
        } else {
            ALOGV("AUDIO_INPUT_FLAG_FAST denied by server; frameCount %u", mFrameCount);
            // once denied, do not request again if IAudioRecord is re-created
            mFlags = (audio_input_flags_t) (mFlags & ~AUDIO_INPUT_FLAG_FAST);
            if (mNotificationFramesAct == 0 || mNotificationFramesAct > mFrameCount/2) {
                mNotificationFramesAct = mFrameCount/2;
            }
        }
    }

    // starting address of buffers in shared memory
    void *buffers = (char*)cblk + sizeof(audio_track_cblk_t);

    // update proxy
    mProxy = new AudioRecordClientProxy(cblk, buffers, mFrameCount, mFrameSize);
    mProxy->setEpoch(epoch);
    mProxy->setMinimum(mNotificationFramesAct);

    mDeathNotifier = new DeathNotifier(this);
    mAudioRecord->asBinder()->linkToDeath(mDeathNotifier, this);

    return NO_ERROR;
}

Tracing AudioSystem::getInput:

audio_io_handle_t AudioSystem::getInput(audio_source_t inputSource,
                                    uint32_t samplingRate,
                                    audio_format_t format,
                                    audio_channel_mask_t channelMask,
                                    int sessionId)
{
    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
    if (aps == 0) return 0;
    return aps->getInput(inputSource, samplingRate, format, channelMask, sessionId);
}

The relevant parts of AudioSystem.cpp:

// client singleton for AudioPolicyService binder interface
sp<IAudioPolicyService> AudioSystem::gAudioPolicyService;
sp<AudioSystem::AudioPolicyServiceClient> AudioSystem::gAudioPolicyServiceClient;

// establish binder interface to AudioPolicy service
const sp<IAudioPolicyService>& AudioSystem::get_audio_policy_service()
{
    gLock.lock();
    if (gAudioPolicyService == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16("media.audio_policy"));
            if (binder != 0)
                break;
            ALOGW("AudioPolicyService not published, waiting...");
            usleep(500000); // 0.5 s
        } while (true);
        if (gAudioPolicyServiceClient == NULL) {
            gAudioPolicyServiceClient = new AudioPolicyServiceClient();
        }
        binder->linkToDeath(gAudioPolicyServiceClient);
        gAudioPolicyService = interface_cast<IAudioPolicyService>(binder);
        gLock.unlock();
    } else {
        gLock.unlock();
    }
    return gAudioPolicyService;
}

// establish binder interface to AudioFlinger service
const sp<IAudioFlinger>& AudioSystem::get_audio_flinger()
{
    Mutex::Autolock _l(gLock);
    if (gAudioFlinger == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16("media.audio_flinger"));
            if (binder != 0)
                break;
            ALOGW("AudioFlinger not published, waiting...");
            usleep(500000); // 0.5 s
        } while (true);
        if (gAudioFlingerClient == NULL) {
            gAudioFlingerClient = new AudioFlingerClient();
        } else {
            if (gAudioErrorCallback) {
                gAudioErrorCallback(NO_ERROR);
            }
        }
        binder->linkToDeath(gAudioFlingerClient);
        gAudioFlinger = interface_cast<IAudioFlinger>(binder);
        gAudioFlinger->registerClient(gAudioFlingerClient);
    }
    ALOGE_IF(gAudioFlinger==0, "no AudioFlinger!?");
    return gAudioFlinger;
}

3. startRecording

startRecording -> native_start -> android_media_AudioRecord_start -> lpRecorder->start():

static int android_media_AudioRecord_start(JNIEnv *env, jobject thiz, jint event, jint triggerSession)
{
    sp<AudioRecord> lpRecorder = getAudioRecord(env, thiz);
    if (lpRecorder == NULL ) {
        jniThrowException(env, "java/lang/IllegalStateException", NULL);
        return AUDIORECORD_ERROR;
    }

    return android_media_translateRecorderErrorCode(
            lpRecorder->start((AudioSystem::sync_event_t)event, triggerSession));
}

AudioRecord::start:

status_t AudioRecord::start(AudioSystem::sync_event_t event, int triggerSession)
{
    ALOGV("start, sync event %d trigger session %d", event, triggerSession);

    AutoMutex lock(mLock);
    if (mActive) {
        return NO_ERROR;
    }

    // reset current position as seen by client to 0
    mProxy->setEpoch(mProxy->getEpoch() - mProxy->getPosition());

    mNewPosition = mProxy->getPosition() + mUpdatePeriod;
    int32_t flags = android_atomic_acquire_load(&mCblk->mFlags);

    status_t status = NO_ERROR;
    if (!(flags & CBLK_INVALID)) {
        ALOGV("mAudioRecord->start()");
        status = mAudioRecord->start(event, triggerSession);
        if (status == DEAD_OBJECT) {
            flags |= CBLK_INVALID;
        }
    }
    if (flags & CBLK_INVALID) {
        status = restoreRecord_l("start");
    }

    if (status != NO_ERROR) {
        ALOGE("start() status %d", status);
    } else {
        mActive = true;
        sp<AudioRecordThread> t = mAudioRecordThread;
        if (t != 0) {
            t->resume();
        } else {
            mPreviousPriority = getpriority(PRIO_PROCESS, 0);
            get_sched_policy(0, &mPreviousSchedulingGroup);
            androidSetThreadPriority(0, ANDROID_PRIORITY_AUDIO);
        }
    }

    return status;
}

4. read

read -> native_read_in_byte_array -> android_media_AudioRecord_readInByteArray (the short-array and direct-buffer variants are analogous):

static jint android_media_AudioRecord_readInByteArray(JNIEnv *env, jobject thiz,
                                                        jbyteArray javaAudioData,
                                                        jint offsetInBytes, jint sizeInBytes) {
    jbyte* recordBuff = NULL;
    // get the audio recorder from which we'll read new audio samples
    sp<AudioRecord> lpRecorder = getAudioRecord(env, thiz);
    if (lpRecorder == NULL) {
        ALOGE("Unable to retrieve AudioRecord object, can't record");
        return 0;
    }

    if (!javaAudioData) {
        ALOGE("Invalid Java array to store recorded audio, can't record");
        return 0;
    }

    // get the pointer to where we'll record the audio
    // NOTE: We may use GetPrimitiveArrayCritical() when the JNI implementation changes in such
    // a way that it becomes much more efficient. When doing so, we will have to prevent the
    // AudioSystem callback to be called while in critical section (in case of media server
    // process crash for instance)
    recordBuff = (jbyte *)env->GetByteArrayElements(javaAudioData, NULL);
    if (recordBuff == NULL) {
        ALOGE("Error retrieving destination for recorded audio data, can't record");
        return 0;
    }

    // read the new audio data from the native AudioRecord object
    ssize_t recorderBuffSize = lpRecorder->frameCount()*lpRecorder->frameSize();
    ssize_t readSize = lpRecorder->read(recordBuff + offsetInBytes,
                                        sizeInBytes > (jint)recorderBuffSize ?
                                            (jint)recorderBuffSize : sizeInBytes );
    env->ReleaseByteArrayElements(javaAudioData, recordBuff, 0);

    if (readSize < 0) {
        readSize = AUDIORECORD_ERROR_INVALID_OPERATION;
    }
    return (jint) readSize;
}

5. stop

stop -> native_stop -> android_media_AudioRecord_stop -> lpRecorder->stop():

static void android_media_AudioRecord_stop(JNIEnv *env, jobject thiz)
{
    sp<AudioRecord> lpRecorder = getAudioRecord(env, thiz);
    if (lpRecorder == NULL ) {
        jniThrowException(env, "java/lang/IllegalStateException", NULL);
        return;
    }

    lpRecorder->stop();
    //ALOGV("Called lpRecorder->stop()");
}
