Many people online have already published thorough analyses of the Android audio subsystem; these are purely my own study notes. This article is based on Android ICS.

AudioTrack usage example: Google's source tree already provides test code for AudioTrack, at the following path:

frameworks/base/media/tests/MediaFrameworkTest/src/com/android/mediaframeworktest/functional/audio/MediaAudioTrackTest.java

The full source is long, so I have pulled a few API calls out of this test code to get a feel for how AudioTrack is used, which will make the later analysis easier to follow.

Let's start with testSetStereoVolumeMax. As the name suggests, it sets the volume; the implementation is shown below:

// Test case 1: setStereoVolume() with max volume returns SUCCESS
@LargeTest
public void testSetStereoVolumeMax() throws Exception {
    // constants for test
    final String TEST_NAME = "testSetStereoVolumeMax";
    final int TEST_SR = 22050;
    final int TEST_CONF = AudioFormat.CHANNEL_OUT_STEREO;
    final int TEST_FORMAT = AudioFormat.ENCODING_PCM_16BIT;
    final int TEST_MODE = AudioTrack.MODE_STREAM;
    final int TEST_STREAM_TYPE = AudioManager.STREAM_MUSIC;

    //-------- initialization --------------
    int minBuffSize = AudioTrack.getMinBufferSize(TEST_SR, TEST_CONF, TEST_FORMAT);
    AudioTrack track = new AudioTrack(TEST_STREAM_TYPE, TEST_SR, TEST_CONF, TEST_FORMAT,
            minBuffSize, TEST_MODE);
    byte data[] = new byte[minBuffSize/2];

    //-------- test --------------
    track.write(data, 0, data.length);
    track.write(data, 0, data.length);
    track.play();
    float maxVol = AudioTrack.getMaxVolume();
    assertTrue(TEST_NAME, track.setStereoVolume(maxVol, maxVol) == AudioTrack.SUCCESS);

    //-------- tear down --------------
    track.release();
}


From the code above we can see that the first thing to do when using an AudioTrack is to initialize it; overall the usage breaks down into three steps:

1) Call getMinBufferSize() to get the minimum buffer size required for the successful creation of an AudioTrack.

2) Create an AudioTrack instance with that buffer size.

3) Use the AudioTrack APIs to drive playback, e.g. track.write(data, 0, data.length) pushes data toward the hardware and track.play() starts playback. (A condensed sketch of these three steps follows below.)
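Putting the three steps together into a minimal sketch (assuming the same 22050 Hz / stereo / 16-bit parameters as the test above and the usual android.media imports; pcmData is only a placeholder for real samples):

// Minimal STREAM-mode usage sketch; parameter values mirror the test above.
int minBufSize = AudioTrack.getMinBufferSize(22050,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);   // step 1

AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 22050,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        minBufSize, AudioTrack.MODE_STREAM);                               // step 2

byte[] pcmData = new byte[minBufSize];          // placeholder PCM data
track.write(pcmData, 0, pcmData.length);        // step 3: feed data
track.play();                                   //         and start playback
track.release();                                // free the native resources when done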

With those goals in mind, let's turn to the implementation of the AudioTrack class: frameworks/base/media/java/android/media/AudioTrack.java

How is getMinBufferSize actually implemented? Before reading its source we need a basic understanding of its three parameters:
@param sampleRateInHz: the sample rate
------------ the human ear hears roughly 20 Hz to 20 kHz

@param channelConfig: the channel configuration
------------ CHANNEL_OUT_MONO
------------ CHANNEL_CONFIGURATION_MONO
------------ CHANNEL_OUT_STEREO
------------ CHANNEL_CONFIGURATION_STEREO

@param audioFormat: the audio data format, i.e. how many bytes each sample occupies
------------ ENCODING_PCM_16BIT /* two bytes per sample */
------------ ENCODING_PCM_8BIT /* one byte per sample */

The details are in the source:

static public int getMinBufferSize(int sampleRateInHz, int channelConfig, int audioFormat) {
    int channelCount = 0;
    switch(channelConfig) {
    case AudioFormat.CHANNEL_OUT_MONO:
    case AudioFormat.CHANNEL_CONFIGURATION_MONO:
        channelCount = 1;
        break;
    case AudioFormat.CHANNEL_OUT_STEREO:
    case AudioFormat.CHANNEL_CONFIGURATION_STEREO: /* stereo */
        channelCount = 2;
        break;
    default:
        loge("getMinBufferSize(): Invalid channel configuration.");
        return AudioTrack.ERROR_BAD_VALUE;
    }

    /* how many bytes per sample */
    if ((audioFormat != AudioFormat.ENCODING_PCM_16BIT)
        && (audioFormat != AudioFormat.ENCODING_PCM_8BIT)) {
        loge("getMinBufferSize(): Invalid audio format.");
        return AudioTrack.ERROR_BAD_VALUE;
    }

    if ( (sampleRateInHz < 4000) || (sampleRateInHz > 48000) ) {
        loge("getMinBufferSize(): " + sampleRateInHz + "Hz is not a supported sample rate.");
        return AudioTrack.ERROR_BAD_VALUE;
    }

    int size = native_get_min_buff_size(sampleRateInHz, channelCount, audioFormat);
    if ((size == -1) || (size == 0)) {
        loge("getMinBufferSize(): error querying hardware");
        return AudioTrack.ERROR;
    }
    else {
        return size;
    }
}

Where does it end up? It drops down into JNI: after validating its three parameters, getMinBufferSize calls the native method native_get_min_buff_size through JNI to ask for the actual minimum buffer size.
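On the Java side that native method is declared with a plain native signature, roughly like this (a sketch; the parameter names are illustrative, but the three-ints-in / one-int-out shape matches the "(III)I" JNI signature in the method table below):

// Sketch of the Java-side native declaration (names illustrative).
private native static int native_get_min_buff_size(
        int sampleRateInHz, int channelCount, int audioFormat);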

Now for the implementation of that native method: native_get_min_buff_size lives in frameworks/base/core/jni/android_media_AudioTrack.cpp.

static JNINativeMethod gMethods[] = {
    // name,                      signature,  funcPtr
    {"native_get_min_buff_size",  "(III)I",   (void *)android_media_AudioTrack_get_min_buff_size},
};

From this method table we can locate the native function android_media_AudioTrack_get_min_buff_size; its code is shown below:

/*
 * returns the minimum required size for the successful creation of a streaming AudioTrack
 * returns -1 if there was an error querying the hardware.
 */
static jint android_media_AudioTrack_get_min_buff_size(JNIEnv *env, jobject thiz,
    jint sampleRateInHertz, jint nbChannels, jint audioFormat) {

    int frameCount = 0;
    if (AudioTrack::getMinFrameCount(&frameCount, AUDIO_STREAM_DEFAULT,
            sampleRateInHertz) != NO_ERROR) {
        return -1;
    }
    return frameCount * nbChannels * (audioFormat == javaAudioTrackFields.PCM16 ? 2 : 1);
}

This native code gives us the formula for the minimum buffer size, and it introduces another important variable, frameCount. We don't know yet exactly what a frame is, but that doesn't stop us from reading the formula: bufferSize = frameCount * channel count (1 for mono, 2 for stereo) * bytes per sample (2 bytes for 16-bit PCM, 1 byte for 8-bit PCM).
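As a quick sanity check of that formula with assumed numbers (frameCount itself is derived further below; the value used here is purely illustrative):

// Illustrative arithmetic only; the frameCount value is assumed, not measured.
int frameCount     = 2048;   // assumed result of getMinFrameCount()
int channelCount   = 2;      // CHANNEL_OUT_STEREO
int bytesPerSample = 2;      // ENCODING_PCM_16BIT
int bufferSize     = frameCount * channelCount * bytesPerSample;   // 8192 bytes

In other words, the minimum buffer is simply enough room for frameCount complete frames.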

So let's trace where frameCount comes from; literally it means "frame count". The next stop is the implementation of

status_t AudioTrack::getMinFrameCount(
        int* frameCount,
        int streamType,
        uint32_t sampleRate)

but before digging into this method, a short detour: the audio stream types in Android (Audio Stream Type) are defined in system/core/include/system/audio.h:

/*
 * Audio stream types
 */
typedef enum {
    AUDIO_STREAM_DEFAULT          = -1,
    AUDIO_STREAM_VOICE_CALL       = 0,  /* phone-call audio */
    AUDIO_STREAM_SYSTEM           = 1,  /* system sounds */
    AUDIO_STREAM_RING             = 2,  /* ringtone */
    AUDIO_STREAM_MUSIC            = 3,  /* music */
    AUDIO_STREAM_ALARM            = 4,  /* alarms */
    AUDIO_STREAM_NOTIFICATION     = 5,  /* notifications */
    AUDIO_STREAM_BLUETOOTH_SCO    = 6,  /* bluetooth SCO */
    AUDIO_STREAM_ENFORCED_AUDIBLE = 7,  /* Sounds that cannot be muted by user and must be routed to speaker */
    AUDIO_STREAM_DTMF             = 8,
    AUDIO_STREAM_TTS              = 9,
    AUDIO_STREAM_CNT,
    AUDIO_STREAM_MAX              = AUDIO_STREAM_CNT - 1,
} audio_stream_type_t;
This parameter is tied to AudioManager. For example, when a call comes in while you are listening to music, the music stops and you hear the incoming-call sound; pressing the volume keys at that moment only adjusts the call volume. For an AudioTrack, the stream type tells the system what kind of audio this track carries, so that the system can classify and manage the streams properly.
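A minimal sketch of what this means from the application side (same 22050 Hz / stereo / 16-bit parameters as before; the two tracks below differ only in their stream type):

// Two tracks that differ only in stream type; the system manages their
// volume and routing separately because of this parameter.
int minBuf = AudioTrack.getMinBufferSize(22050,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);

AudioTrack music = new AudioTrack(AudioManager.STREAM_MUSIC, 22050,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        minBuf, AudioTrack.MODE_STREAM);

AudioTrack ring = new AudioTrack(AudioManager.STREAM_RING, 22050,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        minBuf, AudioTrack.MODE_STREAM);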

Now back to the getMinFrameCount source:

// static
status_t AudioTrack::getMinFrameCount(
        int* frameCount,
        int streamType,
        uint32_t sampleRate)
{
    int afSampleRate;
    if (AudioSystem::getOutputSamplingRate(&afSampleRate, streamType) != NO_ERROR) {
        return NO_INIT;
    }
    int afFrameCount;
    if (AudioSystem::getOutputFrameCount(&afFrameCount, streamType) != NO_ERROR) {
        return NO_INIT;
    }
    uint32_t afLatency;
    if (AudioSystem::getOutputLatency(&afLatency, streamType) != NO_ERROR) {
        return NO_INIT;
    }

    // Ensure that buffer depth covers at least audio hardware latency
    uint32_t minBufCount = afLatency / ((1000 * afFrameCount) / afSampleRate);
    if (minBufCount < 2) minBufCount = 2;

    *frameCount = (sampleRate == 0) ? afFrameCount * minBufCount :
              afFrameCount * minBufCount * sampleRate / afSampleRate;
    return NO_ERROR;
}

Reading this code honestly feels daunting: layer after layer of calls with no light at the end of the tunnel. Google keeps updating the source, and we developers just have to keep digging.

getMinFrameCount() finally produces the frameCount we have been chasing, returned through its output pointer. The implementation works like this: based on the streamType it calls a few AudioSystem APIs to query three properties of the current audio output: afSampleRate, afFrameCount and afLatency (roughly: the output's sample rate, the size of one hardware buffer in frames, and the output latency in milliseconds; my own understanding of these is still a little fuzzy, so corrections are welcome). From these it computes the minimum number of buffers (minBufCount) needed to cover the hardware latency, and from that the frame count. One more definition is worth spelling out: one audio frame = bytes per sample * number of channels, i.e. one sample across all channels. With frameCount in hand we can go back and compute the minimum bufferSize needed to create an AudioTrack, which gives the application a sound basis for its minimum buffer allocation.
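Plugging assumed values into that arithmetic shows how the pieces fit together (all the numbers below are illustrative, not measured from a real device):

// Illustrative walk-through of the getMinFrameCount() arithmetic with assumed values.
int afSampleRate = 44100;   // assumed output sample rate (Hz)
int afFrameCount = 1024;    // assumed frames per hardware buffer
int afLatency    = 96;      // assumed output latency (ms)
int sampleRate   = 22050;   // the rate our track asked for

// one hardware buffer lasts (1000 * 1024) / 44100 ≈ 23 ms,
// so 96 / 23 = 4 buffers are needed to cover the latency
int minBufCount = afLatency / ((1000 * afFrameCount) / afSampleRate);
if (minBufCount < 2) minBufCount = 2;

// scale by our sample rate relative to the output's:
// 1024 * 4 * 22050 / 44100 = 2048 frames
int frameCount = afFrameCount * minBufCount * sampleRate / afSampleRate;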

That completes the first goal. Next up is the second: how an AudioTrack is created.

Let's go back to where we started, the testSetStereoVolumeMax method in MediaAudioTrackTest.java, and look at what new AudioTrack(...) actually does.

The AudioTrack constructor prototype:

/**
 * Class constructor.
 * @param streamType the type of the audio stream. See
 *   {@link AudioManager#STREAM_VOICE_CALL}, {@link AudioManager#STREAM_SYSTEM},
 *   {@link AudioManager#STREAM_RING}, {@link AudioManager#STREAM_MUSIC} and
 *   {@link AudioManager#STREAM_ALARM}
 * @param sampleRateInHz the sample rate expressed in Hertz. Examples of rates are (but
 *   not limited to) 44100, 22050 and 11025.
 * @param channelConfig describes the configuration of the audio channels.
 *   See {@link AudioFormat#CHANNEL_OUT_MONO} and
 *   {@link AudioFormat#CHANNEL_OUT_STEREO}
 * @param audioFormat the format in which the audio data is represented.
 *   See {@link AudioFormat#ENCODING_PCM_16BIT} and
 *   {@link AudioFormat#ENCODING_PCM_8BIT}
 * @param bufferSizeInBytes the total size (in bytes) of the buffer where audio data is read
 *   from for playback. If using the AudioTrack in streaming mode, you can write data into
 *   this buffer in smaller chunks than this size. If using the AudioTrack in static mode,
 *   this is the maximum size of the sound that will be played for this instance.
 *   See {@link #getMinBufferSize(int, int, int)} to determine the minimum required buffer size
 *   for the successful creation of an AudioTrack instance in streaming mode. Using values
 *   smaller than getMinBufferSize() will result in an initialization failure.
 * @param mode streaming or static buffer. See {@link #MODE_STATIC} and {@link #MODE_STREAM}
 * @throws java.lang.IllegalArgumentException
 */
public AudioTrack(int streamType, int sampleRateInHz, int channelConfig, int audioFormat,
        int bufferSizeInBytes, int mode)
throws IllegalArgumentException {
    this(streamType, sampleRateInHz, channelConfig, audioFormat,
            bufferSizeInBytes, mode, 0);
}

The Javadoc above is quite thorough. This constructor simply delegates to another AudioTrack constructor, shown below:

/**
 * returns an AudioTrack
 */
public AudioTrack(int streamType, int sampleRateInHz, int channelConfig, int audioFormat,
        int bufferSizeInBytes, int mode, int sessionId)
throws IllegalArgumentException {
    mState = STATE_UNINITIALIZED;

    /* remember the Looper of the creating thread, falling back to the main Looper */
    if ((mInitializationLooper = Looper.myLooper()) == null) {
        mInitializationLooper = Looper.getMainLooper();
    }

    /*
     * Validate the parameters we passed in above:
     * streamType        = AudioManager.STREAM_MUSIC          music stream
     * sampleRateInHz    = 22050                               sample rate, 22050 Hz
     * channelConfig     = CHANNEL_OUT_STEREO                  stereo
     * audioFormat       = AudioFormat.ENCODING_PCM_16BIT      2 bytes per sample
     * mode              = MODE_STREAM
     * bufferSizeInBytes = the minimum buffer size we computed earlier
     */
    audioParamCheck(streamType, sampleRateInHz, channelConfig, audioFormat, mode);
    audioBuffSizeCheck(bufferSizeInBytes);

    if (sessionId < 0) {
        throw (new IllegalArgumentException("Invalid audio session ID: "+sessionId));
    }

    ......

    /* call the native method with the parameters described above */
    int initResult = native_setup(new WeakReference(this),
            mStreamType, mSampleRate, mChannels, mAudioFormat,
            mNativeBufferSizeInBytes, mDataLoadMode, session);
    if (initResult != SUCCESS) {
        loge("Error code "+initResult+" when initializing AudioTrack.");
        return; // with mState == STATE_UNINITIALIZED
    }

    ......
}

The AudioTrack constructor ends up calling the native_setup native method implemented in frameworks/base/core/jni/android_media_AudioTrack.cpp, where native_setup is mapped to android_media_AudioTrack_native_setup:

/**
 * gMethods
 */
static JNINativeMethod gMethods[] = {
    // name,           signature,                        funcPtr
    {"native_start",   "()V",                            (void *)android_media_AudioTrack_start},
    {"native_stop",    "()V",                            (void *)android_media_AudioTrack_stop},
    {"native_pause",   "()V",                            (void *)android_media_AudioTrack_pause},
    {"native_flush",   "()V",                            (void *)android_media_AudioTrack_flush},
    {"native_setup",   "(Ljava/lang/Object;IIIIII[I)I",  (void *)android_media_AudioTrack_native_setup},
};
Finally, android_media_AudioTrack_native_setup performs a long series of checks and setup steps and returns AUDIOTRACK_SUCCESS, meaning the AudioTrack has been constructed successfully:
/*
 * android_media_AudioTrack_native_setup
 */
static int
android_media_AudioTrack_native_setup(JNIEnv *env, jobject thiz, jobject weak_this,
        jint streamType, jint sampleRateInHertz, jint javaChannelMask,
        jint audioFormat, jint buffSizeInBytes, jint memoryMode, jintArray jSession)
{
    ......

    // compute the frame count
    int bytesPerSample = audioFormat == javaAudioTrackFields.PCM16 ? 2 : 1;
    int format = audioFormat == javaAudioTrackFields.PCM16 ?
            AUDIO_FORMAT_PCM_16_BIT : AUDIO_FORMAT_PCM_8_BIT;
    int frameCount = buffSizeInBytes / (nbChannels * bytesPerSample);

    AudioTrackJniStorage* lpJniStorage = new AudioTrackJniStorage();

    // initialize the callback information:
    // this data will be passed with every AudioTrack callback
    jclass clazz = env->GetObjectClass(thiz);
    if (clazz == NULL) {
        LOGE("Can't find %s when setting up callback.", kClassPathName);
        delete lpJniStorage;
        return AUDIOTRACK_ERROR_SETUP_NATIVEINITFAILED;
    }
    lpJniStorage->mCallbackData.audioTrack_class = (jclass)env->NewGlobalRef(clazz);
    // we use a weak reference so the AudioTrack object can be garbage collected.
    lpJniStorage->mCallbackData.audioTrack_ref = env->NewGlobalRef(weak_this);

    lpJniStorage->mStreamType = atStreamType;

    if (jSession == NULL) {
        LOGE("Error creating AudioTrack: invalid session ID pointer");
        delete lpJniStorage;
        return AUDIOTRACK_ERROR;
    }

    jint* nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
    if (nSession == NULL) {
        LOGE("Error creating AudioTrack: Error retrieving session id pointer");
        delete lpJniStorage;
        return AUDIOTRACK_ERROR;
    }
    int sessionId = nSession[0];
    env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
    nSession = NULL;

    /*
     * The native AudioTrack is the real playback interface of the audio system:
     * every audio stream corresponds to one AudioTrack instance. Note that this
     * instance is created here in native code, not by the Java layer.
     */
    AudioTrack* lpTrack = new AudioTrack();
    if (lpTrack == NULL) {
        LOGE("Error creating uninitialized AudioTrack");
        goto native_track_failure;
    }

    // initialize the native AudioTrack object
    if (memoryMode == javaAudioTrackFields.MODE_STREAM) {

        lpTrack->set(          /* this call is where AudioFlinger enters the picture */
            atStreamType,      // stream type
            sampleRateInHertz,
            format,            // word length, PCM
            nativeChannelMask,
            frameCount,
            0,                 // flags
            audioCallback, &(lpJniStorage->mCallbackData), // callback, callback data (user)
            0,                 // notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
            0,                 // shared mem
            true,              // thread can call Java
            sessionId);        // audio session ID

    } else if (memoryMode == javaAudioTrackFields.MODE_STATIC) {
        // AudioTrack is using shared memory

        if (!lpJniStorage->allocSharedMem(buffSizeInBytes)) {
            LOGE("Error creating AudioTrack in static mode: error creating mem heap base");
            goto native_init_failure;
        }

        lpTrack->set(
            atStreamType,      // stream type
            sampleRateInHertz,
            format,            // word length, PCM
            nativeChannelMask,
            frameCount,
            0,                 // flags
            audioCallback, &(lpJniStorage->mCallbackData), // callback, callback data (user)
            0,                 // notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
            lpJniStorage->mMemBase, // shared mem
            true,              // thread can call Java
            sessionId);        // audio session ID
    }

    /*
     * Inside set() the native AudioTrack calls createTrack(), and AudioFlinger
     * allocates a block of shared memory based on the frameCount passed in.
     */
    if (lpTrack->initCheck() != NO_ERROR) {
        LOGE("Error initializing AudioTrack");
        goto native_init_failure;
    }

    nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
    if (nSession == NULL) {
        LOGE("Error creating AudioTrack: Error retrieving session id pointer");
        goto native_init_failure;
    }
    // read the audio session ID back from AudioTrack in case we create a new session
    nSession[0] = lpTrack->getSessionId();
    env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
    nSession = NULL;

    // save our newly created C++ AudioTrack in the "nativeTrackInJavaObj" field
    // of the Java object (in mNativeTrackInJavaObj)
    env->SetIntField(thiz, javaAudioTrackFields.nativeTrackInJavaObj, (int)lpTrack);

    // save the JNI resources so we can free them later
    //LOGV("storing lpJniStorage: %x\n", (int)lpJniStorage);
    env->SetIntField(thiz, javaAudioTrackFields.jniData, (int)lpJniStorage);

    /*
     * At this point the native AudioTrack and the Java-level AudioTrack are linked,
     * and audio data is exchanged through the shared memory managed via AudioFlinger.
     */
    return AUDIOTRACK_SUCCESS;

    ......
}
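From the application side, the memoryMode branch above corresponds to the two data-load modes of the Java AudioTrack. A minimal sketch of how they differ in use (the buffer sizes and byte arrays are placeholders, not real audio data):

// Sketch only: minBuffSize comes from getMinBufferSize(); the byte arrays stand in for PCM data.
int minBuffSize = AudioTrack.getMinBufferSize(22050,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
byte[] chunk = new byte[minBuffSize / 2];
byte[] wholeClip = new byte[minBuffSize * 4];

// MODE_STREAM: data is written chunk by chunk, typically after play().
AudioTrack streamTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 22050,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        minBuffSize, AudioTrack.MODE_STREAM);
streamTrack.play();
streamTrack.write(chunk, 0, chunk.length);   // repeated for the duration of playback

// MODE_STATIC: the whole clip is written once, then played; the buffer must
// be large enough to hold the entire sound.
AudioTrack staticTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 22050,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        wholeClip.length, AudioTrack.MODE_STATIC);
staticTrack.write(wholeClip, 0, wholeClip.length);
staticTrack.play();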

Let's leave a little suspense here: the AudioTrack story is not over yet, and the arrival of AudioFlinger will set off a whole new chain of analysis.
