Audio recording with an Audio Queue: buffers are never filled



I'm using AudioQueueStart to begin recording on an iOS device, and I want all of the recorded data streamed to me in buffers so that I can process it and send it to a server.

The basic functionality works fine, but my BufferFilled function usually receives less than 10 bytes of data per call. That feels very inefficient, especially since I try to set the buffer size to 16384 bytes (see the beginning of the startRecording method).

How can I get it to fill the buffers more before calling BufferFilled? Or do I need a second stage of buffering before sending to the server to achieve what I want?

// Write callback passed to AudioFileInitializeWithCallbacks: called every
// time the audio file API has encoded bytes to hand over.
OSStatus BufferFilled(void *aqData, SInt64 inPosition, UInt32 requestCount, const void *inBuffer, UInt32 *actualCount) {
    AQRecorderState *pAqData = (AQRecorderState *)aqData;
    NSData *audioData = [NSData dataWithBytes:inBuffer length:requestCount];
    // Report how many bytes were actually consumed (all of them).
    *actualCount = requestCount;
    // audioData is usually < 10 bytes, sometimes 100 bytes, but never close to 16384 bytes

    return noErr;
}
// Audio queue input callback: a buffer of freshly recorded, encoded packets
// has arrived. Write it to the "file" (which forwards the bytes to
// BufferFilled) and re-enqueue the buffer so the queue can reuse it.
void HandleInputBuffer(void *aqData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer, const AudioTimeStamp *inStartTime, UInt32 inNumPackets, const AudioStreamPacketDescription *inPacketDesc) {
    AQRecorderState *pAqData = (AQRecorderState *)aqData;
    if (inNumPackets == 0 && pAqData->mDataFormat.mBytesPerPacket != 0)
        inNumPackets = inBuffer->mAudioDataByteSize / pAqData->mDataFormat.mBytesPerPacket;
    if (AudioFileWritePackets(pAqData->mAudioFile, false, inBuffer->mAudioDataByteSize, inPacketDesc, pAqData->mCurrentPacket, &inNumPackets, inBuffer->mAudioData) == noErr) {
        pAqData->mCurrentPacket += inNumPackets;
    }
    if (pAqData->mIsRunning == 0)
        return;
    AudioQueueEnqueueBuffer(pAqData->mQueue, inBuffer, 0, NULL);
}

void DeriveBufferSize(AudioQueueRef audioQueue, AudioStreamBasicDescription *ASBDescription, Float64 seconds, UInt32 *outBufferSize) {
    static const int maxBufferSize = 0x50000;
    int maxPacketSize = ASBDescription->mBytesPerPacket;
    if (maxPacketSize == 0) {
        UInt32 maxVBRPacketSize = sizeof(maxPacketSize);
        AudioQueueGetProperty(audioQueue, kAudioQueueProperty_MaximumOutputPacketSize, &maxPacketSize, &maxVBRPacketSize);
    }
    Float64 numBytesForTime = ASBDescription->mSampleRate * maxPacketSize * seconds;
    *outBufferSize = (UInt32)(numBytesForTime < maxBufferSize ? numBytesForTime : maxBufferSize);
}
OSStatus SetMagicCookieForFile(AudioQueueRef inQueue, AudioFileID inFile) {
    OSStatus result = noErr;
    UInt32 cookieSize;
    if (AudioQueueGetPropertySize(inQueue, kAudioQueueProperty_MagicCookie, &cookieSize) == noErr) {
        char *magicCookie = (char *)malloc(cookieSize);
        if (AudioQueueGetProperty(inQueue, kAudioQueueProperty_MagicCookie, magicCookie, &cookieSize) == noErr)
            result = AudioFileSetProperty(inFile, kAudioFilePropertyMagicCookieData, cookieSize, magicCookie);
        free(magicCookie);
    }
    return result;
}

- (void)startRecording {
    aqData.mDataFormat.mFormatID         = kAudioFormatMPEG4AAC;
    aqData.mDataFormat.mSampleRate       = 22050.0;
    aqData.mDataFormat.mChannelsPerFrame = 1;
    aqData.mDataFormat.mBitsPerChannel   = 0;
    aqData.mDataFormat.mBytesPerPacket   = 0;
    aqData.mDataFormat.mBytesPerFrame    = 0;
    aqData.mDataFormat.mFramesPerPacket  = 1024;
    aqData.mDataFormat.mFormatFlags      = kMPEG4Object_AAC_Main;
    AudioFileTypeID fileType             = kAudioFileAAC_ADTSType;
    aqData.bufferByteSize = 16384;

    UInt32 defaultToSpeaker = TRUE;
    AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker, sizeof(defaultToSpeaker), &defaultToSpeaker);
    OSStatus status = AudioQueueNewInput(&aqData.mDataFormat, HandleInputBuffer, &aqData, NULL, kCFRunLoopCommonModes, 0, &aqData.mQueue);
    UInt32 dataFormatSize = sizeof(aqData.mDataFormat);
    status = AudioQueueGetProperty(aqData.mQueue, kAudioQueueProperty_StreamDescription, &aqData.mDataFormat, &dataFormatSize);
    // Route everything written to the "file" through the BufferFilled callback.
    status = AudioFileInitializeWithCallbacks(&aqData, nil, BufferFilled, nil, nil, fileType, &aqData.mDataFormat, 0, &aqData.mAudioFile);
    for (int i = 0; i < kNumberBuffers; ++i) {
        status = AudioQueueAllocateBuffer(aqData.mQueue, aqData.bufferByteSize, &aqData.mBuffers[i]);
        status = AudioQueueEnqueueBuffer(aqData.mQueue, aqData.mBuffers[i], 0, NULL);
    }
    aqData.mCurrentPacket = 0;
    aqData.mIsRunning = true;
    status = AudioQueueStart(aqData.mQueue, NULL);
}

Update: I've logged the data I'm receiving, and it's interesting: almost half of the "packets" are some kind of header, and the other half are sound data. Can I assume that's just how AAC encoding works on iOS? It writes a header in one buffer, then the data in the next, and so on. And it never wants more than 170-180 bytes per data chunk, which is why it ignores my large buffers?
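That pattern would be consistent with ADTS framing (the code above requests kAudioFileAAC_ADTSType): ADTS prepends a small fixed-size header to every AAC frame, so the write callback naturally sees short header writes alternating with frame payloads. A minimal sketch of the 7-byte ADTS header (MPEG-4, no CRC), with helper names (`make_adts_header`, `adts_frame_length`) invented here for illustration:

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Build the 7-byte ADTS header that precedes one AAC frame.
   aot: audio object type (2 = AAC LC); sf_index: sampling frequency index
   (7 = 22050 Hz); channels: channel configuration (1 = mono). */
static void make_adts_header(uint8_t h[7], int aot, int sf_index,
                             int channels, size_t payload_len)
{
    size_t frame_len = payload_len + 7;  /* the 13-bit length includes the header */
    h[0] = 0xFF;                          /* syncword 1111 1111 ...              */
    h[1] = 0xF1;                          /* ... 1111, MPEG-4, layer 0, no CRC   */
    h[2] = (uint8_t)(((aot - 1) << 6) | (sf_index << 2) | (channels >> 2));
    h[3] = (uint8_t)(((channels & 3) << 6) | (frame_len >> 11));
    h[4] = (uint8_t)((frame_len >> 3) & 0xFF);
    h[5] = (uint8_t)(((frame_len & 7) << 5) | 0x1F); /* buffer fullness bits set */
    h[6] = 0xFC;
}

/* Decode the 13-bit frame length back out of a header. */
static size_t adts_frame_length(const uint8_t h[7])
{
    return ((size_t)(h[3] & 0x03) << 11) | ((size_t)h[4] << 3) | (h[5] >> 5);
}
```

So a ~170-byte payload becomes a 7-byte header plus the frame data; the chunk sizes seen in the callback track the encoder's per-frame output, not the queue's buffer size.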

I solved this in the end. It turns out that yes, the encoder on iOS produces chunks of data both large and small. I added a second-stage buffer myself using NSMutableData, and it works great.
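The second-stage buffer can be sketched in plain C (the NSMutableData version is the same idea): append every chunk the write callback hands over, and only send once at least a threshold's worth of bytes has accumulated. The names here (`ChunkAccumulator`, `acc_*`) are invented for illustration and are not from the original code.

```c
#include <stdlib.h>
#include <string.h>
#include <stddef.h>
#include <assert.h>

/* Grows as chunks arrive; the caller drains it once the threshold is reached. */
typedef struct {
    unsigned char *data;
    size_t len;
    size_t cap;
    size_t threshold;   /* e.g. 16384: the size we want to send in one go */
} ChunkAccumulator;

static void acc_init(ChunkAccumulator *a, size_t threshold) {
    a->data = NULL; a->len = 0; a->cap = 0; a->threshold = threshold;
}

/* Append one chunk from the write callback, growing the backing store. */
static void acc_append(ChunkAccumulator *a, const void *bytes, size_t n) {
    if (a->len + n > a->cap) {
        size_t cap = a->cap ? a->cap : 4096;
        while (cap < a->len + n) cap *= 2;
        a->data = realloc(a->data, cap);
        a->cap = cap;
    }
    memcpy(a->data + a->len, bytes, n);
    a->len += n;
}

/* Nonzero once enough data has accumulated to be worth sending. */
static int acc_ready(const ChunkAccumulator *a) {
    return a->len >= a->threshold;
}

/* After sending a->data / a->len, reset for the next batch. */
static void acc_reset(ChunkAccumulator *a) {
    a->len = 0;
}
```

In BufferFilled, the equivalent of `acc_append` runs on every call, and the network send only happens when `acc_ready` reports true, so the server sees ~16 KB batches regardless of how small the encoder's individual chunks are.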
