Audio "short circuit" in a VoIP app using CallKit



I'm using the SpeakerBox app as the basis for my VoIP app. I've managed to get everything working, but I can't seem to get rid of the audio "short circuit" from the microphone to the device's speaker.

In other words, when I make a call I can hear my own voice through the speaker as well as the other person's. How can I change this?

AVAudioSession setup:

    AVAudioSession *sessionInstance = [AVAudioSession sharedInstance];
    NSError *error = nil;
    [sessionInstance setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
    XThrowIfError((OSStatus)error.code, "couldn't set session's audio category");
    [sessionInstance setMode:AVAudioSessionModeVoiceChat error:&error];
    XThrowIfError((OSStatus)error.code, "couldn't set session's audio mode");
    NSTimeInterval bufferDuration = .005;
    [sessionInstance setPreferredIOBufferDuration:bufferDuration error:&error];
    XThrowIfError((OSStatus)error.code, "couldn't set session's I/O buffer duration");
    [sessionInstance setPreferredSampleRate:44100 error:&error];
    XThrowIfError((OSStatus)error.code, "couldn't set session's preferred sample rate");
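
One caveat: the setPreferred… calls above are only requests, not guarantees; the system may grant different values. A minimal sketch (using the same sessionInstance) for reading back what was actually granted once the session is active:

    // Sketch: query the values the system actually granted.
    // The preferred buffer duration and sample rate above are hints only.
    NSLog(@"granted sample rate: %.0f Hz, IO buffer duration: %.1f ms",
          sessionInstance.sampleRate,
          sessionInstance.IOBufferDuration * 1000.0);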

IO unit setup:

    - (void)setupIOUnit
    {
        try {
            // Create a new instance of Apple Voice Processing IO
            AudioComponentDescription desc;
            desc.componentType = kAudioUnitType_Output;
            desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
            desc.componentManufacturer = kAudioUnitManufacturer_Apple;
            desc.componentFlags = 0;
            desc.componentFlagsMask = 0;

            AudioComponent comp = AudioComponentFindNext(NULL, &desc);
            XThrowIfError(AudioComponentInstanceNew(comp, &_rioUnit), "couldn't create a new instance of Apple Voice Processing IO");

            //  Enable input and output on Apple Voice Processing IO
            //  Input is enabled on the input scope of the input element
            //  Output is enabled on the output scope of the output element
            UInt32 one = 1;
            XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &one, sizeof(one)), "could not enable input on Apple Voice Processing IO");
            XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &one, sizeof(one)), "could not enable output on Apple Voice Processing IO");

            // Explicitly set the input and output client formats
            // sample rate = 44100, num channels = 1, format = 32 bit floating point
            CAStreamBasicDescription ioFormat = CAStreamBasicDescription(44100, 1, CAStreamBasicDescription::kPCMFormatFloat32, false);
            XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &ioFormat, sizeof(ioFormat)), "couldn't set the input client format on Apple Voice Processing IO");
            XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &ioFormat, sizeof(ioFormat)), "couldn't set the output client format on Apple Voice Processing IO");

            // Set the MaximumFramesPerSlice property. This property is used to describe to an audio unit the maximum number
            // of samples it will be asked to produce on any single given call to AudioUnitRender
            UInt32 maxFramesPerSlice = 4096;
            XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, sizeof(UInt32)), "couldn't set max frames per slice on Apple Voice Processing IO");

            // Get the property value back from Apple Voice Processing IO. We are going to use this value to allocate buffers accordingly
            UInt32 propSize = sizeof(UInt32);
            XThrowIfError(AudioUnitGetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, &propSize), "couldn't get max frames per slice on Apple Voice Processing IO");

            // We need references to certain data in the render callback
            // This simple struct is used to hold that information
            cd.rioUnit = _rioUnit;
            cd.muteAudio = &_muteAudio;
            cd.audioChainIsBeingReconstructed = &_audioChainIsBeingReconstructed;

            // Set the render callback on Apple Voice Processing IO
            AURenderCallbackStruct renderCallback;
            renderCallback.inputProc = performRender;
            renderCallback.inputProcRefCon = NULL;
            XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &renderCallback, sizeof(renderCallback)), "couldn't set render callback on Apple Voice Processing IO");

            // Initialize the Apple Voice Processing IO instance
            XThrowIfError(AudioUnitInitialize(_rioUnit), "couldn't initialize Apple Voice Processing IO instance");
        }
        catch (CAXException &e) {
            NSLog(@"Error returned from setupIOUnit: %d: %s", (int)e.mError, e.mOperation);
        }
        catch (...) {
            NSLog(@"Unknown error returned from setupIOUnit");
        }

        return;
    }
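
The performRender callback registered above isn't shown. In Apple's SpeakerBox/aurioTouch samples it pulls the captured microphone samples and hands them straight back as the output, i.e. a deliberate mic-to-speaker pass-through. A sketch along the lines of the sample (assuming the same global cd struct filled in above):

    // Sketch of a SpeakerBox-style render callback (assumes the sample's design)
    static OSStatus performRender(void                       *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp       *inTimeStamp,
                                  UInt32                      inBusNumber,
                                  UInt32                      inNumberFrames,
                                  AudioBufferList            *ioData)
    {
        OSStatus err = noErr;
        if (*cd.audioChainIsBeingReconstructed == NO) {
            // Pull the microphone samples from the input element (1) straight into
            // the output buffers -- this pass-through is the audible "short circuit"
            err = AudioUnitRender(cd.rioUnit, ioActionFlags, inTimeStamp, 1,
                                  inNumberFrames, ioData);
            // Silence the output if muted
            if (*cd.muteAudio) {
                for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i)
                    memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
            }
        }
        return err;
    }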

Starting the IO unit:

    NSError *error = nil;
    [[AVAudioSession sharedInstance] setActive:YES error:&error];
    if (nil != error) NSLog(@"AVAudioSession set active (TRUE) failed with error: %@", error);

    OSStatus err = AudioOutputUnitStart(_rioUnit);
    if (err) NSLog(@"couldn't start Apple Voice Processing IO: %d", (int)err);
    return err;
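
One thing to be aware of with CallKit: the system, not the app, activates the audio session for a call, so starting the unit is normally deferred to the provider delegate's didActivate callback rather than calling setActive:YES yourself. An Objective-C sketch of that hook (startAudio is assumed to wrap the start code above):

    // Sketch: CXProviderDelegate hands you the already-activated session
    // (requires #import <CallKit/CallKit.h>)
    - (void)provider:(CXProvider *)provider
        didActivateAudioSession:(AVAudioSession *)audioSession
    {
        // Don't call -setActive:YES here; CallKit has already activated the session
        [self.audioController startAudio]; // assumed wrapper around AudioOutputUnitStart
    }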

Stopping the IO unit:

    NSError *error = nil;
    [[AVAudioSession sharedInstance] setActive:NO withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&error];
    if (nil != error) NSLog(@"AVAudioSession set active (FALSE) failed with error: %@", error);

    OSStatus err = AudioOutputUnitStop(_rioUnit);
    if (err) NSLog(@"couldn't stop Apple Voice Processing IO: %d", (int)err);
    return err;
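
(A possible refinement, not from the original post: deactivating the session while the unit is still running can fail with a "session is busy" error, so stopping in the reverse order is arguably safer:)

    // Sketch: stop the IO unit first, then deactivate the session
    OSStatus err = AudioOutputUnitStop(_rioUnit);
    if (err) NSLog(@"couldn't stop Apple Voice Processing IO: %d", (int)err);

    NSError *error = nil;
    [[AVAudioSession sharedInstance] setActive:NO
                                   withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation
                                         error:&error];
    if (nil != error) NSLog(@"AVAudioSession set active (FALSE) failed with error: %@", error);
    return err;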

I'm using PJSIP as my SIP stack against an Asterisk server. The problem must be on the client side, because we also have an Android-based PJSIP implementation that doesn't have this issue.

I ran into the same problem using WebRTC. I eventually concluded that you shouldn't set up the IO unit in AudioController.mm at all, but leave that to PJSIP (WebRTC in my case).

The quick fix is the following: comment out [self setupIOUnit]; in setupAudioChain in AudioController.mm, and startAudio() in didActivate audioSession in ProviderDelegate.swift.
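
Concretely, the two changes would look something like this (a sketch; setupAudioChain as in the SpeakerBox sample, with the Swift side shown as a comment):

    // AudioController.mm -- stop creating the IO unit yourself:
    - (void)setupAudioChain
    {
        [self setupAudioSession];
        // [self setupIOUnit];   // commented out: PJSIP/WebRTC owns the IO unit
    }

    // ProviderDelegate.swift -- in provider(_:didActivate:):
    //     // startAudio()       // commented out for the same reason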
