Level Metering with AVAudioEngine



I just watched the WWDC video on AVAudioEngine (Session 502, AVAudioEngine in Practice) and I'm very excited to build an app on top of this technology.

What I haven't been able to figure out is how to do level metering on the microphone input or the mixer output.

Can anyone help? To be clear, I'm talking about monitoring the current input signal (and displaying it in the UI), not the input/output volume setting of a channel/track.

I know you can do this with AVAudioRecorder, but that is not an AVAudioNode, which is what AVAudioEngine works with.
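For reference, the kind of AVAudioRecorder metering I mean looks roughly like this (a minimal Swift sketch; the file URL and settings are placeholders):

import AVFoundation

// Minimal sketch: AVAudioRecorder has built-in metering, but it is not
// an AVAudioNode, so it cannot be wired into an AVAudioEngine graph.
func startRecorderMetering() throws -> AVAudioRecorder {
    let url = URL(fileURLWithPath: NSTemporaryDirectory() + "meter.m4a") // placeholder path
    let settings: [String: Any] = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 44100.0,
        AVNumberOfChannelsKey: 1
    ]
    let recorder = try AVAudioRecorder(url: url, settings: settings)
    recorder.isMeteringEnabled = true
    recorder.record()
    return recorder
}

// Poll from a timer:
// recorder.updateMeters()
// let db = recorder.averagePower(forChannel: 0) // dBFS, negative values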

Try installing a tap on the main mixer, then make it faster by setting the frame length, then read the samples and get the average, something like this:

Import the framework in your class:

#import <Accelerate/Accelerate.h>

Declare these properties:

@property float averagePowerForChannel0;
@property float averagePowerForChannel1;

Then add the following:

#define LEVEL_LOWPASS_TRIG 0.30f // smoothing factor, explained below

self.mainMixer = [self.engine mainMixerNode];
[self.mainMixer installTapOnBus:0 bufferSize:1024 format:[self.mainMixer outputFormatForBus:0] block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
    [buffer setFrameLength:1024];
    UInt32 inNumberFrames = buffer.frameLength;
    if(buffer.format.channelCount>0)
    {
        Float32* samples = (Float32*)buffer.floatChannelData[0];
        Float32 avgValue = 0;
        // mean magnitude of the buffer, converted to dBFS and low-pass filtered
        vDSP_meamgv((Float32*)samples, 1, &avgValue, inNumberFrames);
        self.averagePowerForChannel0 = (LEVEL_LOWPASS_TRIG * ((avgValue == 0) ? -100 : 20.0 * log10f(avgValue))) + ((1 - LEVEL_LOWPASS_TRIG) * self.averagePowerForChannel0);
        self.averagePowerForChannel1 = self.averagePowerForChannel0; // mono input: mirror channel 0
    }
    if(buffer.format.channelCount>1)
    {
        Float32* samples = (Float32*)buffer.floatChannelData[1];
        Float32 avgValue = 0;
        vDSP_meamgv((Float32*)samples, 1, &avgValue, inNumberFrames);
        self.averagePowerForChannel1 = (LEVEL_LOWPASS_TRIG * ((avgValue == 0) ? -100 : 20.0 * log10f(avgValue))) + ((1 - LEVEL_LOWPASS_TRIG) * self.averagePowerForChannel1);
    }
}];

Then read the value you want wherever you need it:

NSLog(@"===test===%.2f", self.averagePowerForChannel1);

To get the peak value, use vDSP_maxmgv instead of vDSP_meamgv.
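For illustration, here is a self-contained Swift sketch of the two vDSP calls (the sample array is made up):

import Accelerate

let samples: [Float] = [0.1, -0.4, 0.25, -0.9] // made-up data
var mean: Float = 0
var peak: Float = 0
vDSP_meamgv(samples, 1, &mean, vDSP_Length(samples.count)) // average of |x| -> 0.4125
vDSP_maxmgv(samples, 1, &peak, vDSP_Length(samples.count)) // maximum of |x| -> 0.9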


LEVEL_LOWPASS_TRIG is a simple low-pass filter coefficient with a value between 0.0 and 1.0. If you set it to 0.0, you filter out every new value and never see any data; if you set it to 1.0, you get too much noise. Basically, the higher the value, the more the data fluctuates. A value between 0.10 and 0.30 seems to work well for most applications.
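In other words, it is the alpha of an exponential moving average. A minimal Swift sketch of just the smoothing step (function and parameter names are my own):

// alpha plays the role of LEVEL_LOWPASS_TRIG:
// 0.0 freezes the meter entirely, 1.0 passes the raw (jumpy) readings through.
func smoothed(_ newLevelDB: Float, previous: Float, alpha: Float = 0.3) -> Float {
    return alpha * newLevelDB + (1 - alpha) * previous
}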

Equivalent Swift 3 code for 'Farhad Malekpour''s answer:

Import the frameworks:

import AVFoundation
import Accelerate

Declare these at class scope:

private var audioEngine: AVAudioEngine?
private var averagePowerForChannel0: Float = 0
private var averagePowerForChannel1: Float = 0
let LEVEL_LOWPASS_TRIG: Float32 = 0.30

Use the code below where you need it:

let inputNode = audioEngine!.inputNode // since I need the microphone's level I use `inputNode`; otherwise use `mainMixerNode`
let recordingFormat: AVAudioFormat = inputNode!.outputFormat(forBus: 0)
inputNode!.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { [weak self] (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
    guard let strongSelf = self else {
        return
    }
    strongSelf.audioMetering(buffer: buffer)
}

The metering calculation:
private func audioMetering(buffer: AVAudioPCMBuffer) {
    buffer.frameLength = 1024
    let inNumberFrames: UInt = UInt(buffer.frameLength)
    if buffer.format.channelCount > 0 {
        let samples = buffer.floatChannelData![0]
        var avgValue: Float32 = 0
        vDSP_meamgv(samples, 1, &avgValue, inNumberFrames)
        var v: Float = -100
        if avgValue != 0 {
            v = 20.0 * log10f(avgValue)
        }
        self.averagePowerForChannel0 = (self.LEVEL_LOWPASS_TRIG * v) + ((1 - self.LEVEL_LOWPASS_TRIG) * self.averagePowerForChannel0)
        self.averagePowerForChannel1 = self.averagePowerForChannel0
    }
    if buffer.format.channelCount > 1 {
        let samples = buffer.floatChannelData![1]
        var avgValue: Float32 = 0
        vDSP_meamgv(samples, 1, &avgValue, inNumberFrames)
        var v: Float = -100
        if avgValue != 0 {
            v = 20.0 * log10f(avgValue)
        }
        self.averagePowerForChannel1 = (self.LEVEL_LOWPASS_TRIG * v) + ((1 - self.LEVEL_LOWPASS_TRIG) * self.averagePowerForChannel1)
    }
}
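The resulting values land roughly in the -100...0 dB range. A hypothetical helper (the name and the -80 dB silence floor are my own choices) to map them onto 0...1 for a UI meter:

// Map a dB reading (about -100...0) onto 0...1 for a level bar.
private func meterLevel(fromDecibels dB: Float) -> Float {
    let floorDB: Float = -80 // treat anything quieter as silence
    guard dB.isFinite, dB > floorDB else { return 0 }
    return min((dB - floorDB) / -floorDB, 1)
}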

Swift 5+

I got help from this project.

  1. Download the above project &

  2. Copy-paste this code into your project (MicrophoneMonitor comes from that project; see the sketch after this list):

    import AVFoundation
    private var mic = MicrophoneMonitor(numberOfSamples: 1)
    private var timer: Timer!
    override func viewDidLoad() {
        super.viewDidLoad()
        timer = Timer.scheduledTimer(timeInterval: 0.1, target: self, selector: #selector(startMonitoring), userInfo: nil, repeats: true)
        timer.fire()
    }
    @objc func startMonitoring() {
      print("sound level:", normalizeSoundLevel(level: mic.soundSamples.first!))
    }
    private func normalizeSoundLevel(level: Float) -> CGFloat {
        let level = max(0.2, CGFloat(level) + 50) / 2 // between 0.1 and 25
        return CGFloat(level * (300 / 25)) // scaled to max at 300 (our height of our bar)
    }
    

  3. Open a beer and celebrate!

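MicrophoneMonitor is defined in the linked project. In case the link rots, here is a rough sketch of what such a class usually looks like; this is my guess at its shape, not the project's actual code. It records to a throwaway file with AVAudioRecorder metering enabled and publishes the averagePower readings:

import AVFoundation

// Hypothetical stand-in for the linked project's MicrophoneMonitor.
// Requesting microphone permission and configuring AVAudioSession are omitted.
final class MicrophoneMonitor {
    private let recorder: AVAudioRecorder
    private var timer: Timer?
    private(set) var soundSamples: [Float]
    private var currentSample = 0

    init(numberOfSamples: Int) {
        soundSamples = [Float](repeating: .zero, count: max(1, numberOfSamples))
        let url = URL(fileURLWithPath: "/dev/null") // we only meter, never keep audio
        let settings: [String: Any] = [
            AVFormatIDKey: Int(kAudioFormatAppleLossless),
            AVSampleRateKey: 44100.0,
            AVNumberOfChannelsKey: 1
        ]
        recorder = try! AVAudioRecorder(url: url, settings: settings)
        recorder.isMeteringEnabled = true
        recorder.record()
        timer = Timer.scheduledTimer(withTimeInterval: 0.05, repeats: true) { [weak self] _ in
            guard let self = self else { return }
            self.recorder.updateMeters()
            self.soundSamples[self.currentSample] = self.recorder.averagePower(forChannel: 0)
            self.currentSample = (self.currentSample + 1) % self.soundSamples.count
        }
    }

    deinit {
        timer?.invalidate()
        recorder.stop()
    }
}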
I found another solution that is a bit strange, but works very well and much better than a tap. A mixer doesn't expose its AudioUnit, but if you cast it to an AVAudioIONode you can get at the audioUnit and use iOS's metering facility. Here is how:

Enable or disable metering:

- (void)setMeteringEnabled:(BOOL)enabled;
{
    UInt32 on = (enabled) ? 1 : 0;
    AVAudioIONode *node = (AVAudioIONode*)self.engine.mainMixerNode; // cast hack to reach the underlying audioUnit
    OSStatus err = AudioUnitSetProperty(node.audioUnit, kAudioUnitProperty_MeteringMode, kAudioUnitScope_Output, 0, &on, sizeof(on));
    assert(noErr == err);
}

Update the meters:

- (void)updateMeters;
{
    AVAudioIONode *node = (AVAudioIONode*)self.engine.mainMixerNode;
    AudioUnitParameterValue level;
    OSStatus err = AudioUnitGetParameter(node.audioUnit, kMultiChannelMixerParam_PostAveragePower, kAudioUnitScope_Output, 0, &level);
    assert(noErr == err);
    self.averagePowerForChannel1 = self.averagePowerForChannel0 = level;
    if(self.numberOfChannels > 1)
    {
        err = AudioUnitGetParameter(node.audioUnit, kMultiChannelMixerParam_PostAveragePower+1, kAudioUnitScope_Output, 0, &level);
        assert(noErr == err);
        self.averagePowerForChannel1 = level; // the second channel has its own reading
    }
}
AudioRecorder.m:

#define LEVEL_LOWPASS_TRIG .3
#import "AudioRecorder.h"


@implementation AudioRecord

-(id)init {
     self = [super init];
     if (self) {
         self.recordEngine = [[AVAudioEngine alloc] init];
     }
     return self;
}

 /**  ----------------------  Snippet Stackoverflow.com not including Audio Level Meter    ---------------------     **/

-(BOOL)recordToFile:(NSString*)filePath {
     NSURL *fileURL = [NSURL fileURLWithPath:filePath];
     const Float64 sampleRate = 44100;
     AudioStreamBasicDescription aacDesc = { 0 };
     aacDesc.mSampleRate = sampleRate;
     aacDesc.mFormatID = kAudioFormatMPEG4AAC; 
     aacDesc.mFramesPerPacket = 1024;
     aacDesc.mChannelsPerFrame = 2;
     ExtAudioFileRef eaf;
     OSStatus err = ExtAudioFileCreateWithURL((__bridge CFURLRef)fileURL, kAudioFileAAC_ADTSType, &aacDesc, NULL, kAudioFileFlags_EraseFile, &eaf);
     assert(noErr == err);
     AVAudioInputNode *input = self.recordEngine.inputNode;
     const AVAudioNodeBus bus = 0;
     AVAudioFormat *micFormat = [input inputFormatForBus:bus];
     err = ExtAudioFileSetProperty(eaf, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), micFormat.streamDescription);
     assert(noErr == err);
     [input installTapOnBus:bus bufferSize:1024 format:micFormat block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
       const AudioBufferList *abl = buffer.audioBufferList;
       OSStatus err = ExtAudioFileWrite(eaf, buffer.frameLength, abl);
       assert(noErr == err);

       /**  ----------------------  Snippet from stackoverflow.com in different context  ---------------------     **/

       UInt32 inNumberFrames = buffer.frameLength;
       if(buffer.format.channelCount>0) {
         Float32* samples = (Float32*)buffer.floatChannelData[0]; 
         Float32 avgValue = 0;
         // peak magnitude; per the earlier answer this is vDSP_maxmgv with stride 1
         // (plain vDSP_maxv can return a negative value and break log10f)
         vDSP_maxmgv((Float32*)samples, 1, &avgValue, inNumberFrames);
         self.averagePowerForChannel0 = (LEVEL_LOWPASS_TRIG * ((avgValue == 0) ? -100 : 20.0 * log10f(avgValue))) + ((1 - LEVEL_LOWPASS_TRIG) * self.averagePowerForChannel0);
         self.averagePowerForChannel1 = self.averagePowerForChannel0;
       }
       dispatch_async(dispatch_get_main_queue(), ^{
         self.levelIndicator.floatValue=self.averagePowerForChannel0;
       });     

       /**  ---------------------- End of Snippet from stackoverflow.com in different context  ---------------------     **/
     }];
     NSError *error;
     BOOL startSuccess = [self.recordEngine startAndReturnError:&error];
     return startSuccess;
}
}

@end
AudioRecorder.h:

#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/ExtendedAudioFile.h>
#import <CoreAudio/CoreAudio.h>
#import <Accelerate/Accelerate.h>
#import <AppKit/AppKit.h>
@interface AudioRecord : NSObject
@property (nonatomic) AVAudioEngine *recordEngine;

@property float averagePowerForChannel0;
@property float averagePowerForChannel1;
@property float numberOfChannels;
@property NSLevelIndicator * levelIndicator;

-(BOOL)recordToFile:(NSString*)filePath;
@end
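Hypothetical usage from Swift, assuming AudioRecorder.h is exposed through a bridging header (the file path is a placeholder):

// Start recording and poll the smoothed level for the UI.
let recorder = AudioRecord()
let ok = recorder.record(toFile: NSTemporaryDirectory() + "take1.aac")
if ok {
    Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
        print("level:", recorder.averagePowerForChannel0)
    }
}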
