Breaking down a Data byte buffer of UInt32 values



I am capturing audio with AVCaptureSession. In the callback that processes the captured data, I put the stream into a Data structure (a byte buffer). It looks like the data is UInt8 (which makes sense for a byte buffer), but I believe the stream data is actually UInt32.

I'm not sure which of the following I should do, and I haven't been able to get any of them to work. Should I:

  1. Convert the Data to UInt32 instead of UInt8?
  2. Read 4 bytes at a time from the Data to make each UInt32?
  3. Change my capture session to produce UInt8?
  4. Abandon Data and make my own structure?
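For reference, option 2 in isolation would look something like the sketch below. It assumes the byte count is a whole number of UInt32 samples and that the stream is in host (little-endian) byte order; `uint32Samples` is an illustrative name, and `loadUnaligned` requires Swift 5.7+.

```swift
import Foundation

// Sketch of option 2: group every 4 bytes of a Data into one UInt32.
// Assumes host byte order and a length that is a multiple of 4.
func uint32Samples(from data: Data) -> [UInt32] {
    precondition(data.count % MemoryLayout<UInt32>.size == 0,
                 "expected whole UInt32 samples")
    return data.withUnsafeBytes { raw in
        stride(from: 0, to: raw.count, by: MemoryLayout<UInt32>.size).map {
            // loadUnaligned avoids alignment traps on arbitrary offsets.
            raw.loadUnaligned(fromByteOffset: $0, as: UInt32.self)
        }
    }
}
```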

My callback function is:

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    var audioBufferList = AudioBufferList()
    var data = Data()
    var blockBuffer: CMBlockBuffer?
    // Put the sample buffer into a list of audio buffers (audioBufferList)
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer,
        bufferListSizeNeededOut: nil,
        bufferListOut: &audioBufferList,
        bufferListSize: MemoryLayout<AudioBufferList>.size,
        blockBufferAllocator: nil,
        blockBufferMemoryAllocator: nil,
        flags: 0,
        blockBufferOut: &blockBuffer)
    // Extract the BufferList into an array of buffers
    let buffers = UnsafeBufferPointer<AudioBuffer>(start: &audioBufferList.mBuffers, count: Int(audioBufferList.mNumberBuffers))
    // For each buffer, extract the frame. There should only be one buffer as we are recording in mono!
    for audioBuffer in buffers {
        assert(audioBuffer.mNumberChannels == 1)  // it should always be 1 for a mono channel
        let frame = audioBuffer.mData?.assumingMemoryBound(to: UInt8.self)
        data.append(frame!, count: Int(audioBuffer.mDataByteSize) / 8)
    }
    // Limit how much of the sample we pass through.
    viewDelegate?.gotSoundData(data.prefix(MAX_POINTS))
}

gotSoundData in the view just passes everything on to several subviews for processing:

func addSamples(samples: Data) {
    //if (isHidden) { return }
    samples.forEach { sample in
        [...process each byte...]
    }
}

I can see that Data.append has this definition:

mutating func append(_ bytes: UnsafePointer<UInt8>, count: Int)

> Meggar helped me focus on option 4 - using my own structure, [Int16]. If anyone is interested in option 1, check out this link I found later, which extends Data for more data types: round trip Swift number types to/from Data
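A minimal sketch of what such an option-1 extension might look like (this is my reconstruction, not the code from the linked answer; it assumes host byte order and Swift 5.7+ for `loadUnaligned`):

```swift
import Foundation

// Illustrative round trip between Data and any fixed-width integer type.
extension FixedWidthInteger {
    // Encode the value's raw bytes (host byte order) into Data.
    var data: Data {
        withUnsafeBytes(of: self) { Data($0) }
    }
    // Decode a value from exactly MemoryLayout<Self>.size bytes, or fail.
    init?(data: Data) {
        guard data.count == MemoryLayout<Self>.size else { return nil }
        self = data.withUnsafeBytes { $0.loadUnaligned(as: Self.self) }
    }
}
```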

The callback function:

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    var audioBufferList = AudioBufferList()
    var blockBuffer: CMBlockBuffer?
    // Put the sample buffer into a list of audio buffers (audioBufferList)
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer,
        bufferListSizeNeededOut: nil,
        bufferListOut: &audioBufferList,
        bufferListSize: MemoryLayout<AudioBufferList>.size,
        blockBufferAllocator: nil,
        blockBufferMemoryAllocator: nil,
        flags: 0,
        blockBufferOut: &blockBuffer)
    // Extract the BufferList into an array of buffers
    let audioBuffers = UnsafeBufferPointer<AudioBuffer>(start: &audioBufferList.mBuffers, count: Int(audioBufferList.mNumberBuffers))
    // For each buffer, extract the samples
    for audioBuffer in audioBuffers {
        let samplesCount = Int(audioBuffer.mDataByteSize) / MemoryLayout<Int16>.size
        // Use this buffer's own pointer (audioBuffer.mData), not
        // audioBufferList.mBuffers, so the code stays correct if there
        // is ever more than one buffer.
        let samplesPointer = audioBuffer.mData!.bindMemory(to: Int16.self, capacity: samplesCount)
        let samples = UnsafeBufferPointer<Int16>(start: samplesPointer, count: samplesCount)
        // Convert to a "safe" array for ease of use in the delegate.
        let samplesArray = Array(samples)
        viewDelegate?.gotSample(samplesArray)
    }
}

The consuming function stays almost the same:

func addSamples(samples: [Int16]) {
    samples.forEach { sample in
        [...process each Int16...]
    }
}
