I am manually decoding an H.264 RTSP stream with ffmpeg and trying to save the uncompressed frames using AVAssetWriter and AVAssetWriterInput. I get the following error when calling AVAssetWriterInput appendSampleBuffer -
Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSUnderlyingError=0x170059530 {Error Domain=NSOSStatusErrorDomain Code=-12780 "(null)"}, NSLocalizedFailureReason=An unknown error occurred (-12780), NSLocalizedDescription=The operation could not be completed}
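For context, the writer side is the standard AVAssetWriter flow; a simplified sketch (the output URL, file type and compression settings here are illustrative rather than my exact configuration):

import AVFoundation
import CoreMedia

// Simplified sketch of the writer pipeline.
func startWriter(to outputURL: URL) throws -> (AVAssetWriter, AVAssetWriterInput) {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,   // re-encode the decoded BGRA frames for the file
        AVVideoWidthKey: 720,
        AVVideoHeightKey: 1280
    ]
    let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    videoInput.expectsMediaDataInRealTime = true
    writer.add(videoInput)
    guard writer.startWriting() else {
        throw writer.error ?? CocoaError(.fileWriteUnknown)
    }
    writer.startSession(atSourceTime: .zero)
    return (writer, videoInput)
}

// Called once per decoded frame; sampleBuffer is the BGRA CMSampleBuffer dumped below.
func write(_ sampleBuffer: CMSampleBuffer, to videoInput: AVAssetWriterInput, of writer: AVAssetWriter) {
    guard videoInput.isReadyForMoreMediaData else { return }
    if !videoInput.append(sampleBuffer) {
        // This append is where the -11800 / -12780 error shows up.
        print("append failed: \(String(describing: writer.error))")
    }
}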
The CMSampleBuffer being appended contains BGRA frames and dumps as follows -
CMSampleBuffer 0x159d12900 retainCount: 1 allocator: 0x1b3aa3bb8
    invalid = NO
    dataReady = YES
    makeDataReadyCallback = 0x0
    makeDataReadyRefcon = 0x0
    formatDescription = <CMVideoFormatDescription 0x17405bd50 [0x1b3aa3bb8]> {
        mediaType:'vide'
        mediaSubType:'BGRA'
        mediaSpecific: {
            codecType: 'BGRA'
            dimensions: 720 x 1280
        }
        extensions: {<CFBasicHash 0x1742652c0 [0x1b3aa3bb8]>{type = immutable dict, count = 4,
            entries =>
            0 : <CFString 0x1addb17c8 [0x1b3aa3bb8]>{contents = "CVImageBufferYCbCrMatrix"} = <CFString 0x1addb1808 [0x1b3aa3bb8]>{contents = "ITU_R_601_4"}
            1 : <CFString 0x1addb1928 [0x1b3aa3bb8]>{contents = "CVImageBufferTransferFunction"} = <CFString 0x1addb17e8 [0x1b3aa3bb8]>{contents = "ITU_R_709_2"}
            2 : <CFString 0x1adde3800 [0x1b3aa3bb8]>{contents = "CVBytesPerRow"} = <CFNumber 0xb00000000000b402 [0x1b3aa3bb8]>{value = +2880, type = kCFNumberSInt32Type}
            3 : <CFString 0x1adde3880 [0x1b3aa3bb8]>{contents = "Version"} = <CFNumber 0xb000000000000022 [0x1b3aa3bb8]>{value = +2, type = kCFNumberSInt32Type}
            }
        }
    }
    sbufToTrackReadiness = 0x0
    numSamples = 1
    sampleTimingArray[1] = {
        {PTS = {3000/90000 = 0.033}, DTS = {INVALID}, duration = {INVALID}},
    }
    imageBuffer = 0x17413ebe0
I have also looked at the following question and its answer, but it does not seem to explain the problem I am seeing (the pixel format I am using is a supported one): Why won't AVFoundation accept my planar pixel buffers on an iOS device?
Any help would be greatly appreciated!
FYI - saving a BGRA CMSampleBuffer that I get from the iPhone camera works fine, and I can paste a sample of that CMSampleBuffer too if needed.
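For completeness, the sample buffer above is built from each decoded frame roughly like this (a simplified sketch - the helper name is arbitrary and the hard-coded dimensions/stride just mirror the dump above):

import CoreMedia
import CoreVideo

// Wrap a decoded 720x1280 BGRA frame (2880 bytes per row) in a CMSampleBuffer.
// bgraData points at the frame bytes coming out of the ffmpeg decoder.
func makeSampleBuffer(bgraData: UnsafeMutableRawPointer, pts: CMTime) -> CMSampleBuffer? {
    var pixelBuffer: CVPixelBuffer?
    // Wraps the existing decoder bytes in place (no copy).
    let cvStatus = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                                720, 1280,
                                                kCVPixelFormatType_32BGRA,
                                                bgraData,
                                                2880,
                                                nil, nil, nil,
                                                &pixelBuffer)
    guard cvStatus == kCVReturnSuccess, let imageBuffer = pixelBuffer else { return nil }

    var formatDescription: CMVideoFormatDescription?
    let fdStatus = CMVideoFormatDescriptionCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                                                imageBuffer: imageBuffer,
                                                                formatDescriptionOut: &formatDescription)
    guard fdStatus == 0, let format = formatDescription else { return nil }   // 0 == noErr

    var timing = CMSampleTimingInfo(duration: .invalid,
                                    presentationTimeStamp: pts,
                                    decodeTimeStamp: .invalid)
    var sampleBuffer: CMSampleBuffer?
    let sbStatus = CMSampleBufferCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                                      imageBuffer: imageBuffer,
                                                      dataReady: true,
                                                      makeDataReadyCallback: nil,
                                                      refcon: nil,
                                                      formatDescription: format,
                                                      sampleTiming: &timing,
                                                      sampleBufferOut: &sampleBuffer)
    guard sbStatus == 0 else { return nil }   // 0 == noErr
    return sampleBuffer
}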
Answering my own question now that I have found the problem - the CMSampleBuffer was not IOSurface backed. I was using CVPixelBufferCreateWithBytes, which creates a CVPixelBuffer without IOSurface backing; as soon as I switched to CVPixelBufferCreate and passed the kCVPixelBufferIOSurfacePropertiesKey key, it worked.
https://developer.apple.com/library/content/qa/qa1781/_index.html has all the information on creating IOSurface-backed CVPixelBuffers.
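For reference, a minimal sketch of the working path (the helper name and parameters are illustrative): create the pixel buffer with CVPixelBufferCreate plus the IOSurface key, then copy the decoded bytes into it.

import CoreVideo
import Foundation

// Create an IOSurface-backed CVPixelBuffer and copy a decoded BGRA frame into it.
func makeIOSurfaceBackedPixelBuffer(from bgraData: UnsafeRawPointer,
                                    width: Int,
                                    height: Int,
                                    srcBytesPerRow: Int) -> CVPixelBuffer? {
    // An empty dictionary for kCVPixelBufferIOSurfacePropertiesKey asks CoreVideo
    // to back the buffer with an IOSurface using default properties (see QA1781).
    let attrs: [String: Any] = [
        kCVPixelBufferIOSurfacePropertiesKey as String: [:] as [String: Any]
    ]
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     width, height,
                                     kCVPixelFormatType_32BGRA,
                                     attrs as CFDictionary,
                                     &pixelBuffer)
    guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }

    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

    guard let dst = CVPixelBufferGetBaseAddress(buffer) else { return nil }
    let dstBytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
    // Copy row by row: the IOSurface-backed buffer may use a different stride
    // than the ffmpeg frame, so a single memcpy of the whole plane is not safe.
    for row in 0..<height {
        memcpy(dst + row * dstBytesPerRow,
               bgraData + row * srcBytesPerRow,
               min(srcBytesPerRow, dstBytesPerRow))
    }
    return buffer
}

Building the CMSampleBuffer from the buffer returned here (the same CMVideoFormatDescriptionCreateForImageBuffer / CMSampleBufferCreateForImageBuffer calls as before) and appending it to the AVAssetWriterInput now works.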