Memory warning trying to merge multiple video files in Swift



I'm trying to merge 2 videos using Swift. However, when I try to run this code I get memory warnings and sometimes a crash.

My hunch is that for some reason I'm leaving the dispatch_group too early and finishing the write before it's done.

However, I've also noticed that sometimes I don't even get that far.

I've also noticed that my samples count is sometimes way off, which seems strange since the videos are no longer than 30 seconds.

I'm stuck on where to even start debugging this, TBH. Any pointers appreciated.

    dispatch_group_enter(self.videoProcessingGroup)
    asset.requestContentEditingInputWithOptions(options, completionHandler: { (contentEditingInput: PHContentEditingInput?, info: [NSObject : AnyObject]) -> Void in
        let avAsset = contentEditingInput?.audiovisualAsset

        let reader = try! AVAssetReader.init(asset: avAsset!)
        let videoTrack = avAsset?.tracksWithMediaType(AVMediaTypeVideo).first
        let readerOutputSettings: [String: Int] = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
        let readerOutput = AVAssetReaderTrackOutput(track: videoTrack!, outputSettings: readerOutputSettings)
        reader.addOutput(readerOutput)
        reader.startReading()

        // Create the samples
        var samples: [CMSampleBuffer] = []
        var sample: CMSampleBufferRef?
        sample = readerOutput.copyNextSampleBuffer()
        while (sample != nil) {
            autoreleasepool {
                samples.append(sample!)
                sample = readerOutput.copyNextSampleBuffer()
            }
        }

        for i in 0...samples.count - 1 {
            // Get the presentation time for the frame
            var append_ok: Bool = false
            autoreleasepool {
                if let pixelBufferPool = adaptor.pixelBufferPool {
                    let pixelBufferPointer = UnsafeMutablePointer<CVPixelBuffer?>.alloc(1)
                    let status: CVReturn = CVPixelBufferPoolCreatePixelBuffer(
                        kCFAllocatorDefault,
                        pixelBufferPool,
                        pixelBufferPointer
                    )
                    let frameTime = CMTimeMake(Int64(frameCount), 30)
                    if var buffer = pixelBufferPointer.memory where status == 0 {
                        buffer = CMSampleBufferGetImageBuffer(samples[i])!
                        append_ok = adaptor.appendPixelBuffer(buffer, withPresentationTime: frameTime)
                        pixelBufferPointer.destroy()
                    } else {
                        NSLog("Error: Failed to allocate pixel buffer from pool")
                    }
                    pixelBufferPointer.dealloc(1)
                    dispatch_group_leave(self.videoProcessingGroup)
                }
            }
        }
    })

    //Finish the session:
    dispatch_group_notify(videoProcessingGroup, dispatch_get_main_queue(), {
        videoWriterInput.markAsFinished()
        videoWriter.finishWritingWithCompletionHandler({
            print("Write Ended")
            // Return writer
            print("Created asset writer for \(size.width)x\(size.height) video")
        })
    })

In general, you can't fit all the frames of a video asset into memory on an iOS device, and often not even on a desktop:

var samples:[CMSampleBuffer] = []

Not even if the video is only 30 seconds long. At 30 frames per second, for example, a 720p, 30-second video decoded to BGRA needs 30 * 30 * 1280 * 720 * 4 bytes ≈ 3.3GB of memory. Each frame alone is about 3.5MB! It's even worse at 1080p or higher frame rates.
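The arithmetic above can be checked directly:

```swift
// Rough memory math for buffering every decoded frame of a
// 30 s, 30 fps, 720p video in BGRA (4 bytes per pixel).
let bytesPerFrame = 1280 * 720 * 4          // 3,686,400 bytes ≈ 3.5 MB
let frameCount = 30 * 30                    // 30 fps * 30 s = 900 frames
let totalBytes = bytesPerFrame * frameCount // 3,317,760,000 bytes ≈ 3.3 GB
```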

You need to merge the files incrementally, frame by frame, keeping as few frames in memory as possible at any given time.
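If you do want to do it frame by frame, the usual pattern is to let the writer input pull samples on demand instead of collecting them into an array first. A minimal sketch (Swift 2-era API, assuming the reader has already started and the writer session has begun; names are illustrative):

```swift
import AVFoundation

// Sketch only: copies samples from a reader output into a writer input one
// at a time, so at most one decoded frame is alive at once.
func streamSamples(readerOutput: AVAssetReaderTrackOutput,
                   writerInput: AVAssetWriterInput,
                   completion: () -> Void) {
    let queue = dispatch_queue_create("sample.copy", DISPATCH_QUEUE_SERIAL)
    writerInput.requestMediaDataWhenReadyOnQueue(queue) {
        // Pull samples only while the writer can accept them; this is the
        // back-pressure the buffer-everything loop was missing.
        while writerInput.readyForMoreMediaData {
            if let sample = readerOutput.copyNextSampleBuffer() {
                writerInput.appendSampleBuffer(sample)
            } else {
                writerInput.markAsFinished()
                completion()
                break
            }
        }
    }
}
```

Here the `dispatch_group_leave` would belong in `completion`, once per asset, not once per frame.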

For a merge operation, however, you don't need to handle the frames yourself at all. You can create an AVMutableComposition, append the individual AVAssets, and then export the merged file with an AVAssetExportSession.
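A minimal sketch of that approach (Swift 2-era API to match the question; `outputURL` is assumed to point at a writable, not-yet-existing file, and audio tracks are ignored for brevity):

```swift
import AVFoundation

// Sketch only: appends the video tracks of `assets` end to end in a
// composition and exports the result. No frames are buffered in your code;
// the export session streams them itself.
func mergeAssets(assets: [AVAsset], outputURL: NSURL) {
    let composition = AVMutableComposition()
    let videoTrack = composition.addMutableTrackWithMediaType(
        AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)

    var cursor = kCMTimeZero
    for asset in assets {
        guard let sourceTrack = asset.tracksWithMediaType(AVMediaTypeVideo).first else { continue }
        do {
            // Insert the whole source track at the current end of the composition.
            try videoTrack.insertTimeRange(
                CMTimeRangeMake(kCMTimeZero, asset.duration),
                ofTrack: sourceTrack,
                atTime: cursor)
        } catch {
            print("Failed to insert track: \(error)")
            return
        }
        cursor = CMTimeAdd(cursor, asset.duration)
    }

    guard let export = AVAssetExportSession(
        asset: composition, presetName: AVAssetExportPresetHighestQuality) else { return }
    export.outputURL = outputURL
    export.outputFileType = AVFileTypeMPEG4
    export.exportAsynchronouslyWithCompletionHandler {
        print("Export finished with status: \(export.status.rawValue)")
    }
}
```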
