Implementing the sample buffer delegate method on iPhone

Here is my code:

- (void)captureOutput:(AVCaptureOutput *)captureOutput 
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
   fromConnection:(AVCaptureConnection *)connection
{ 
    // Create a UIImage from the sample buffer data
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    dispatch_async(dispatch_get_main_queue(), ^{
      [[self imgView] setImage:image];
    });
}

And this is the method that creates a UIImage from the sample buffer data:

// Frees the pixel copy once Quartz no longer needs it.
static void ReleasePixelData(void *info, const void *data, size_t size)
{
    free((void *)data);
}

- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer.
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    // Get the number of bytes per row for the pixel buffer.
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height.
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    // Create a device-dependent RGB color space.
    static CGColorSpaceRef colorSpace = NULL;
    if (colorSpace == NULL) {
        colorSpace = CGColorSpaceCreateDeviceRGB();
        if (colorSpace == NULL) {
            // Handle the error appropriately.
            CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
            return nil;
        }
    }
    // Copy the pixel data so it outlives this sample buffer; the data
    // provider's release callback frees it once the image is done with it.
    size_t bufferSize = bytesPerRow * height;
    uint8_t *baseAddress = malloc(bufferSize);
    memcpy(baseAddress, CVPixelBufferGetBaseAddress(imageBuffer), bufferSize);
    // Create a Quartz direct-access data provider that uses the copied data.
    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, baseAddress, bufferSize, ReleasePixelData);
    // Create a bitmap image from data supplied by the data provider.
    CGImageRef cgImage = CGImageCreate(width, height, 8, 32, bytesPerRow,
                                       colorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                                       dataProvider, NULL, true, kCGRenderingIntentDefault);
    CGDataProviderRelease(dataProvider);
    // Create and return an image object to represent the Quartz image.
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return image;
}

My image view still doesn't show any image; can anyone help me find the error?

Thanks in advance.

Here is the code that works for me:

- (void)captureOutput:(AVCaptureOutput *)captureOutput 
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
       fromConnection:(AVCaptureConnection *)connection
{ 
    // Create a UIImage from the sample buffer data
    UIImage *img = [self imageFromSampleBuffer:sampleBuffer];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imgView.image = img;
    });
}

// Create a UIImage from sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer 
{
    // Get the CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0); 
    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer); 

    // Copy the pixel data so it can be handed to Quartz independently of the buffer
    uint8_t *baseAddress = (uint8_t *)malloc(bytesPerRow * height);
    memcpy(baseAddress, CVPixelBufferGetBaseAddress(imageBuffer), bytesPerRow * height);
    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 
    // Create a bitmap graphics context with the copied pixel data; the context
    // draws into a bitmap that is `width` pixels wide and `height` pixels high
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, 
                                                 bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst); 
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context); 
    CGColorSpaceRelease(colorSpace);
    // Create an image object from the Quartz image, rotated to match the camera
    UIImage *image = [UIImage imageWithCGImage:quartzImage scale:1.0 orientation:UIImageOrientationRight];
    free(baseAddress);
    // Release the Quartz image
    CGImageRelease(quartzImage);

    return image;
}
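
Note that both conversion methods above assume the pixel buffer arrives as 32-bit BGRA. By default, AVCaptureVideoDataOutput delivers biplanar YUV buffers, which is a common reason the converted image comes out blank. A minimal configuration sketch, assuming an existing AVCaptureSession named session (the queue label is illustrative, not from the original post):

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// Ask for 32-bit BGRA frames so the bitmap-context math above is valid
videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
// Drop late frames instead of stalling the capture pipeline
videoOutput.alwaysDiscardsLateVideoFrames = YES;
// Deliver sample buffers on a dedicated serial queue, never the main queue
dispatch_queue_t captureQueue = dispatch_queue_create("com.example.videoqueue", DISPATCH_QUEUE_SERIAL);
[videoOutput setSampleBufferDelegate:self queue:captureQueue];
if ([session canAddOutput:videoOutput]) {
    [session addOutput:videoOutput];
}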

Try this:

- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer 
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
                                                    colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    // Release the CGImage and unlock the pixel buffer before returning
    UIImage *image = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return image;
} 

Use a GLKView to speed up the rendering; then insert the following code in the delegate method:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CMFormatDescriptionRef formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer);
    // update the video dimensions information
    _currentVideoDimensions = CMVideoFormatDescriptionGetDimensions(formatDesc);
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // If you want to apply filters to the image before you display it...
    CIImage *tempImage = [CIImage imageWithCVPixelBuffer:(CVPixelBufferRef)imageBuffer options:nil];
    CIFilter *filter = [CIFilter filterWithName: @"CIExposureAdjust" keysAndValues:@"inputImage", tempImage, nil];
    [filter setValue:@(self.tag) forKey:@"inputEV"];
    CIFilter *filter2 = [CIFilter filterWithName: @"CILinearToSRGBToneCurve" keysAndValues:@"inputImage", [filter valueForKey: @"outputImage"], nil];
    CIFilter *filter3 = [CIFilter filterWithName: @"CIGammaAdjust" keysAndValues:@"inputImage", [filter2 valueForKey: @"outputImage"], nil];
    [filter3 setValue:@(self.tag) forKey:@"inputPower"];
    CIImage *sourceImage = [filter3 valueForKey: @"outputImage"];
    CGRect sourceExtent = sourceImage.extent;
    CGFloat sourceAspect = sourceExtent.size.width / sourceExtent.size.height;
    CGFloat previewAspect = _videoPreviewViewBounds.size.width  / _videoPreviewViewBounds.size.height;
    // we want to maintain the aspect ratio of the screen size, so we clip the video image
    CGRect drawRect = sourceExtent;
    if (sourceAspect > previewAspect)
    {
        // use full height of the video image, and center crop the width
        drawRect.origin.x += (drawRect.size.width - drawRect.size.height * previewAspect) / 2.0;
        drawRect.size.width = drawRect.size.height * previewAspect;
    }
    else
    {
        // use full width of the video image, and center crop the height
        drawRect.origin.y += (drawRect.size.height - drawRect.size.width / previewAspect) / 2.0;
        drawRect.size.height = drawRect.size.width / previewAspect;
    }
    [_videoPreviewView bindDrawable];
    if (_eaglContext != [EAGLContext currentContext])
        [EAGLContext setCurrentContext:_eaglContext];
    // clear eagl view to grey
    glClearColor(0.5, 0.5, 0.5, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    // set the blend mode to "source over" so that CI will use that
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    [_ciContext drawImage:sourceImage inRect:_videoPreviewViewBounds fromRect:drawRect];
    [_videoPreviewView display];
}

You can always convert the CIImage back to a UIImage easily; but why would you? By rendering the image into an EAGLContext, you get GPU-accelerated display. I can't think of a reason you would need a UIImage here, unless that's the only approach you know.

Note that this code can be reduced to just a few lines; but hardly anyone working with sample buffers gets by without a way to correctly orient and color the frames of a video feed, and if that's your case, this covers everything you need. (I think you'll find it difficult to get a UIImage generated from a sample buffer to orient itself correctly...)
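
The delegate code above relies on a few ivars that aren't shown. A minimal setup sketch, assuming the names _eaglContext, _videoPreviewView, _ciContext, and _videoPreviewViewBounds used in the delegate (run once, e.g. in viewDidLoad):

// Create the GL context and a GLKView that is redrawn manually from the capture queue
_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
_videoPreviewView = [[GLKView alloc] initWithFrame:self.view.bounds context:_eaglContext];
_videoPreviewView.enableSetNeedsDisplay = NO;
[self.view addSubview:_videoPreviewView];

// bindDrawable creates the framebuffer so the drawable size is known
[_videoPreviewView bindDrawable];
_videoPreviewViewBounds = CGRectMake(0, 0, _videoPreviewView.drawableWidth, _videoPreviewView.drawableHeight);

// A CIContext that renders directly into the GLKView's EAGL context
_ciContext = [CIContext contextWithEAGLContext:_eaglContext
                                       options:@{ kCIContextWorkingColorSpace : [NSNull null] }];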

Try using

UIImage *image = [[UIImage alloc] initWithCGImage:cgImage];

instead of

UIImage *image = [UIImage imageWithCGImage:cgImage];
