Real-time camera scanning on iOS

I am developing an iOS application in which I need to do some real-time object scanning; for that, roughly 3 or 4 frames per second is enough. Below is the code where I create the capture session:

    // Create an AVCaptureSession
    AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
    captureSession.sessionPreset = AVCaptureSessionPresetHigh;
    // Find a suitable AVCaptureDevice
    AVCaptureDevice *photoCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    // Create and add an AVCaptureDeviceInput
    NSError *error = nil;
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:photoCaptureDevice error:&error];
    if(videoInput){
        [captureSession addInput:videoInput];
    }
    // Create and add an AVCaptureVideoDataOutput
    AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    // we want BGRA, both CoreGraphics and OpenGL work well with 'BGRA'
    NSDictionary *rgbOutputSettings = [NSDictionary dictionaryWithObject:
                                       [NSNumber numberWithInt:kCMPixelFormat_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
    [videoOutput setVideoSettings:rgbOutputSettings];
    // Configure your output, and start the session
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [videoOutput setSampleBufferDelegate:self queue:queue];
    if(videoOutput){
        [captureSession addOutput:videoOutput];
    }
    [captureSession startRunning];

    // Setting up the preview layer for the camera
    AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    previewLayer.frame = cameraViewCanvas.bounds;
    // Add the preview layer as a sublayer of the main view
    [cameraViewCanvas.layer addSublayer:previewLayer];
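
Since only 3 or 4 frames per second are needed, it can also help to cap the capture frame rate instead of receiving and discarding every camera frame. A minimal sketch, assuming iOS 7 or later and that `photoCaptureDevice` and `videoOutput` are the objects created above (the 4 fps value is just the example rate from the question):

```objc
// Lock the device for configuration and limit capture to ~4 fps.
// activeVideoMinFrameDuration is the inverse of the maximum frame rate.
NSError *configError = nil;
if ([photoCaptureDevice lockForConfiguration:&configError]) {
    photoCaptureDevice.activeVideoMinFrameDuration = CMTimeMake(1, 4); // at most 4 fps
    photoCaptureDevice.activeVideoMaxFrameDuration = CMTimeMake(1, 4);
    [photoCaptureDevice unlockForConfiguration];
}
// Drop frames that arrive while the delegate is still busy,
// so processing never falls behind the camera.
videoOutput.alwaysDiscardsLateVideoFrames = YES;
```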

And the delegate method that is called on the queue:

-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{
    if(isCapturing){
        NSLog(@"output");
        CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
        CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(__bridge NSDictionary *)attachments];
        if (attachments) {
            CFRelease(attachments); // CMCopyDictionaryOfAttachments follows the Create rule
        }
        UIImage *newFrame = [[UIImage alloc] initWithCIImage:ciImage];
        [self showImage:newFrame];
    }
}
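
A separate pitfall here: a `UIImage` created with `initWithCIImage:` is not backed by a bitmap, and a `UIImageView` will often not draw it. Rendering the `CIImage` through a `CIContext` first avoids that. A sketch, assuming `showImage:` simply assigns to a `UIImageView`; the `ciContext` property is hypothetical and should be created once, not per frame:

```objc
// Create once, e.g. in viewDidLoad — CIContext creation is expensive:
// self.ciContext = [CIContext contextWithOptions:nil];

// Render the CIImage into a bitmap-backed CGImage, then wrap it in a UIImage.
CGImageRef cgImage = [self.ciContext createCGImage:ciImage fromRect:[ciImage extent]];
UIImage *newFrame = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage); // createCGImage follows the Create rule

// UIKit must only be touched on the main thread.
dispatch_async(dispatch_get_main_queue(), ^{
    [self showImage:newFrame];
});
```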

The problem is that I cannot see the image on the screen; there are no errors or warnings, but nothing is displayed. My question is: am I on the right track, and what needs to be fixed in my code to get the image to show on screen?

Late to the party, but the problem is likely that you are not setting the image on the main thread (`captureOutput:didOutputSampleBuffer:fromConnection:` is most likely being called on the separate dispatch queue you created):

dispatch_async(dispatch_get_main_queue(), ^{
  [self showImage:newFrame];
});

Or, equivalently:

[self performSelectorOnMainThread:@selector(showImage:) withObject:newFrame waitUntilDone:YES];
