Olympus Camera Kit: app exits when drawing face detection frames on the live view

I am developing a capture app for the Olympus Air A01 that updates face detection results on top of the live view image in real time. The face detection results come from the Camera Kit, but my app exits as soon as a face is detected. My program updates the live view image in didUpdateLiveView and passes the data to drawFaceFrame:cameraFrame:.

The Olympus Camera Kit version is 1.1.0.

#import <AVFoundation/AVFoundation.h> // for AVMakeRectWithAspectRatioInsideRect
#import <OLYCameraKit/OLYCamera.h>    // Olympus Camera Kit

@interface ViewController () <OLYCameraLiveViewDelegate>
// OLYCamera instance
@property (weak, nonatomic) OLYCamera *camera;
// For live view
@property (weak, nonatomic) IBOutlet UIImageView *liveView;
// 8 UIViews for face detection frames
@property (strong, nonatomic) IBOutletCollection(UIView) NSArray *faceViews;
@end

- (void)camera:(OLYCamera *)camera didUpdateLiveView:(NSData *)data metadata:(NSDictionary *)metadata
{
    // UPDATE LIVE VIEW IMAGE HERE
    CGRect frame = AVMakeRectWithAspectRatioInsideRect(self.liveView.image.size, self.liveView.frame);
    [self drawFaceFrame:camera.detectedHumanFaces cameraFrame:frame];
}
// Draw face detection frames
- (void)drawFaceFrame:(NSDictionary *)faces cameraFrame:(CGRect)cameraFrame
{
    const CGFloat ratioW = cameraFrame.size.width / self.liveView.image.size.width;
    const CGFloat ratioH = cameraFrame.size.height / self.liveView.image.size.height;
    unsigned int i = 0;
    for ( ; faces && i < faces.count ; ++i)
    {
        NSString *key = [NSString stringWithFormat:@"%u", i];
        NSValue *value = faces[key];
        CGRect rect = [value CGRectValue];
        // Convert from viewfinder coordinates to live view image coordinates
        CGRect rectInImage = OLYCameraConvertRectOnViewfinderIntoLiveImage(rect, self.liveView.image);
        CGRect rectInView = rectInImage;
        rectInView.origin.x *= ratioW;
        rectInView.origin.y *= ratioH;
        rectInView.origin.x += cameraFrame.origin.x;
        rectInView.origin.y += cameraFrame.origin.y;
        rectInView.size.width *= ratioW;
        rectInView.size.height *= ratioH;
        if (i < self.faceViews.count)
        {
            UIView *faceView = [self.faceViews objectAtIndex:i];
            CALayer *layer = [faceView layer];
            [layer setBorderColor:[UIColor redColor].CGColor];
            faceView.frame = rectInView;
            faceView.hidden = NO;
        }
    }
    // Hide unused frames
    for ( ; i < [self.faceViews count] ; ++i)
    {
        UIView *faceView = [self.faceViews objectAtIndex:i];
        faceView.hidden = YES;
    }
}

There appear to be two problems:

  1. Your app draws every face detection frame even when the camera has not detected any faces.
  2. Camera Kit may update detectedHumanFaces between the moment your app calculates the frame coordinates and the moment it draws the frames (a sketch of this race follows the list).
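
A minimal sketch of problem 2, assuming Camera Kit replaces the property from its own thread (the timing comments are hypothetical):

// Hypothetical timing sketch of problem 2. Camera Kit may replace
// detectedHumanFaces between these two reads:
NSUInteger count = self.camera.detectedHumanFaces.count;   // read 1: e.g. 5 faces
// <-- Camera Kit updates detectedHumanFaces on its own thread -->
NSValue *value = self.camera.detectedHumanFaces[@"4"];     // read 2: may now be nil
// Frames computed from read 1 no longer match the data from read 2.
// Snapshotting the property once into a local variable, as the code
// further below does, keeps one consistent set of faces per redraw.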

The first point is not critical, but it is not good either. I recommend the Key-Value Observing (KVO) technique, so your handler is called only when the camera's face detection results actually change.

KVO is also an effective way to solve the second problem. You can copy detectedHumanFaces inside the method invoked by the observer. In the code below, the detectedHumanFacesValueDidChange method is the one called by the observer; the observer registration itself is sketched after the code.

@interface ViewController () <OLYCameraLiveViewDelegate>
// OLYCamera instance
@property (weak, nonatomic) OLYCamera *camera;
// For live view
@property (weak, nonatomic) IBOutlet UIImageView *imageView;
// For face detection frames
@property (strong, nonatomic) IBOutletCollection(UIView) NSArray *detectedHumanFaceViews;
@end

/// Called by the observer when Camera Kit updates the detectedHumanFaces property.
- (void)detectedHumanFacesValueDidChange {
    // Snapshot the live view image and the detected face information
    // so the number of faces and their coordinates stay consistent
    // while the app updates the face detection frames.
    UIImage *image = self.imageView.image;
    NSDictionary *detectedHumanFaces = self.camera.detectedHumanFaces;
    if (image == nil || detectedHumanFaces == nil) {
        // Show face detection frames only when both a live view image
        // and detected face information are available.
        for (UIView *detectedHumanFaceView in self.detectedHumanFaceViews) {
            detectedHumanFaceView.hidden = YES;
        }
        return;
    }
    // The live view image is letterboxed into the image view; compute the
    // displayed rectangle and the image-to-screen scale factors once.
    CGRect imageRect = AVMakeRectWithAspectRatioInsideRect(image.size, self.imageView.bounds);
    CGFloat xRatio = imageRect.size.width / image.size.width;
    CGFloat yRatio = imageRect.size.height / image.size.height;
    for (NSInteger index = 0; index < self.detectedHumanFaceViews.count; index++) {
        // Look up the detected face corresponding to this face detection frame view.
        // The camera detects eight faces at the maximum.
        UIView *detectedHumanFaceView = self.detectedHumanFaceViews[index];
        NSString *faceKey = [NSString stringWithFormat:@"%ld", (long)index];
        NSValue *faceValue = detectedHumanFaces[faceKey];
        if (!faceValue) {
            detectedHumanFaceView.hidden = YES;
            continue;
        }
        // Convert the face rectangle from viewfinder coordinates to live view
        // image coordinates, then to screen coordinates.
        CGRect faceRect = OLYCameraConvertRectOnViewfinderIntoLiveImage([faceValue CGRectValue], image);
        CGFloat x = faceRect.origin.x * xRatio + imageRect.origin.x;
        CGFloat y = faceRect.origin.y * yRatio + imageRect.origin.y;
        CGFloat w = faceRect.size.width * xRatio;
        CGFloat h = faceRect.size.height * yRatio;
        CGRect viewRect = CGRectMake(x, y, w, h);
        // Draw the face detection frame.
        detectedHumanFaceView.frame = viewRect;
        detectedHumanFaceView.hidden = NO;
    }
}
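
The observer registration is not part of the snippet above. A minimal sketch using standard Cocoa KVO, assuming detectedHumanFaces is observable as recommended here (register once self.camera is set, e.g. in viewDidLoad, and remove the observer before the controller goes away):

// Register for change notifications on detectedHumanFaces.
[self.camera addObserver:self
              forKeyPath:@"detectedHumanFaces"
                 options:NSKeyValueObservingOptionNew
                 context:nil];

- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context
{
    if ([keyPath isEqualToString:@"detectedHumanFaces"]) {
        // Camera Kit may notify from a background thread;
        // all UI updates must happen on the main thread.
        dispatch_async(dispatch_get_main_queue(), ^{
            [self detectedHumanFacesValueDidChange];
        });
    } else {
        [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
    }
}

- (void)dealloc
{
    [self.camera removeObserver:self forKeyPath:@"detectedHumanFaces"];
}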
