I have two algorithms that detect AR markers (ARToolKit and Infi). I have both of them detecting markers in real time on the iPhone. At this stage I would like to compare their speed and accuracy, so I want to prepare a set of images and run recognition tests against them. I can do this with Infi, but I can't feed ARToolKit a UIImage.
ARToolKit recognizes markers from a CVImageBuffer:
    - (Marker *)detectMarkerInImageBuffer:(CVImageBufferRef)imageBuffer {
        /* Lock the buffer and get the address of its first pixel */
        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        unsigned char *baseAddress = (unsigned char *)CVPixelBufferGetBaseAddress(imageBuffer);
        tracker->calc(baseAddress);
        // ...
How can I use/convert a UIImage to get such a baseAddress, so I can pass it to tracker->calc?
With Infi, I first create an IplImage from the buffer (or convert the UIImage to an IplImage), and then in both cases:
    Buffer* buffer = new Buffer();
    buffer->setBuffer((unsigned char *)iplImg->imageData, iplImg->width, iplImg->height);
I found an answer here: "How do I export UIImage array as a movie?" This is what I was looking for:
    - (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
    {
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                                 nil];

        // frameSize and frameTransform are instance variables of the class
        CVPixelBufferRef pxbuffer = NULL;
        CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, frameSize.width,
                                              frameSize.height, kCVPixelFormatType_32ARGB,
                                              (CFDictionaryRef)options, &pxbuffer);
        NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

        CVPixelBufferLockBaseAddress(pxbuffer, 0);
        void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
        NSParameterAssert(pxdata != NULL);

        // Draw the CGImage into the pixel buffer's memory through a bitmap context
        CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(pxdata, frameSize.width,
                                                     frameSize.height, 8, 4 * frameSize.width,
                                                     rgbColorSpace, kCGImageAlphaNoneSkipFirst);
        NSParameterAssert(context);
        CGContextConcatCTM(context, frameTransform);
        CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                               CGImageGetHeight(image)), image);
        CGColorSpaceRelease(rgbColorSpace);
        CGContextRelease(context);

        CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
        return pxbuffer;
    }
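With that helper, the test harness can feed a UIImage to ARToolKit. A minimal sketch, assuming pixelBufferFromCGImage: and detectMarkerInImageBuffer: live on the same class and pre-ARC memory management as above (detectMarkerInUIImage: is a hypothetical name, not part of either library):

    // Hypothetical harness method: UIImage -> CVPixelBuffer -> ARToolKit detection
    - (Marker *)detectMarkerInUIImage:(UIImage *)image {
        // Convert the UIImage's underlying CGImage into a 32ARGB pixel buffer
        CVPixelBufferRef pixelBuffer = [self pixelBufferFromCGImage:image.CGImage];

        // Run the same detection path used for live camera frames
        Marker *marker = [self detectMarkerInImageBuffer:pixelBuffer];

        // pixelBufferFromCGImage: returns a retained buffer; release it here
        CVPixelBufferRelease(pixelBuffer);
        return marker;
    }

Note that the helper sizes the buffer from the frameSize ivar, so that should match the dimensions of the test images (or be replaced with CGImageGetWidth/CGImageGetHeight of the input).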