HDR imaging with openCV on iOS produces garbled output



I'm trying to use openCV 3 on iOS to produce an HDR image from multiple exposures, which will eventually be output as an EXR file. I noticed I was getting garbled output when I tried to create an HDR image. Thinking it was a mistake in creating the camera response, I started from scratch and adapted the HDR imaging tutorial material from openCV to iOS, but it produced similar results. The following C++ code returns a garbled image:

cv::Mat mergeToHDR (vector<Mat>& images, vector<float>& times)
{
    Mat response;
    //Ptr<CalibrateDebevec> calibrate = createCalibrateDebevec();
    //calibrate->process(images, response, times);
    Ptr<CalibrateRobertson> calibrate = createCalibrateRobertson();
    calibrate->process(images, response, times);
    // create HDR
    Mat hdr;
    Ptr<MergeDebevec> merge_debevec = createMergeDebevec();
    merge_debevec->process(images, hdr, times, response);
    // create LDR
    Mat ldr;
    Ptr<TonemapDurand> tonemap = createTonemapDurand(2.2f);
    tonemap->process(hdr, ldr);
    // create fusion
    Mat fusion;
    Ptr<MergeMertens> merge_mertens = createMergeMertens();
    merge_mertens->process(images, fusion);
    /*
    Uncomment what kind of tonemapped image or hdr to return
    Returning one of the images in the array produces ungarbled output
    so we know the problem is unlikely with the openCV to UIImage conversion
    */
    //give back one of the images from the image array
    //return images[0];
    //give back one of the hdr images
    return fusion * 255;
    //return ldr * 255;
    //return hdr
}

This is what the image looks like:

Bad image output

I have analysed the image and tried various colour space conversions, but the data appears to be junk.

The openCV framework is the latest compiled 3.0.0 build from the openCV.org website. The RC and alpha versions produce the same results, and the current version won't build (for either iOS or OSX). I figured my next step would be to try to compile the framework from scratch, or to get this example working on another platform to see whether the problem is platform-specific or lies with the openCV HDR functions themselves. But before that, I thought I'd put the question up on Stack Overflow to see if anyone has run into the same issue, or whether I'm missing something blindingly obvious.

I have uploaded a sample xcode project here:

https://github.com/artandmath/openCVHDRSwiftExample

Getting openCV working with swift was done with the help of user foundry on Github.

Thanks to foundry for pointing me in the right direction. The UIImage+OpenCV class extension expects 8 bits per colour channel, but the HDR functions return 32 bits per channel (which is actually what I want). Converting the image matrix back to 8 bits per channel for display purposes before converting it to a UIImage fixes the problem.

Here is the resulting image:

Expected result!

Here is the fixed function:

cv::Mat mergeToHDR (vector<Mat>& images, vector<float>& times)
{
    Mat response;
    //Ptr<CalibrateDebevec> calibrate = createCalibrateDebevec();
    //calibrate->process(images, response, times);
    Ptr<CalibrateRobertson> calibrate = createCalibrateRobertson();
    calibrate->process(images, response, times);
    // create HDR
    Mat hdr;
    Ptr<MergeDebevec> merge_debevec = createMergeDebevec();
    merge_debevec->process(images, hdr, times, response);
    // create LDR
    Mat ldr;
    Ptr<TonemapDurand> tonemap = createTonemapDurand(2.2f);
    tonemap->process(hdr, ldr);
    // create fusion
    Mat fusion;
    Ptr<MergeMertens> merge_mertens = createMergeMertens();
    merge_mertens->process(images, fusion);
    /*
     Uncomment what kind of tonemapped image or hdr to return
     Convert back to 8-bits per channel because that is what
     the UIImage+OpenCV class extension is expecting
    */

    // tone mapped
    /*
    Mat ldr8bit;
    ldr = ldr * 255;
    ldr.convertTo(ldr8bit, CV_8U);
    return ldr8bit;
    */
    // fusion
    Mat fusion8bit;
    fusion = fusion * 255;
    fusion.convertTo(fusion8bit, CV_8U);
    return fusion8bit;
    // hdr
    /*
    Mat hdr8bit;
    hdr = hdr * 255;
    hdr.convertTo(hdr8bit, CV_8U);
    return hdr8bit;
    */
}

Alternatively, here is a fix for the initWithCVMat method in the OpenCV+UIImage class extension, which is based on one of the iOS tutorials in the iOS section on OpenCV.org:

http://docs.opencv.org/2.4/doc/tutorials/ios/image_manipulation/image_manipulation.html#opencviosimagemanipulation

When creating the new CGImageRef with floating-point data, it needs to be explicitly told that it is receiving floating-point data, and the byte order of the image data from openCV needs to be reversed. And now iOS/Quartz has the float data! It's a bit of a hacky fix, because the method still only handles 8 or 32 bits per channel or alpha, and doesn't account for every kind of image that could be passed from Mat to UIImage.

- (id)initWithCVMat:(const cv::Mat&)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;
    size_t elemSize = cvMat.elemSize();
    size_t elemSize1 = cvMat.elemSize1();
    size_t channelCount = elemSize/elemSize1;
    size_t bitsPerChannel = 8 * elemSize1;
    size_t bitsPerPixel = bitsPerChannel * channelCount;
    if (channelCount == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }
    // Tell CGImageRef to use different bitmap info if handed 32-bit data
    uint32_t bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
    if (bitsPerChannel == 32 ){
        bitmapInfo = kCGImageAlphaNoneSkipLast | kCGBitmapFloatComponents | kCGBitmapByteOrder32Little;
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(cvMat.cols,                                 //width
                                        cvMat.rows,                                 //height
                                        bitsPerChannel,                             //bits per component
                                        bitsPerPixel,                               //bits per pixel
                                        cvMat.step[0],                              //bytesPerRow
                                        colorSpace,                                 //colorspace
                                        bitmapInfo,                                 // bitmap info
                                        provider,                                   //CGDataProviderRef
                                        NULL,                                       //decode
                                        false,                                      //should interpolate
                                        kCGRenderingIntentDefault                   //intent
                                        );                     
    // Getting UIImage from CGImage
    self = [self initWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return self;
}

Latest update