Perspective transform in MATLAB not computing the correct scene lines



I am trying to write the equivalent of the homography example given in OpenCV. The code is long but fairly straightforward: it first computes keypoints and descriptors for the object and for the scene (captured from a webcam), then matches them with a BruteForce matcher. It then selects the best matches and uses them to compute the homography between object and scene, followed by a perspective transform. My problem is that the perspective transform does not give me a good result: the coordinates it produces seem stuck around (0, 0). I have similar code in pure OpenCV running in Eclipse, and there the coordinates change as I move the camera around, which does not happen here. I also noticed that the computed homography values differ slightly between the two versions. As far as I can tell the logic of the code is fine, yet the rectangular region is not drawn correctly in the scene: I can see various lines being drawn, but they do not fit the object in the image. Maybe I need a different pair of eyes. Thanks.

function hello
    disp('Feature matching demo. Press any key when done.');
    % Set up camera
    camera = cv.VideoCapture;
    pause(3); % Necessary in some environments. See help cv.VideoCapture
    % Set up display window
    window = figure('KeyPressFcn',@(obj,evt)setappdata(obj,'flag',true));
    setappdata(window,'flag',false);
    object = imread('D:/match.jpg');
    %Conversion from color to gray
    object = cv.cvtColor(object,'RGB2GRAY');
    %Declaring detector and extractor
    detector = cv.FeatureDetector('SURF');
    extractor = cv.DescriptorExtractor('SURF');
    %Calculating object keypoints
    objKeypoints = detector.detect(object);
    %Calculating object descriptors
    objDescriptors = extractor.compute(object,objKeypoints);
    % Start main loop
    while true
        % Grab and preprocess an image
        im = camera.read;
        %im = cv.resize(im,1);
        scene = cv.cvtColor(im,'RGB2GRAY');
        sceneKeypoints = detector.detect(scene);
        %Checking for empty keypoints
        if isempty(sceneKeypoints) 
            continue
        end;
        sceneDescriptors = extractor.compute(scene,sceneKeypoints);
        matcher = cv.DescriptorMatcher('BruteForce');
        matches = matcher.match(objDescriptors,sceneDescriptors);
        objDescriptRow = size(objDescriptors,1);
        dist_arr = zeros(1,objDescriptRow);

        for i=1:objDescriptRow
            dist_arr(i) = matches(i).distance;
        end;

        min_dist = min(dist_arr);
        N = 10000;    
        good_matches = repmat(struct('distance',0,'imgIdx',0,'queryIdx',0,'trainIdx',0), N, 1 );
        goodmatchesSize = 0;
        for i=1:objDescriptRow
            if matches(i).distance < 3 * min_dist
                good_matches(i).distance = matches(i).distance;
                good_matches(i).imgIdx = matches(i).imgIdx;
                good_matches(i).queryIdx = matches(i).queryIdx;
                good_matches(i).trainIdx = matches(i).trainIdx;
                %Recording the number of good matches
                goodmatchesSize = goodmatchesSize +1;
            end
        end
        im_matches = cv.drawMatches(object, objKeypoints, scene, sceneKeypoints,good_matches);
        objPoints = [];
        scnPoints = [];

        %Finding the good matches
        for i=1:goodmatchesSize
            qryIdx = good_matches(i).queryIdx;
            trnIdx = good_matches(i).trainIdx;
            if qryIdx == 0 
                continue 
            end;
            if trnIdx == 0
                continue
            end;
            first_point = objKeypoints(qryIdx).pt;
            second_point = sceneKeypoints(trnIdx).pt;
            objPoints(i,:)= (first_point);
            scnPoints(i,:) = (second_point);
        end
        %Error checking     
        if length(scnPoints) <=4
            continue
        end;
        if length(scnPoints)~= length(objPoints)
            continue
        end;

        % Finding homography of arrays of two sets of points 
        H = cv.findHomography(objPoints,scnPoints);

        objectCorners = [];
        sceneCorners =[];

        objectCorners(1,1) = 0.1;
        objectCorners(1,2) = 0.1;
        objectCorners(2,1) = size(object,2);
        objectCorners(2,2) = 0.1;
        objectCorners(3,1) = size(object,2);
        objectCorners(3,2) = size(object,1);
        objectCorners(4,1) = 0.1;
        objectCorners(4,2) = size(object,1);
        %Transposing the object corners for perspective transform to work
        newObj = shiftdim(objectCorners,-1);
        %Calculating the perspective transform
        foo = cv.perspectiveTransform(newObj,H);
        sceneCorners = shiftdim(foo,1);
        offset = [];
        offset(1,1) = size(object,2);
        offset(1,2)= 0;

        outimg = cv.line(im_matches,sceneCorners(1,:)+offset,sceneCorners(2,:)+offset);
        outimg = cv.line(outimg,sceneCorners(2,:)+offset,sceneCorners(3,:)+offset);
        outimg = cv.line(outimg,sceneCorners(3,:)+offset,sceneCorners(4,:)+offset);
        outimg = cv.line(outimg,sceneCorners(4,:)+offset,sceneCorners(1,:)+offset);
        imshow(outimg);

     % Terminate if any user input
        flag = getappdata(window,'flag');
        if isempty(flag)||flag, break; end
        pause(0.000000001);
    end
% Close
    close(window);
end

First, the obvious questions:

How do you know the matches are good? Have you drawn them over the images to verify them? Are you sure the matches are ordered correctly when you pass them to the fitting routine?

You note that the homography coefficients you get are "slightly" different, but their absolute changes do not mean much, because a homography is only defined up to scale. What matters is the reprojection error in image coordinates.
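To make the point above concrete, here is a small NumPy sketch (not OpenCV code; `apply_homography` is a made-up helper, and `H` and the point sets are invented for illustration). It shows that scaling H by any constant leaves the mapped points unchanged, and that the meaningful quantity is the reprojection error measured in pixels:

```python
import numpy as np

def apply_homography(H, pts):
    """Apply a 3x3 homography to an (N, 2) array of points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # divide by third coord

# An arbitrary homography for demonstration
H = np.array([[1.2,  0.1, 30.0],
              [0.0,  1.1, 15.0],
              [1e-4, 0.0,  1.0]])
corners = np.array([[0.0, 0.0], [640.0, 0.0], [640.0, 480.0], [0.0, 480.0]])

# H and 5*H map the corners to exactly the same image points:
assert np.allclose(apply_homography(H, corners),
                   apply_homography(5.0 * H, corners))

# Reprojection error: mean pixel distance between where H sends the object
# points and the (here: synthetically perturbed) measured scene points.
scene_pts = apply_homography(H, corners) \
    + np.random.default_rng(0).normal(0, 0.5, (4, 2))
err = np.linalg.norm(apply_homography(H, corners) - scene_pts, axis=1).mean()
print(f"mean reprojection error: {err:.2f} px")
```

So comparing raw coefficient values between two implementations tells you little; compare the reprojection errors instead.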

Do you even need a full homography? For this application an affine transform, or even a similarity transform (dx, dy, scale and rotation), may be enough. A more constrained transform will behave better in the presence of noise.
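As a sketch of that last suggestion, a similarity transform has only four parameters and can be fit by plain least squares. The NumPy code below is illustrative only (`fit_similarity` is a made-up helper, and the point sets are synthetic); in OpenCV itself something like `estimateAffinePartial2D` plays a comparable role:

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping src -> dst, where src and dst are (N, 2) arrays, N >= 2.
    Parametrized as x' = a*x - b*y + tx,  y' = b*x + a*y + ty."""
    N = len(src)
    A = np.zeros((2 * N, 4))
    A[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(N), np.zeros(N)])
    A[1::2] = np.column_stack([src[:, 1],  src[:, 0], np.zeros(N), np.ones(N)])
    a, b, tx, ty = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)[0]
    scale = np.hypot(a, b)
    angle = np.arctan2(b, a)
    return scale, angle, np.array([tx, ty])

# Synthetic check: rotate by 30 degrees, scale by 2, translate by (10, -5)
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, (20, 2))
th = np.deg2rad(30.0)
R = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])
dst = 2.0 * src @ R.T + np.array([10.0, -5.0])

s, ang, t = fit_similarity(src, dst)
print(s, np.rad2deg(ang), t)
```

With only four degrees of freedom instead of eight, a bad match has far less leverage to warp the estimated transform, which is why the constrained fit is more robust to outliers and noise.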
