Comparing OpenCV PnP with OpenGV PnP



I am trying to build a test project to compare the OpenCV solvePnP implementation with the OpenGV one.

OpenCV is detailed here:

https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_calibration_and_3d_reconstruction.html#solvepnp

and OpenGV here:

https://laurentkneip.github.io/opengv/page_how_to_use.html

Using the OpenCV example code, I find the chessboard in the image and build the matching 3D points. I run the CV PnP, then set up the GV solver. The CV PnP runs fine and prints these values:

    //rotation
    -0.003040771263293328, 0.9797142824436152, -0.2003763421317906;
     0.0623096853748876, 0.2001735322445355, 0.977777101438374]
    //translation
    [-12.06549797067309;
     -9.533070368412945;
     37.6825295047483]

I tested by reprojecting the 3D points, and it looks good.
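(For reference, such a reprojection check can be written with cv::projectPoints; this is a minimal sketch of mine, not from the original post, assuming the variable names from the code below.)

    // Sketch: reproject the model points with the pose from solvePnP
    // and report the mean pixel error against the detected corners.
    std::vector<cv::Point2f> reprojected;
    cv::projectPoints(objectPoints, rvec, tvec, cameraMatrix, distCoeffs, reprojected);
    double err = 0.0;
    for (size_t i = 0; i < imagePoints.size(); ++i) {
        cv::Point2f d = imagePoints[i] - reprojected[i];
        err += std::hypot(d.x, d.y);
    }
    std::cout << "mean reprojection error: " << err / imagePoints.size() << " px" << std::endl;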

However, the GV PnP prints NaN for all values. I have tried to follow the example code, but I must be making a mistake somewhere. The code is:

    #include <iostream>
    #include <opencv2/opencv.hpp>
    #include <opengv/types.hpp>
    #include <opengv/absolute_pose/methods.hpp>
    #include <opengv/absolute_pose/CentralAbsoluteAdapter.hpp>

    using namespace opengv;

    int main(int argc, char **argv) {

        cv::Mat matImg = cv::imread("chess.jpg");
        cv::Size boardSize(8, 6);
        //Construct the chessboard model
        double squareSize = 2.80;
        std::vector<cv::Point3f> objectPoints;
        for (int i = 0; i < boardSize.height; i++) {
            for (int j = 0; j < boardSize.width; j++) {
                objectPoints.push_back(
                    cv::Point3f(float(j * squareSize), float(i * squareSize), 0.f));
            }
        }
        cv::Mat rvec, tvec;
        cv::Mat cameraMatrix, distCoeffs;
        cv::FileStorage fs("CalibrationData.xml", cv::FileStorage::READ);
        fs["cameraMatrix"] >> cameraMatrix;
        fs["dist_coeffs"] >> distCoeffs;

        //Find the chessboard corners
        std::vector<cv::Point2f> imagePoints;
        bool found = cv::findChessboardCorners(matImg, boardSize, imagePoints, cv::CALIB_CB_FAST_CHECK);
        if (found) {
            cv::drawChessboardCorners(matImg, boardSize, cv::Mat(imagePoints), found);
            //SolvePnP
            cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
            drawAxis(matImg, cameraMatrix, distCoeffs, rvec, tvec, squareSize); //helper, not shown
        }

        //cv rotation vector to rotation matrix
        cv::Mat R;
        cv::Rodrigues(rvec, R);
        std::cout << "results from cv:" << R << tvec << std::endl;

        //START OPEN GV
        //vars
        bearingVectors_t bearingVectors;
        points_t points;
        rotation_t rotation;
        rotation.setIdentity(); //not used by epnp, but avoid an uninitialized prior

        //add the 3D points to the gv type
        for (size_t i = 0; i < objectPoints.size(); ++i)
        {
            point_t pnt;
            pnt.x() = objectPoints[i].x;
            pnt.y() = objectPoints[i].y;
            pnt.z() = objectPoints[i].z;
            points.push_back(pnt);
        }
        /*
        K is the common 3x3 camera matrix that you can compose with cx, cy, fx, and fy.
        You put the image point into homogeneous form (append a 1),
        multiply it with the inverse of K from the left, which gives you a normalized
        image point (a spatial direction vector).
        You normalize that to norm 1.
        */
        //to homogeneous
        std::vector<cv::Point3f> imagePointsH;
        cv::convertPointsToHomogeneous(imagePoints, imagePointsH);

        //multiply by K.inv()
        for (size_t i = 0; i < imagePointsH.size(); i++)
        {
            cv::Point3f pt = imagePointsH[i];
            cv::Mat ptMat(3, 1, CV_64F); //cameraMatrix from FileStorage is CV_64F
            ptMat.at<double>(0, 0) = pt.x;
            ptMat.at<double>(1, 0) = pt.y;
            ptMat.at<double>(2, 0) = pt.z;
            cv::Mat dstMat = cameraMatrix.inv() * ptMat;
            //store as a unit-length bearing vector
            bearingVector_t bvec;
            bvec.x() = dstMat.at<double>(0, 0);
            bvec.y() = dstMat.at<double>(1, 0);
            bvec.z() = dstMat.at<double>(2, 0);
            bvec.normalize();
            bearingVectors.push_back(bvec);
        }

        //create a central absolute adapter
        absolute_pose::CentralAbsoluteAdapter adapter(
            bearingVectors,
            points,
            rotation);

        size_t iterations = 50;
        std::cout << "running epnp (all correspondences)" << std::endl;
        transformation_t epnp_transformation;
        for (size_t i = 0; i < iterations; i++)
            epnp_transformation = absolute_pose::epnp(adapter);
        std::cout << "results from epnp algorithm:" << std::endl;
        std::cout << epnp_transformation << std::endl << std::endl;

        return 0;
    }
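One caveat worth noting when comparing the two solvers (my observation, not part of the original question): cv::solvePnP above is given distCoeffs, while the manual K-inverse multiplication ignores lens distortion, so the bearing vectors are built from distorted pixels. A sketch that builds the bearings via cv::undistortPoints instead, which removes distortion and applies K-inverse in one call:

    // Sketch: cv::undistortPoints outputs normalized image coordinates
    // (distortion removed, K^-1 applied) when no P matrix is passed.
    std::vector<cv::Point2f> normalized;
    cv::undistortPoints(imagePoints, normalized, cameraMatrix, distCoeffs);
    bearingVectors_t undistortedBearings;
    for (const auto& n : normalized) {
        bearingVector_t b(n.x, n.y, 1.0); // direction in the camera frame
        b.normalize();                    // OpenGV expects unit-length bearings
        undistortedBearings.push_back(b);
    }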

Where am I going wrong in setting up the OpenGV PnP solver?

A few years later I ran into the same problem and solved it. To convert OpenCV points into OpenGV bearing vectors, you can do this:

    bearingVectors_t bearingVectors;
    std::vector<cv::Point2f> dd2; // the detected 2D image points

    const int N1 = static_cast<int>(dd2.size());
    cv::Mat points1_mat = cv::Mat(dd2).reshape(1); // N x 2, CV_32F
    // construct homogeneous points: append a column of ones -> N x 3
    cv::Mat ones_col1 = cv::Mat::ones(N1, 1, CV_32F);
    cv::hconcat(points1_mat, ones_col1, points1_mat);
    // rectify points: the rows here are transposed points, so right-multiply
    // by K^-T (equivalent to applying K^-1 to each column point);
    // K is converted to CV_32F to match the point type
    cv::Mat K32;
    cameraMatrix.convertTo(K32, CV_32F);
    cv::Mat points1_rect = points1_mat * K32.inv().t();
    // compute bearings
    points2bearings3(points1_rect, &bearingVectors);

using this function for the final conversion:

    // Convert a set of points to bearings.
    // points: matrix of size Nx3 with the set of points.
    // bearings: vector of unit-length bearing vectors.
    void points2bearings3(const cv::Mat& points,
        opengv::bearingVectors_t* bearings) {
        double l;
        cv::Vec3f p;
        opengv::bearingVector_t bearing;
        for (int i = 0; i < points.rows; ++i) {
            p = cv::Vec3f(points.row(i));
            l = std::sqrt(p[0] * p[0] + p[1] * p[1] + p[2] * p[2]);
            for (int j = 0; j < 3; ++j) bearing[j] = p[j] / l;
            bearings->push_back(bearing);
        }
    }
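To close the loop, the converted bearings go into the adapter just as in the question. One detail that matters for the actual comparison (again a note of mine, hedged): as far as I know, OpenGV's absolute-pose methods return the camera pose in the world frame, while cv::solvePnP returns the world-to-camera transform, so one result has to be inverted before the numbers will match. A usage sketch, assuming `points` holds the 3D points as in the question:

    // Sketch: run epnp on the converted data and express the result in
    // OpenCV's convention. Assumes OpenGV returns [R_wc | t_wc] (camera pose
    // in the world frame); then R_cw = R_wc^T and t_cw = -R_wc^T * t_wc.
    opengv::rotation_t rotation = opengv::rotation_t::Identity();
    opengv::absolute_pose::CentralAbsoluteAdapter adapter(
        bearingVectors, points, rotation);
    opengv::transformation_t T = opengv::absolute_pose::epnp(adapter); // 3x4
    Eigen::Matrix3d R_cw = T.block<3, 3>(0, 0).transpose();
    Eigen::Vector3d t_cw = -R_cw * T.block<3, 1>(0, 3);
    std::cout << "epnp in OpenCV convention:\n" << R_cw << "\n"
              << t_cw.transpose() << std::endl;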
