reprojectImageTo3D() in OpenCV



I have been trying to use OpenCV's reprojectImageTo3D() function to compute the real-world coordinates of points from a disparity map, but the output appears to be incorrect.

I have the calibration parameters and compute the Q matrix using:

stereoRectify(left_cam_matrix, left_dist_coeffs, right_cam_matrix, right_dist_coeffs, frame_size, stereo_params.R, stereo_params.T, R1, R2, P1, P2, Q, CALIB_ZERO_DISPARITY, 0, frame_size, 0, 0);

I believe this first step is correct, since the stereo frames are being rectified properly, and the undistortion I perform also looks fine. The disparity map is computed with OpenCV's block-matching algorithm, and it looks good too.

The 3D points are computed as follows:

cv::Mat XYZ(disparity8U.size(), CV_32FC3);
reprojectImageTo3D(disparity8U, XYZ, Q, false, CV_32F);

But for some reason the points form a kind of cone, and are not even close to what I would expect given the disparity map. I have seen other people report similar problems with this function, and I was wondering whether anyone has a solution.

Thanks in advance!

[Edit]

stereoRectify(left_cam_matrix, left_dist_coeffs, right_cam_matrix, right_dist_coeffs, frame_size, stereo_params.R, stereo_params.T, R1, R2, P1, P2, Q, CALIB_ZERO_DISPARITY, 0, frame_size, 0, 0);
initUndistortRectifyMap(left_cam_matrix, left_dist_coeffs, R1, P1, frame_size, CV_32FC1, left_undist_rect_map_x, left_undist_rect_map_y);
initUndistortRectifyMap(right_cam_matrix, right_dist_coeffs, R2, P2, frame_size, CV_32FC1, right_undist_rect_map_x, right_undist_rect_map_y);
cv::remap(left_frame, left_undist_rect, left_undist_rect_map_x, left_undist_rect_map_y, CV_INTER_CUBIC, BORDER_CONSTANT, 0);
cv::remap(right_frame, right_undist_rect, right_undist_rect_map_x, right_undist_rect_map_y, CV_INTER_CUBIC, BORDER_CONSTANT, 0);
cv::Mat imgDisparity32F = Mat( left_undist_rect.rows, left_undist_rect.cols, CV_32F );  
StereoBM sbm(StereoBM::BASIC_PRESET,80,5);
sbm.state->preFilterSize  = 15;
sbm.state->preFilterCap   = 20;
sbm.state->SADWindowSize  = 11;
sbm.state->minDisparity   = 0;
sbm.state->numberOfDisparities = 80;
sbm.state->textureThreshold = 0;
sbm.state->uniquenessRatio = 8;
sbm.state->speckleWindowSize = 0;
sbm.state->speckleRange = 0;
// Compute disparity
sbm(left_undist_rect, right_undist_rect, imgDisparity32F, CV_32F );
// Compute world coordinates from the disparity image
cv::Mat XYZ(imgDisparity32F.size(), CV_32FC3);
reprojectImageTo3D(imgDisparity32F, XYZ, Q, false, CV_32F);
print_3D_points(imgDisparity32F, XYZ);
[Edit]

Adding the code used to compute the 3D coordinates from the disparity:

cv::Vec3f *StereoFrame::compute_3D_world_coordinates(int row, int col,
  shared_ptr<StereoParameters> stereo_params_sptr){
 cv::Mat Q_32F;
 stereo_params_sptr->Q_sptr->convertTo(Q_32F,CV_32F);
 cv::Mat_<float> vec(4,1);
 vec(0) = col;
 vec(1) = row;
 vec(2) = this->disparity_sptr->at<float>(row,col);
 // Discard points with 0 disparity    
 if(vec(2)==0) return NULL;
 vec(3)=1;              
 vec = Q_32F*vec;
 vec /= vec(3);
 // Discard points that are too far from the camera, and thus are highly
 // unreliable
 if(std::abs(vec(0))>10 || std::abs(vec(1))>10 || std::abs(vec(2))>10) return NULL;
 cv::Vec3f *point3f = new cv::Vec3f();
 (*point3f)[0] = vec(0);
 (*point3f)[1] = vec(1);
 (*point3f)[2] = vec(2);
    return point3f;
}

Your code looks fine to me. The problem may lie in reprojectImageTo3D. Try replacing it with the following code (which plays the same role):

cv::Mat_<cv::Vec3f> XYZ(disparity32F.rows,disparity32F.cols);   // Output point cloud
cv::Mat_<float> vec_tmp(4,1);
for(int y=0; y<disparity32F.rows; ++y) {
    for(int x=0; x<disparity32F.cols; ++x) {
        vec_tmp(0)=x; vec_tmp(1)=y; vec_tmp(2)=disparity32F.at<float>(y,x); vec_tmp(3)=1;
        vec_tmp = Q*vec_tmp;   // Q must also be CV_32F here (use Q.convertTo() if it is CV_64F)
        vec_tmp /= vec_tmp(3);
        cv::Vec3f &point = XYZ.at<cv::Vec3f>(y,x);
        point[0] = vec_tmp(0);
        point[1] = vec_tmp(1);
        point[2] = vec_tmp(2);
    }
}

I have never used reprojectImageTo3D myself, but I have successfully used code similar to the snippet above.

(Original answer)

As explained in the StereoBM documentation, if you request a CV_16S disparity map, you have to divide each disparity value by 16 before using it.

Hence, you should convert the disparity map as follows before using it:

imgDisparity16S.convertTo( imgDisparity32F, CV_32F, 1./16);

You can also request a CV_32F disparity map directly from StereoBM, in which case you obtain the true disparity values right away.
