Get camera pose with cv2.solvePnP from homography results



I georeference an image with the code below.

With inputs

grid    = "for example a utm grid"
img_raw = cv2.imread(filename)
mtx, dist = "intrinsic camera matrix and distortion coefficients from calibration"
src_pts = "camera location of gcp on undistorted image"
dst_pts = "world location of gcp in the grid coordinate"

I corrected the camera distortion and applied the homography

img = cv2.undistort(img_raw, mtx, dist, None, None)
H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
# warpPerspective needs the homography and an output size given as (width, height)
img_geo = cv2.warpPerspective(img, H, (grid.shape[1], grid.shape[0]),
                              flags=cv2.INTER_NEAREST, borderValue=0)

Then I want to get the camera position. I tried to compute the rotation and translation vectors with cv2.solvePnP, for example as below. If I understand correctly, I need the camera and world coordinates of at least 4 coplanar points.

flag, rvec, tvec = cv2.solvePnP(world, cam, mtx, dist)  # world points must be Nx3

If I understand correctly, the image coordinates passed to solvePnP need to come from the raw image frame, not from the undistorted frame as src_pts does (solvePnP applies the distortion coefficients itself).

So my question is: how can I get the pixel locations of src_pts in the raw image frame? Or is there another way to obtain the camera pose?

Maybe the function projectPoints is what you need. Link here: http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#projectpoints

Here is the solution I found

grid    = "for example a utm grid"
img_raw = cv2.imread(filename)
mtx, dist = "intrinsic camera matrix and distortion coefficients from calibration"
src_pts = "camera location of gcp on raw image"
dst_pts = "world location of gcp in the grid coordinate"

Note that src_pts are now points in the raw (distorted) image

src_pts_undistorted = cv2.undistortPoints(src_pts, mtx, dist, P=mtx)
img = cv2.undistort(img_raw, mtx, dist, None, None)
H, mask = cv2.findHomography(src_pts_undistorted, dst_pts, cv2.RANSAC, 5.0)
img_geo = cv2.warpPerspective(img, H, (grid.shape[1], grid.shape[0]),
                              flags=cv2.INTER_NEAREST, borderValue=0)

Then I can get the pose from solvePnP

# dst_pts must be Nx3; append a zero z-coordinate for the coplanar ground points
flag, rvec, tvec = cv2.solvePnP(dst_pts, src_pts, mtx, dist)
