I have two corresponding image points (2D), seen by the same camera with intrinsic matrix K but from two different camera poses (R1, t1) and (R2, t2). If I triangulate the corresponding image points to a 3D point and then reproject it back into the original cameras, it only closely matches the original image point in the first camera. Can someone help me understand why? Below is a minimal example that shows the problem:
import cv2
import numpy as np
# Set up two cameras near each other
K = np.array([
[718.856, 0., 607.1928],
[0., 718.856, 185.2157],
[0., 0., 1.],
])
R1 = np.array([
[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]
])
R2 = np.array([
[0.99999183, -0.00280829, -0.00290702],
[0.0028008, 0.99999276, -0.00257697],
[0.00291424, 0.00256881, 0.99999245]
])
t1 = np.array([[0.], [0.], [0.]])
t2 = np.array([[-0.02182627], [ 0.00733316], [ 0.99973488]])
P1 = np.hstack([R1.T, -R1.T.dot(t1)])
P2 = np.hstack([R2.T, -R2.T.dot(t2)])
P1 = K.dot(P1)
P2 = K.dot(P2)
# Corresponding image points
imagePoint1 = np.array([371.91915894, 221.53485107])
imagePoint2 = np.array([368.26071167, 224.86262512])
# Triangulate
point3D = cv2.triangulatePoints(P1, P2, imagePoint1, imagePoint2).T
point3D = point3D[:, :3] / point3D[:, 3:4]
print(point3D)
# Reproject back into the two cameras
rvec1, _ = cv2.Rodrigues(R1)
rvec2, _ = cv2.Rodrigues(R2)
p1, _ = cv2.projectPoints(point3D, rvec1, t1, K, distCoeffs=None)
p2, _ = cv2.projectPoints(point3D, rvec2, t2, K, distCoeffs=None)
# measure difference between original image points and reprojected image points
reprojection_error1 = np.linalg.norm(imagePoint1 - p1[0, :])
reprojection_error2 = np.linalg.norm(imagePoint2 - p2[0, :])
print(reprojection_error1, reprojection_error2)
The reprojection error in the first camera is always fine (< 1 px), but in the second camera it is always large.
Remember how you built the projection matrices: you combined the transpose of the rotation matrix with the negative of the translation vector. You have to do the same thing when you hand the pose to cv2.projectPoints. So take the transpose of the rotation matrix and feed that into cv2.Rodrigues, and pass the negative of the translation vector to cv2.projectPoints:
# Reproject back into the two cameras
rvec1, _ = cv2.Rodrigues(R1.T) # Change
rvec2, _ = cv2.Rodrigues(R2.T) # Change
p1, _ = cv2.projectPoints(point3D, rvec1, -t1, K, distCoeffs=None) # Change
p2, _ = cv2.projectPoints(point3D, rvec2, -t2, K, distCoeffs=None) # Change
Doing this, we now get:
[[-12.19064 1.8813655 37.24711708]]
0.009565768222768252 0.08597237597736622
Just to be sure, here are the relevant variables:
In [32]: p1
Out[32]: array([[[371.91782052, 221.5253794 ]]])
In [33]: p2
Out[33]: array([[[368.3204979 , 224.92440583]]])
In [34]: imagePoint1
Out[34]: array([371.91915894, 221.53485107])
In [35]: imagePoint2
Out[35]: array([368.26071167, 224.86262512])
We can see that the first few significant figures match. A slight loss of accuracy is expected, since the triangulated point is only a least-squares solution.
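As an extra sanity check, one could also reproject the triangulated point directly with the same projection matrices P1 and P2 that were used for triangulation, which avoids the cv2.projectPoints pose convention altogether. This is only a sketch that reuses the P1, P2, point3D, imagePoint1 and imagePoint2 defined above; the helper name reproject_with_P is made up for illustration:

def reproject_with_P(P, X):
    # Project a 3D point with a 3x4 projection matrix and dehomogenize.
    X_hom = np.append(np.asarray(X).ravel(), 1.0)  # [X, Y, Z, 1]
    x = P.dot(X_hom)
    return x[:2] / x[2]

p1_check = reproject_with_P(P1, point3D)
p2_check = reproject_with_P(P2, point3D)
print(np.linalg.norm(imagePoint1 - p1_check),
      np.linalg.norm(imagePoint2 - p2_check))

Because cv2.triangulatePoints minimizes error with respect to exactly these matrices, both residuals from this direct check should be small, which is consistent with the large error in the original script coming from the pose passed to cv2.projectPoints rather than from the triangulation itself.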