I am following this tutorial and trying to draw epipolar lines on a stereo image pair using the fundamental matrix (Fmat) that I obtained with cv2.stereoCalibrate. I want to use my imported Fmat.npy instead of cv.findFundamentalMat with cv.FM_RANSAC. However, both of my attempts at modifying the code produce similar ValueErrors.
The code is as follows:
import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt

# img1 and img2 are the left and right stereo images, loaded earlier in the script (not shown here)

# 1. Detect keypoints and their descriptors
# Based on: https://docs.opencv.org/master/dc/dc3/tutorial_py_matcher.html
# Initiate SIFT detector
sift = cv.SIFT_create()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
# Visualize keypoints
imgSift = cv.drawKeypoints(
    img1, kp1, None, flags=cv.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv.imshow("SIFT Keypoints", imgSift)
# Match keypoints in both images
# Based on: https://docs.opencv.org/master/dc/dc3/tutorial_py_matcher.html
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50) # or pass empty dictionary
flann = cv.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)
# Keep good matches: calculate distinctive image features
# Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision 60, 91–110 (2004). https://doi.org/10.1023/B:VISI.0000029664.99615.94
# https://www.cs.ubc.ca/~lowe/papers/ijcv04.pdf
matchesMask = [[0, 0] for i in range(len(matches))]
good = []
pts1 = []
pts2 = []
for i, (m, n) in enumerate(matches):
    if m.distance < 0.7*n.distance:
        # Keep this keypoint pair
        matchesMask[i] = [1, 0]
        good.append(m)
        pts2.append(kp2[m.trainIdx].pt)
        pts1.append(kp1[m.queryIdx].pt)
# Draw the keypoint matches between both pictures
# Still based on: https://docs.opencv.org/master/dc/dc3/tutorial_py_matcher.html
draw_params = dict(matchColor=(0, 255, 0),
                   singlePointColor=(255, 0, 0),
                   matchesMask=matchesMask[300:500],
                   flags=cv.DrawMatchesFlags_DEFAULT)
keypoint_matches = cv.drawMatchesKnn(
    img1, kp1, img2, kp2, matches[300:500], None, **draw_params)
cv.imshow("Keypoint matches", keypoint_matches)
# ------------------------------------------------------------
# STEREO RECTIFICATION
Fmat = np.load('Fmat.npy') # Load fundamental matrix
# Calculate the fundamental matrix for the cameras
# https://docs.opencv.org/master/da/de9/tutorial_py_epipolar_geometry.html
pts1 = np.int32(pts1)
pts2 = np.int32(pts2)
fundamental_matrix, inliers = cv.findFundamentalMat(pts1, pts2, cv.FM_RANSAC) #Fmat
# We select only inlier points
pts1 = pts1[inliers.ravel() == 1]
pts2 = pts2[inliers.ravel() == 1]
# Visualize epilines
# Adapted from: https://docs.opencv.org/master/da/de9/tutorial_py_epipolar_geometry.html
def drawlines(img1src, img2src, lines, pts1src, pts2src):
    ''' img1 - image on which we draw the epilines for the points in img2
        lines - corresponding epilines '''
    r, c = img1src.shape
    img1color = cv.cvtColor(img1src, cv.COLOR_GRAY2BGR)
    img2color = cv.cvtColor(img2src, cv.COLOR_GRAY2BGR)
    # Edit: use the same random seed so that two images are comparable!
    np.random.seed(0)
    for r, pt1, pt2 in zip(lines, pts1src, pts2src):
        color = tuple(np.random.randint(0, 255, 3).tolist())
        x0, y0 = map(int, [0, -r[2]/r[1]])
        x1, y1 = map(int, [c, -(r[2]+r[0]*c)/r[1]])
        img1color = cv.line(img1color, (x0, y0), (x1, y1), color, 1)
        img1color = cv.circle(img1color, tuple(pt1), 5, color, -1)
        img2color = cv.circle(img2color, tuple(pt2), 5, color, -1)
    return img1color, img2color
# Find epilines corresponding to points in right image (second image) and
# drawing its lines on left image
lines1 = cv.computeCorrespondEpilines(
    pts2.reshape(-1, 1, 2), 2, fundamental_matrix)
lines1 = lines1.reshape(-1, 3)
img5, img6 = drawlines(img1, img2, lines1, pts1, pts2)
# Find epilines corresponding to points in left image (first image) and
# drawing its lines on right image
lines2 = cv.computeCorrespondEpilines(
    pts1.reshape(-1, 1, 2), 1, fundamental_matrix)
lines2 = lines2.reshape(-1, 3)
img3, img4 = drawlines(img2, img1, lines2, pts2, pts1)
plt.subplot(121), plt.imshow(img5)
plt.subplot(122), plt.imshow(img3)
plt.suptitle("Epilines in both images")
plt.show()
When I run the code above with the line fundamental_matrix, inliers = cv.findFundamentalMat(pts1, pts2, cv.FM_RANSAC), exactly as provided in the tutorial, the following error is returned:
Traceback (most recent call last):
File "C:Usersxxxstereo-camerafeatureMatching.py", line 103, in <module>
img5, img6 = drawlines(img1, img2, lines1, pts1, pts2)
File "C:Usersxxxstereo-camerafeatureMatching.py", line 83, in drawlines
r, c = img1src.shape
ValueError: too many values to unpack (expected 2)
When I replace the line r, c = img1src.shape with r, c, *_ = img1src.shape, I get the following new error:
Traceback (most recent call last):
File "C:UsersjoiniOneDriveDocumentscodeDCEstereoVisionObstacleAvoidSystemstereo-camerafeatureMatching.py", line 105, in <module>
img5, img6 = drawlines(img1, img2, lines1, pts1, pts2)
File "C:UsersjoiniOneDriveDocumentscodeDCEstereoVisionObstacleAvoidSystemstereo-camerafeatureMatching.py", line 86, in drawlines
img1color = cv.cvtColor(img1src, cv.COLOR_GRAY2BGR)
cv2.error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\color.simd_helpers.hpp:92: error: (-2:Unspecified error) in function '__cdecl cv::impl::`anonymous-namespace'::CvtHelper<struct cv::impl::`anonymous namespace'::Set<1,-1,-1>,struct cv::impl::A0xf2302844::Set<3,4,-1>,struct cv::impl::A0xf2302844::Set<0,2,5>,2>::CvtHelper(const class cv::_InputArray &,const class cv::_OutputArray &,int)'
> Invalid number of channels in input image:
> 'VScn::contains(scn)'
> where
> 'scn' is 3
When I instead change the line to fundamental_matrix, inliers = Fmat in order to use the loaded fundamental matrix, I get the following error:
Traceback (most recent call last):
File "C:Usersxxxstereo-camerafeatureMatching.py", line 71, in <module>
fundamental_matrix, inliers = Fmat
ValueError: too many values to unpack (expected 2)
The images have shape (480, 640, 3).
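For reference, that three-channel shape is what OpenCV's default color loading produces, whereas the tutorial reads the images as grayscale; a minimal comparison (the file names here are placeholders, not my actual paths):
import cv2 as cv

img_color = cv.imread('left.png')                       # default flag: BGR image, shape (rows, cols, 3)
img_gray = cv.imread('left.png', cv.IMREAD_GRAYSCALE)   # tutorial-style load: shape (rows, cols)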
What went wrong in each case, and what do I need to do to produce the desired epiline results using the Fmat.npy file?
The line that throws the error,
r, c = img1src.shape
most likely indicates that the tutorial code was only tested on grayscale images, which have a (rows x cols) shape. I am guessing you are using an RGB image, which has a (rows x cols x RGB) shape. The shape tuple therefore contains a third value, so it cannot be unpacked into just (r, c).
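A quick way to see the difference, with shapes matching your (480, 640, 3) images (illustration only):
import numpy as np

gray = np.zeros((480, 640), dtype=np.uint8)      # stand-in for a grayscale image: 2-D array
color = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a color image: 3-D array
r, c = gray.shape        # fine: the tuple holds exactly two values
r, c, ch = color.shape   # fine: three names for three values
r, c = color.shape       # raises ValueError: too many values to unpack (expected 2)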
Try replacing that line with
r, c, *_ = img1src.shape
so that the third dimension (the color channels), if present, is simply ignored.
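As a minimal sketch (untested against your data), the adapted function could look like the following. Note that the cvtColor calls also need a guard, because cv.COLOR_GRAY2BGR only accepts single-channel input; passing a 3-channel image is what produced the "Invalid number of channels" error in your second attempt:
import cv2 as cv
import numpy as np

def drawlines(img1src, img2src, lines, pts1src, pts2src):
    ''' Same as the tutorial function, adjusted to tolerate color input. '''
    # Ignore a possible third (channel) dimension when unpacking the shape
    r, c, *_ = img1src.shape
    # Only convert when the input really is single-channel; a 3-channel image
    # would make COLOR_GRAY2BGR fail with "Invalid number of channels"
    img1color = img1src.copy() if img1src.ndim == 3 else cv.cvtColor(img1src, cv.COLOR_GRAY2BGR)
    img2color = img2src.copy() if img2src.ndim == 3 else cv.cvtColor(img2src, cv.COLOR_GRAY2BGR)
    # Same random seed so that the two images get matching line colors
    np.random.seed(0)
    for line, pt1, pt2 in zip(lines, pts1src, pts2src):
        color = tuple(np.random.randint(0, 255, 3).tolist())
        x0, y0 = map(int, [0, -line[2] / line[1]])
        x1, y1 = map(int, [c, -(line[2] + line[0] * c) / line[1]])
        img1color = cv.line(img1color, (x0, y0), (x1, y1), color, 1)
        img1color = cv.circle(img1color, tuple(pt1), 5, color, -1)
        img2color = cv.circle(img2color, tuple(pt2), 5, color, -1)
    return img1color, img2color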