Correct way to extract 3D points from a depth image



I use the following code to extract 3D points from a depth image:

import cv2

def retrieve_3d_points(K, depth_image_path):
    depth_factor = 1000.0  # depth stored in millimetres; convert to metres
    depth_img = cv2.imread(depth_image_path,
                           cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH) / depth_factor
    row, col = depth_img.shape
    pts3d = []
    fx = K[0][0]
    cx = K[0][2]
    fy = K[1][1]
    cy = K[1][2]
    for i in range(row):        # i indexes rows (image y)
        for j in range(col):    # j indexes columns (image x)
            depth = depth_img[i][j]
            x, y = j, i         # pixel x is the column index, pixel y the row index
            if depth > 0.0:
                x3D = (x - cx) * depth / fx
                y3D = (y - cy) * depth / fy
                z3D = depth
                pts3d.append([x3D, y3D, z3D])
            else:
                pts3d.append([-1, -1, -1])
    return pts3d

Unfortunately, the retrieved points are not that accurate in scale. Is my retrieval correct?
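For reference, the pinhole back-projection in the loop above can also be written in vectorized form. This is a sketch using NumPy; the function name `retrieve_3d_points_vec` and the separation of image loading from back-projection are my own choices, not part of the original code. Note that pixel x corresponds to the column index and pixel y to the row index:

```python
import numpy as np

def retrieve_3d_points_vec(K, depth_img):
    """Back-project a depth map (in metres) through pinhole intrinsics K."""
    fx, fy = K[0][0], K[1][1]
    cx, cy = K[0][2], K[1][2]
    rows, cols = depth_img.shape
    # u is the column (pixel x), v is the row (pixel y)
    u, v = np.meshgrid(np.arange(cols), np.arange(rows))
    z = depth_img
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    pts[z.reshape(-1) <= 0.0] = -1  # mark invalid depths, as in the loop version
    return pts
```

The points come out in the same row-major order as the nested loops, so the two versions can be compared element by element.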

The depth may be generated as distance from the camera rather than normalized to a specific range. You have to loop through it to find the range: the minimum and maximum values.

minimum, maximum = float("inf"), float("-inf")  # start from infinities, not 0
for i in range(row):
    for j in range(col):
        pixel = depth_img[i][j]
        if pixel < minimum:
            minimum = pixel
        if pixel > maximum:
            maximum = pixel
extent = maximum - minimum

normalized = [[0.0] * col for _ in range(row)]  # pre-allocate a row x col grid
for i in range(row):
    for j in range(col):
        ##  subtract the minimum from every point, then divide by the extent
        ##  to normalize values within the range 0-1
        normalized[i][j] = (depth_img[i][j] - minimum) / extent
---
scale = 255  ##  optional, if you expect a different range of values, e.g. 0-255
for i in range(row):
    for j in range(col):
        normalized[i][j] = (depth_img[i][j] - minimum) / extent * scale

Then run retrieve_3d_points() on the array with the expected values.
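The min/max scan and the two normalization loops above can be condensed with NumPy. This is a sketch under the assumption that `depth_img` is a float array; the helper name `normalize_depth` and the zero-extent guard are my additions:

```python
import numpy as np

def normalize_depth(depth_img, scale=1.0):
    """Rescale depth values to [0, scale] using the image's own min/max."""
    minimum = depth_img.min()
    extent = depth_img.max() - minimum
    if extent == 0:  # constant image: avoid division by zero
        return np.zeros_like(depth_img)
    return (depth_img - minimum) / extent * scale
```

Passing `scale=255` reproduces the optional 0-255 variant from the loops above.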
