I have a mesh model and have rendered a view of it with VTK from a given camera position (x, y, z). I can save this as an RGB image (640x480), but I would also like to save a depth map in which each pixel stores its depth value from the camera.
I have tried using the Z-buffer values provided by the render window, following the available examples. The problem is that the Z-buffer only stores values in the range [0, 1]. What I am trying to create instead is a synthetic range image in which each pixel stores its depth/distance from the camera, similar to the images produced by a Kinect, but rendered from a specific viewpoint of the mesh model.
Edit: adding some code.
My current code:
Load the mesh
string mesh_filename = "mesh.ply";
vtkSmartPointer<vtkPLYReader> mesh_reader = read_mesh_ply(mesh_filename);
vtkSmartPointer<vtkPolyDataMapper> mapper = vtkSmartPointer<vtkPolyDataMapper>::New();
mapper->SetInputConnection(mesh_reader->GetOutputPort());
vtkSmartPointer<vtkActor> actor = vtkSmartPointer<vtkActor>::New();
actor->SetMapper(mapper);
vtkSmartPointer<vtkRenderer> renderer = vtkSmartPointer<vtkRenderer>::New();
vtkSmartPointer<vtkRenderWindow> renderWindow = vtkSmartPointer<vtkRenderWindow>::New();
renderWindow->AddRenderer(renderer);
renderWindow->SetSize(640, 480);
vtkSmartPointer<vtkRenderWindowInteractor> renderWindowInteractor = vtkSmartPointer<vtkRenderWindowInteractor>::New();
renderWindowInteractor->SetRenderWindow(renderWindow);
//Add the actors to the scene
renderer->AddActor(actor);
renderer->SetBackground(1, 1, 1);
Create a camera and place it somewhere
vtkSmartPointer<vtkCamera> camera = vtkSmartPointer<vtkCamera>::New();
renderer->SetActiveCamera(camera);
camera->SetPosition(0,0,650);
//Render and interact
renderWindow->Render();
Get the result from the z-buffer
double b = renderer->GetZ(320, 240);
In this case the result is 0.999995. Since the values lie in [0, 1], I do not know how to interpret them. As you can see, I have placed the camera 650 units away along the z-axis, so I would expect the z-distance for this pixel (which lies on the object in the rendered RGB image) to be close to 650.
This Python snippet demonstrates how to convert z-buffer values into actual distances from the camera. The non-linear mapping is defined as follows:
numerator = 2.0 * z_near * z_far
denominator = z_far + z_near - (2.0 * z_buffer_data_numpy - 1.0) * (z_far - z_near)
depth_buffer_data_numpy = numerator / denominator
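Packaged as a small helper, the same mapping can also be applied to a single GetZ() reading such as the 0.999995 from the question. This is only a sketch: the clipping-range values below are placeholders for illustration, and the real ones must come from the camera's GetClippingRange() at render time, which is also why the raw z-buffer value alone is not directly interpretable as a distance.
def zbuffer_to_distance(z, z_near, z_far):
    # Invert the non-linear depth mapping: z in [0, 1] -> distance from the camera.
    return (2.0 * z_near * z_far) / (
        z_far + z_near - (2.0 * z - 1.0) * (z_far - z_near))

# Placeholder clipping range; in practice use camera.GetClippingRange().
z_near, z_far = 0.1, 1000.0
print(zbuffer_to_distance(0.999995, z_near, z_far))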
Here is a complete example:
import vtk
import numpy as np
from vtk.util import numpy_support
import matplotlib.pyplot as plt
vtk_renderer = vtk.vtkRenderer()
vtk_render_window = vtk.vtkRenderWindow()
vtk_render_window.AddRenderer(vtk_renderer)
vtk_render_window_interactor = vtk.vtkRenderWindowInteractor()
vtk_render_window_interactor.SetRenderWindow(vtk_render_window)
vtk_render_window_interactor.Initialize()
source = vtk.vtkCubeSource()
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(source.GetOutputPort())
actor = vtk.vtkActor()
actor.SetMapper(mapper)
actor.RotateX(60.0)
actor.RotateY(-35.0)
vtk_renderer.AddActor(actor)
vtk_render_window.Render()
# The camera clipping range (z_near, z_far) is needed to invert the depth mapping.
active_vtk_camera = vtk_renderer.GetActiveCamera()
z_near, z_far = active_vtk_camera.GetClippingRange()
z_buffer_data = vtk.vtkFloatArray()
width, height = vtk_render_window.GetSize()
vtk_render_window.GetZbufferData(
    0, 0, width - 1, height - 1, z_buffer_data)
z_buffer_data_numpy = numpy_support.vtk_to_numpy(z_buffer_data)
z_buffer_data_numpy = np.reshape(z_buffer_data_numpy, (-1, width))
z_buffer_data_numpy = np.flipud(z_buffer_data_numpy) # flipping along the first axis (y)
# Convert the [0, 1] z-buffer values into distances from the camera.
numerator = 2.0 * z_near * z_far
denominator = z_far + z_near - (2.0 * z_buffer_data_numpy - 1.0) * (z_far - z_near)
depth_buffer_data_numpy = numerator / denominator
# Pixels that hit nothing have a z-buffer value of 1.0; mark them as NaN.
non_depth_data_value = np.nan
depth_buffer_data_numpy[z_buffer_data_numpy == 1.0] = non_depth_data_value
print(np.nanmin(depth_buffer_data_numpy))
print(np.nanmax(depth_buffer_data_numpy))
plt.imshow(np.asarray(depth_buffer_data_numpy))
plt.show()
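Since the original goal was a Kinect-style range image, one possible way to persist the result is sketched below. It continues from the example above (reusing depth_buffer_data_numpy and np) and is not part of the original answer; it treats world units as metres, stores 16-bit millimetre depths with no-hit pixels set to 0, and the file name is only illustrative.
# Treat world units as metres, scale to millimetres, and set no-hit (NaN) pixels to 0.
depth_mm = np.nan_to_num(depth_buffer_data_numpy, nan=0.0) * 1000.0
depth_mm = depth_mm.astype(np.uint16)
np.save("depth_map.npy", depth_mm)  # illustrative file name; a 16-bit PNG writer would also work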
Side note: on my system, the imshow command occasionally displayed nothing; re-running the script resolved the issue.
Sources:
http://web.archive.org
open3d