Streaming a depth camera (Intel D455) over the network with Python



I am trying to stream the depth and RGB video of a depth camera (Intel D455) over the network.

I am reusing the script from here: https://pyshine.com/Live-streaming-multiple-videos-on-a-webpage/

My problem is that when I launch the script with the 2 threads, both ports (9000 and 9001) show the last thread, but if I launch only one of them, that port shows the right video (and of course the other one does not work)...

Do you know where I made a mistake? (Maybe in the threading?)
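
For reference, this is roughly the setup I mean. It is only a minimal sketch of two MJPEG servers on ports 9000 and 9001 run from two threads; the generators produce synthetic placeholder frames instead of the RealSense streams, and the structure is an assumption, not the actual pyshine script:

# Minimal sketch of the two-thread / two-port setup (an assumption, not the pyshine script):
# each port gets its own Flask app and its own generator, with synthetic placeholder frames.
import threading
import numpy as np
import cv2
from flask import Flask, Response

def make_app(label):
    app = Flask(label)

    def gen_frames():
        x = 0
        while True:
            # Placeholder frame: a moving gray bar plus the stream label
            img = np.zeros((240, 360, 3), dtype=np.uint8)
            cv2.rectangle(img, (x % 360, 0), (x % 360 + 20, 240), (128, 128, 128), -1)
            cv2.putText(img, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
            x += 5
            ok, buf = cv2.imencode('.jpg', img)
            frame = buf.tobytes()
            yield (b'--frame\r\n'
                   b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')

    @app.route('/stream.mjpg')
    def stream():
        return Response(gen_frames(), mimetype='multipart/x-mixed-replace; boundary=frame')

    return app

if __name__ == '__main__':
    # Ports 9000 and 9001, as in the question
    for label, port in (('depth', 9000), ('color', 9001)):
        t = threading.Thread(target=make_app(label).run,
                             kwargs={'host': '0.0.0.0', 'port': port},
                             daemon=True)
        t.start()
    threading.Event().wait()  # keep the main thread alive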

Thanks

Here is the code:

In case anyone needs to stream a depth camera from Intel:

import cv2
import pyrealsense2 as rs
import numpy as np
from flask import Flask, render_template, Response


app = Flask('hello')
HTML="""
<html>
<head>
<title>PyShine Live Streaming</title>
</head>
<body>
<center><h1> PyShine Live Streaming Multiple videos </h1></center>
<center><img src="youraddress:port/stream.mjpg" width='360' height='240' autoplay playsinline></center>
</body>
</html>
"""

# Configure depth and color streams
pipeline = rs.pipeline()
config = rs.config()
# Get device product line for setting a supporting resolution
pipeline_wrapper = rs.pipeline_wrapper(pipeline)
pipeline_profile = config.resolve(pipeline_wrapper)
device = pipeline_profile.get_device()
device_product_line = str(device.get_info(rs.camera_info.product_line))
found_rgb = False
for s in device.sensors:
    if s.get_info(rs.camera_info.name) == 'RGB Camera':
        found_rgb = True
        break
if not found_rgb:
    print("The demo requires Depth camera with Color sensor")
    exit(0)
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
if device_product_line == 'L500':
    config.enable_stream(rs.stream.color, 960, 540, rs.format.bgr8, 30)
else:
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)


class ImgCapture():
    def __init__(self):
        pass

    def read(self):
        # Wait for a coherent pair of frames: depth and color
        self.frames = pipeline.wait_for_frames()
        depth_frame = self.frames.get_depth_frame()
        color_frame = self.frames.get_color_frame()
        # Convert images to numpy arrays
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())

        # Scale the 16-bit depth image down to 8 bits per pixel so it can be JPEG-encoded
        depth_colormap = cv2.convertScaleAbs(depth_image, alpha=0.03)
        depth_colormap_dim = depth_colormap.shape
        color_colormap_dim = color_image.shape
        # If depth and color resolutions are different, resize color image to match depth image for display
        if depth_colormap_dim != color_colormap_dim:
            color_image = cv2.resize(color_image, dsize=(depth_colormap_dim[1], depth_colormap_dim[0]),
                                     interpolation=cv2.INTER_AREA)

        return (color_image, depth_colormap)

    def isOpened(self):
        # The pipeline is started in __main__ before any reader is created
        return True

class ImgDepth():
    def __init__(self, cap):
        self.capture = cap

    def read(self):
        color_image, depth_colormap = self.capture.read()
        ret = depth_colormap is not None
        return (ret, depth_colormap)

    def isOpened(self):
        color_image, depth_colormap = self.capture.read()
        return color_image is not None

class ImgColor():
    def __init__(self, cap):
        self.capture = cap

    def read(self):
        color_image, depth_colormap = self.capture.read()
        ret = color_image is not None
        return (ret, color_image)

    def isOpened(self):
        color_image, depth_colormap = self.capture.read()
        return color_image is not None


def gen_frames_depth():
    while True:
        success, DEPTH = capture_depth.read()
        # print(DEPTH)  # debug: dumps the full depth array every frame
        if not success:
            break
        else:
            _, buffer_DEPTH = cv2.imencode('.jpg', DEPTH)
            frame_depth = buffer_DEPTH.tobytes()
            yield (b'--frame\r\n'
                   b'Content-Type: image/jpeg\r\n'
                   b'Content-Length: ' + f"{len(frame_depth)}".encode() + b'\r\n'
                   b'\r\n' + frame_depth + b'\r\n')


@app.route('/video_feed_depth')
def video_feed_depth():
    return Response(gen_frames_depth(), mimetype='multipart/x-mixed-replace; boundary=frame')

def gen_frames_color():
    while True:
        success, RGB = capture_color.read()
        if not success:
            break
        else:
            _, buffer_RGB = cv2.imencode('.jpg', RGB)
            frame_RGB = buffer_RGB.tobytes()
            yield (b'--frame\r\n'
                   b'Content-Type: image/jpeg\r\n'
                   b'Content-Length: ' + f"{len(frame_RGB)}".encode() + b'\r\n'
                   b'\r\n' + frame_RGB + b'\r\n')


@app.route('/video_feed_color')
def video_feed_color():
    return Response(gen_frames_color(), mimetype='multipart/x-mixed-replace; boundary=frame')
@app.route('/')
def index():
    return """
    <body>
    <div class="container">
      <div class="row">
        <div class="col-lg-8 offset-lg-2">
          <h3 class="mt-5">Live Streaming</h3>
          <img src="/video_feed_depth" width="50%">
          <img src="/video_feed_color" width="50%">
        </div>
      </div>
    </div>
    </body>
    """
if __name__ == '__main__':
    # Start streaming
    pipeline.start(config)
    capture0 = ImgCapture()
    capture_depth = ImgDepth(capture0)
    capture_color = ImgColor(capture0)
    app.run(host="0.0.0.0")
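
To check the two endpoints from another machine, OpenCV (with its FFmpeg backend) can usually open MJPEG URLs directly. A minimal client sketch follows; the IP address is a placeholder for the machine running the Flask server, and 5000 is simply Flask's default port when app.run() is called without a port:

# Minimal client sketch: open the two MJPEG endpoints with OpenCV.
# The IP address is a placeholder; adjust it to the server's address.
import cv2

depth_url = "http://192.168.1.50:5000/video_feed_depth"
color_url = "http://192.168.1.50:5000/video_feed_color"

cap_depth = cv2.VideoCapture(depth_url)
cap_color = cv2.VideoCapture(color_url)

while True:
    ok_d, depth = cap_depth.read()
    ok_c, color = cap_color.read()
    if not (ok_d and ok_c):
        break
    cv2.imshow("depth", depth)
    cv2.imshow("color", color)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap_depth.release()
cap_color.release()
cv2.destroyAllWindows()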
