Extracting and analyzing frames while recording with PiCamera and OpenCV



I'm using PiCamera to stream video from my Raspberry Pi to a web socket, so I can view it on my local network.

I want to write my own motion-detection script from scratch, so I want to grab the first image from the video stream (it will be plain background) and then compare every subsequent frame against it with functions that check whether anything has changed (I wrote those functions separately). I'm not worried about efficiency here.

Main problem: I want to get the data of those frames out of the BytesIO objects and convert them to B&W so I can run my operations on them, all while keeping the stream going (in fact, I've lowered the framerate to 4 frames per second to make it run faster on my machine).

I'm running into trouble with the code below. One of the problems I found is that the numbers are way off: with my setup the camera resolution is 640*480 (= a numpy array of 307200 pixels in B&W), yet the count I get from len() is under 100k pixels.

def main():
    print('Initializing camera')
    base_image = io.BytesIO()
    image_captured = io.BytesIO()
    with picamera.PiCamera() as camera:
        camera.resolution = (WIDTH, HEIGHT)
        camera.framerate = FRAMERATE
        camera.vflip = VFLIP  # flips image rightside up, as needed
        camera.hflip = HFLIP  # flips image left-right, as needed
        sleep(1)  # camera warm-up time
        print('Initializing websockets server on port %d' % WS_PORT)
        WebSocketWSGIHandler.http_version = '1.1'
        websocket_server = make_server(
            '', WS_PORT,
            server_class=WSGIServer,
            handler_class=WebSocketWSGIRequestHandler,
            app=WebSocketWSGIApplication(handler_cls=StreamingWebSocket))
        websocket_server.initialize_websockets_manager()
        websocket_thread = Thread(target=websocket_server.serve_forever)
        print('Initializing HTTP server on port %d' % HTTP_PORT)
        http_server = StreamingHttpServer()
        http_thread = Thread(target=http_server.serve_forever)
        print('Initializing broadcast thread')
        output = BroadcastOutput(camera)
        broadcast_thread = BroadcastThread(output.converter, websocket_server)
        print('Starting recording')
        camera.start_recording(output, 'yuv')
        try:
            print('Starting websockets thread')
            websocket_thread.start()
            print('Starting HTTP server thread')
            http_thread.start()
            print('Starting broadcast thread')
            broadcast_thread.start()
            time.sleep(0.5)
            camera.capture(base_image, use_video_port=True, format='jpeg')
            base_data = np.asarray(bytearray(base_image.read()), dtype=np.uint64)
            base_img_matrix = cv2.imdecode(base_data, cv2.IMREAD_GRAYSCALE)
            while True:
                camera.wait_recording(1)
                # insert here the code for frame analysis
                camera.capture(image_captured, use_video_port=True, format='jpeg')
                data_next = np.asarray(bytearray(image_captured.read()), dtype=np.uint64)
                next_img_matrix = cv2.imdecode(data_next, cv2.IMREAD_GRAYSCALE)
                monitor_changes(base_img_matrix, next_img_matrix)
        except KeyboardInterrupt:
            pass
        finally:
            print('Stopping recording')
            camera.stop_recording()
            print('Waiting for broadcast thread to finish')
            broadcast_thread.join()
            print('Shutting down HTTP server')
            http_server.shutdown()
            print('Shutting down websockets server')
            websocket_server.shutdown()
            print('Waiting for HTTP server thread to finish')
            http_thread.join()
            print('Waiting for websockets thread to finish')
            websocket_thread.join()

if __name__ == '__main__':
    main()
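One likely culprit in the code above is how the BytesIO objects are read back. After `camera.capture()` writes into the buffer, the stream position sits at the end, so a bare `read()` returns nothing (or stale leftovers) unless you `seek(0)` first or use `getvalue()`. A minimal stdlib sketch, with a fake payload standing in for the JPEG bytes, shows the behaviour:

```python
import io

buf = io.BytesIO()
buf.write(b'\xff\xd8 fake jpeg bytes \xff\xd9')  # simulates camera.capture() writing

print(buf.read())            # b'' -- the position is at the end of the stream
buf.seek(0)                  # rewind before reading...
print(len(buf.read()))       # ...and now the full payload comes back
print(len(buf.getvalue()))   # getvalue() ignores the position entirely
```

This is why switching to `getvalue()` (as in the answer below) returns the full buffer regardless of where previous reads left the cursor.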

Solved. Basically, the problems were all in how I was handling the data and the BytesIO files. First, I needed to decode using unsigned int8 as the dtype. Then I switched to np.frombuffer to read the whole file; since the base image never changes, it will always read the same content, while the next image is now initialized and disposed of inside the while loop. Also, I can replace cv2.IMREAD_GRAYSCALE with 0 in the function.
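The dtype fix also explains the pixel count being "way off" in the question: np.frombuffer packs eight bytes into each element when the dtype is uint64, so the array comes out 8x shorter than the byte stream cv2.imdecode expects. A small sketch with a stand-in byte string (the exact length is arbitrary):

```python
import numpy as np

jpeg_bytes = bytes(range(256)) * 32  # stand-in for a JPEG payload, 8192 bytes

as_u8 = np.frombuffer(jpeg_bytes, dtype=np.uint8)    # one element per byte,
as_u64 = np.frombuffer(jpeg_bytes, dtype=np.uint64)  # vs. eight bytes per element

print(len(as_u8))   # 8192 -- what cv2.imdecode expects
print(len(as_u64))  # 1024 -- 8x too few elements
```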

camera.start_recording(output, 'yuv')
base_image = io.BytesIO()
try:
    print('Starting websockets thread')
    websocket_thread.start()
    print('Starting HTTP server thread')
    http_thread.start()
    print('Starting broadcast thread')
    broadcast_thread.start()
    time.sleep(0.5)
    camera.capture(base_image, use_video_port=True, format='jpeg')
    base_data = np.frombuffer(base_image.getvalue(), dtype=np.uint8)
    base_img_matrix = cv2.imdecode(base_data, 0)
    while True:
        camera.wait_recording(0.25)
        image_captured = io.BytesIO()
        # insert here the code for frame analysis
        camera.capture(image_captured, use_video_port=True, format='jpeg')
        data_next = np.frombuffer(image_captured.getvalue(), dtype=np.uint8)
        next_img_matrix = cv2.imdecode(data_next, cv2.IMREAD_GRAYSCALE)
        monitor_changes(base_img_matrix, next_img_matrix)
        image_captured.close()
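The monitor_changes function itself is not shown in the question. As an illustration only, a numpy-only sketch of what such a background comparison might look like (the signature, threshold, and min_changed values are assumptions, not the asker's actual code):

```python
import numpy as np

def monitor_changes(base, current, threshold=30, min_changed=500):
    """Flag motion when enough pixels differ from the background by more than
    `threshold` grey levels. Both inputs are uint8 grayscale frames."""
    # Widen to int16 before subtracting so the difference cannot wrap around.
    diff = np.abs(base.astype(np.int16) - current.astype(np.int16))
    changed = int(np.count_nonzero(diff > threshold))
    return changed >= min_changed

# Synthetic 480x640 frames: identical background, then a bright 40x40 patch.
base = np.zeros((480, 640), dtype=np.uint8)
moved = base.copy()
moved[100:140, 100:140] = 200

print(monitor_changes(base, base))   # False -- nothing changed
print(monitor_changes(base, moved))  # True -- 1600 pixels crossed the threshold
```

With OpenCV available, the diff-and-count step could equally be done with cv2.absdiff and cv2.threshold; the numpy version above keeps the sketch self-contained.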
