OpenCV Python, reading video from a named pipe



I am trying to achieve the result shown in this video (method 3, using netcat): https://www.youtube.com/watch?v=sYGdge3T30o

The idea is to stream video from a Raspberry Pi to an Ubuntu PC and process it there with OpenCV and Python.

I use the command

raspivid -vf -n -w 640 -h 480 -o - -t 0 -b 2000000 | nc 192.168.0.20 5777

to stream the video to my PC. On the PC I then created a named pipe "fifo" and redirected the nc output into it:

 nc -l -p 5777 -v > fifo
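For completeness, the named pipe has to exist before the `nc ... > fifo` redirection runs; in a shell that is `mkfifo fifo`. A minimal Python sketch of the same step (assuming Linux; `FIFO_PATH` is just the file name used above):

```python
import os
import stat

FIFO_PATH = "fifo"  # same name used in the nc redirection above

# The named pipe must exist before `nc -l -p 5777 -v > fifo` is run;
# os.mkfifo is the Python equivalent of the `mkfifo fifo` shell command.
if not os.path.exists(FIFO_PATH):
    os.mkfifo(FIFO_PATH)

# Sanity check: confirm the path really is a FIFO, not a regular file.
assert stat.S_ISFIFO(os.stat(FIFO_PATH).st_mode)
```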

Then I try to read the pipe and display the result in a Python script:

import cv2

video_capture = cv2.VideoCapture(r'fifo')
video_capture.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
video_capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    if not ret:
        # Skip the iteration rather than passing frame=None to imshow
        continue
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()

But I end up with the error

[mp3 @ 0x18b2940] Header missing

which is raised by the call video_capture = cv2.VideoCapture(r'fifo').

When I instead redirect the netcat output on the PC to a regular file and then read that file in Python, the video works, but it plays back roughly 10x too fast.

I know the problem is in the Python script, because the nc transfer itself works (to a file), but I can't find any clues.

How can I achieve the result shown in the linked video (method 3)?

I also wanted to achieve the same result as in that video. Initially I tried an approach similar to yours, but it seems that cv2.VideoCapture() cannot read from a named pipe directly; some more preprocessing is required.

FFMPEG is the way to go! You can install/compile ffmpeg following the instructions at this link: https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu

After installing it, you can change your code like this:

import cv2
import subprocess as sp
import numpy

FFMPEG_BIN = "ffmpeg"
command = [FFMPEG_BIN,
           '-i', 'fifo',           # fifo is the named pipe
           '-pix_fmt', 'bgr24',    # OpenCV expects the bgr24 pixel format
           '-vcodec', 'rawvideo',
           '-an', '-sn',           # disable audio/subtitle processing (there is none)
           '-f', 'image2pipe', '-']
pipe = sp.Popen(command, stdout=sp.PIPE, bufsize=10**8)
while True:
    # Capture frame-by-frame: one raw bgr24 frame is 640*480*3 bytes
    raw_image = pipe.stdout.read(640*480*3)
    if len(raw_image) != 640*480*3:
        break                       # short read: the stream has ended
    # Transform the bytes read into a numpy array
    # (frombuffer replaces the deprecated numpy.fromstring)
    image = numpy.frombuffer(raw_image, dtype='uint8')
    image = image.reshape((480, 640, 3))   # note: height first, then width
    cv2.imshow('Video', image)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    pipe.stdout.flush()
cv2.destroyAllWindows()

You don't need to change anything else in the Raspberry Pi side script.

This worked like a charm for me. The video lag is negligible. Hope it helps.

I was working on a similar problem, and after some more research I eventually stumbled upon the following:

Skip ahead to the solution: https://stackoverflow.com/a/48675107/2355051

I ended up adapting this picamera python recipe.

On the Raspberry Pi: (createStream.py)

import io
import socket
import struct
import time
import picamera
# Connect a client socket to the server on port 777 (change the IP below
# to the hostname/address of your server)
client_socket = socket.socket()
client_socket.connect(('10.0.0.3', 777))
# Make a file-like object out of the connection
connection = client_socket.makefile('wb')
try:
    with picamera.PiCamera() as camera:
        camera.resolution = (1024, 768)
        # Start a preview and let the camera warm up for 2 seconds
        camera.start_preview()
        time.sleep(2)
        # Note the start time and construct a stream to hold image data
        # temporarily (we could write it directly to connection but in this
        # case we want to find out the size of each capture first to keep
        # our protocol simple)
        start = time.time()
        stream = io.BytesIO()
        for foo in camera.capture_continuous(stream, 'jpeg', use_video_port=True):
            # Write the length of the capture to the stream and flush to
            # ensure it actually gets sent
            connection.write(struct.pack('<L', stream.tell()))
            connection.flush()
            # Rewind the stream and send the image data over the wire
            stream.seek(0)
            connection.write(stream.read())
            # Reset the stream for the next capture
            stream.seek(0)
            stream.truncate()
    # Write a length of zero to the stream to signal we're done
    connection.write(struct.pack('<L', 0))
finally:
    connection.close()
    client_socket.close()
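The wire format used above is just a little-endian 32-bit length prefix followed by the JPEG bytes, with a zero length marking the end of the stream. A standalone sketch of that framing (the helper names `send_frame`/`recv_frame` are illustrative):

```python
import io
import struct

def send_frame(conn, payload):
    # Length prefix first (struct.pack('<L', ...)), then the payload bytes
    conn.write(struct.pack('<L', len(payload)))
    conn.write(payload)

def recv_frame(conn):
    # Returns None when the zero-length end-of-stream marker is read
    payload_len = struct.unpack('<L', conn.read(struct.calcsize('<L')))[0]
    if not payload_len:
        return None
    return conn.read(payload_len)

# Round trip through an in-memory buffer standing in for the socket file
buf = io.BytesIO()
send_frame(buf, b'fake-jpeg-bytes')
send_frame(buf, b'')               # end-of-stream marker
buf.seek(0)
assert recv_frame(buf) == b'fake-jpeg-bytes'
assert recv_frame(buf) is None
```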

On the computer that processes the stream: (processStream.py)

import io
import socket
import struct
import cv2
import numpy as np
# Start a socket listening for connections on 0.0.0.0:777 (0.0.0.0 means
# all interfaces)
server_socket = socket.socket()
server_socket.bind(('0.0.0.0', 777))
server_socket.listen(0)
# Accept a single connection and make a file-like object out of it
connection = server_socket.accept()[0].makefile('rb')
try:
    while True:
        # Read the length of the image as a 32-bit unsigned int. If the
        # length is zero, quit the loop
        image_len = struct.unpack('<L', connection.read(struct.calcsize('<L')))[0]
        if not image_len:
            break
        # Construct a stream to hold the image data and read the image
        # data from the connection
        image_stream = io.BytesIO()
        image_stream.write(connection.read(image_len))
        # Rewind the stream and decode the JPEG bytes with OpenCV
        # (frombuffer replaces the deprecated np.fromstring; the unused
        # PIL Image.open call was removed, as Image was never imported)
        image_stream.seek(0)
        data = np.frombuffer(image_stream.getvalue(), dtype=np.uint8)
        imagedisp = cv2.imdecode(data, 1)
        cv2.imshow("Frame", imagedisp)
        cv2.waitKey(1)  # imshow will not display an image if you do not call waitKey
finally:
    connection.close()
    server_socket.close()
    cv2.destroyAllWindows()  # cleanup windows once the stream ends

The results of this solution are similar to the video I referenced in my original question. Larger-resolution frames increase the latency from the source, but that is tolerable for my application.

First run processStream.py, and then execute createStream.py on the Raspberry Pi.
