How to output x265 compressed video with cv2.VideoWriter



I am doing some rendering on a 45-minute, 1.2GB video; each of the 80,000 frames is 1344x756 and the video is in MP4 format. I am trying to output the video with x265 compression. The problem is that when I use cv2.VideoWriter, the output for 10 minutes of video is already over 2GB, which is not what I intend to end up with, so I tried the following on macOS and Ubuntu 18:

codec = cv2.VideoWriter_fourcc(*'HEVC')
out = cv2.VideoWriter('output.mp4', codec, fps, (width, height))

All I get is this runtime warning:

OpenCV: FFMPEG: tag 0x43564548/'HEVC' is not found (format 'mp4 / MP4 (MPEG-4 Part 14)')

What I am trying to achieve is not necessarily the highest quality output, but rather good quality with the smallest possible file size.

As far as I know, OpenCV's VideoWriter does not support HEVC encoding yet.

I suggest running FFmpeg as a sub-process and piping the rendered frames to the stdin input stream of ffmpeg.

You may use a Python binding for ffmpeg such as ffmpeg-python, or execute ffmpeg using a Python subprocess.
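For the binding route, a minimal sketch using ffmpeg-python might look like the following (this is only an illustration, assuming the package is installed with pip install ffmpeg-python; the resolution, frame rate, CRF value and output name are placeholders matching the question):

import ffmpeg  # pip install ffmpeg-python (a thin wrapper that still invokes the ffmpeg executable)
import numpy as np

width, height, fps = 1344, 756, 25

# Raw BGR frames in through stdin, HEVC-encoded MP4 out.
process = (
    ffmpeg
    .input('pipe:', format='rawvideo', pix_fmt='bgr24', s=f'{width}x{height}', r=fps)
    .output('output.mp4', vcodec='libx265', pix_fmt='yuv420p', crf=24)
    .overwrite_output()
    .run_async(pipe_stdin=True)
)

# Feed a few dummy frames, then close stdin and wait for ffmpeg to finish.
for _ in range(50):
    frame = np.full((height, width, 3), 60, np.uint8)
    process.stdin.write(frame.tobytes())

process.stdin.close()
process.wait()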

Using ffmpeg you get much better control over the video encoding parameters than with cv2.VideoWriter (cv2.VideoWriter is designed for simplicity at the expense of flexibility); a variant that passes additional encoder options is sketched right after the sample code below.

Here is sample code that renders 50 frames and streams them to ffmpeg, which encodes an MP4 video file using the HEVC video codec:

import cv2
import numpy as np
import subprocess as sp
import shlex

width, height, n_frames, fps = 1344, 756, 50, 25  # 50 frames, resolution 1344x756, and 25 fps
output_filename = 'output.mp4'

# Open ffmpeg application as sub-process
# FFmpeg input PIPE: RAW images in BGR color format
# FFmpeg output MP4 file encoded with HEVC codec.
# Arguments list:
# -y                   Overwrite output file without asking
# -s {width}x{height}  Input resolution width x height (1344x756)
# -pixel_format bgr24  Input frame color format is BGR with 8 bits per color component
# -f rawvideo          Input format: raw video
# -r {fps}             Frame rate: fps (25fps)
# -i pipe:             ffmpeg input is a PIPE
# -vcodec libx265      Video codec: H.265 (HEVC)
# -pix_fmt yuv420p     Output video color space YUV420 (saving space compared to YUV444)
# -crf 24              Constant quality encoding (lower value for higher quality and larger output file).
# {output_filename}    Output file name: output_filename (output.mp4)
process = sp.Popen(shlex.split(f'ffmpeg -y -s {width}x{height} -pixel_format bgr24 -f rawvideo -r {fps} -i pipe: -vcodec libx265 -pix_fmt yuv420p -crf 24 {output_filename}'), stdin=sp.PIPE)

# Build synthetic video frames and write them to ffmpeg input stream.
for i in range(n_frames):
    # Build synthetic image for testing ("render" a video frame).
    img = np.full((height, width, 3), 60, np.uint8)
    cv2.putText(img, str(i+1), (width//2-100*len(str(i+1)), height//2+100), cv2.FONT_HERSHEY_DUPLEX, 10, (255, 30, 30), 20)  # Blue number

    # Write raw video frame to input stream of ffmpeg sub-process.
    process.stdin.write(img.tobytes())

# Close and flush stdin
process.stdin.close()

# Wait for sub-process to finish
process.wait()

# Terminate the sub-process
process.terminate()  # Note: We don't have to terminate the sub-process (after process.wait(), the sub-process is supposed to be closed).
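As an illustration of the finer control mentioned above, the command string can be extended with additional libx265 options. This is only a sketch reusing the imports and variables from the sample above; the preset and parameter values are arbitrary examples, not recommendations:

# Variant of the command above: -preset trades encoding speed for compression
# efficiency, and -x265-params passes options directly to the x265 encoder.
cmd = (
    f'ffmpeg -y -s {width}x{height} -pixel_format bgr24 -f rawvideo -r {fps} -i pipe: '
    f'-vcodec libx265 -preset slow -crf 26 -x265-params log-level=error '
    f'-pix_fmt yuv420p {output_filename}'
)
process = sp.Popen(shlex.split(cmd), stdin=sp.PIPE)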

Notes:

  • The ffmpeg executable must be in the execution path of the Python script (a quick check is sketched after these notes).

  • On Linux, if ffmpeg is not in the execution path, you may use the full path:

    process = sp.Popen(shlex.split(f'/usr/bin/ffmpeg -y -s {width}x{height} -pixel_format bgr24 -f rawvideo -r {fps} -i pipe: -vcodec libx265 -pix_fmt yuv420p -crf 24 {output_filename}'), stdin=sp.PIPE)
    

    (assuming the ffmpeg executable is in /usr/bin/).

  • The f-string syntax requires Python 3.6 or above.
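As a quick sanity check for the first note, shutil.which from the standard library can tell you whether ffmpeg is reachable from the script (a minimal sketch):

import shutil

ffmpeg_path = shutil.which('ffmpeg')  # Full path of the executable, or None if not found
if ffmpeg_path is None:
    raise FileNotFoundError('ffmpeg was not found in the execution path')
print(f'Using ffmpeg at: {ffmpeg_path}')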


C++ example:

In Python there are multiple FFmpeg bindings that allow H.265 video encoding.
In C++ there are far fewer options...

We can apply a similar solution in C++ (using an FFmpeg sub-process).
To execute the FFmpeg sub-process and open its stdin pipe, we may use _popen on Windows and popen on Linux.

Note:

  • I have noticed that _popen is less reliable than CreateProcess, and we need to wait (say, one extra second at the end) before the output file is closed.
    I am not sure whether popen on Linux has a similar issue.

C++ code sample:

#include <stdio.h>
#include <chrono>
#include <thread>
#include "opencv2/opencv.hpp"
#include <string>

int main()
{
    // 50 frames, resolution 1344x756, and 25 fps
    int width = 1344;
    int height = 756;
    int n_frames = 50;
    int fps = 25;

    const std::string output_filename = "output.mp4"; //Example for a file name with spaces: "\"output with spaces.mp4\""

    //Open ffmpeg application as sub-process
    //FFmpeg input PIPE: RAW images in BGR color format
    //FFmpeg output MP4 file encoded with HEVC codec (using libx265 encoder).
    std::string ffmpeg_cmd = std::string("ffmpeg -y -f rawvideo -r ") + std::to_string(fps) +
        " -video_size " + std::to_string(width) + "x" + std::to_string(height) +
        " -pixel_format bgr24 -i pipe: -vcodec libx265 -crf 24 -pix_fmt yuv420p " + output_filename;

    //Execute FFmpeg as sub-process, open stdin pipe (of FFmpeg sub-process) for writing.
    //In Windows we need to use _popen and in Linux popen
#ifdef _MSC_VER
    FILE* pipeout = _popen(ffmpeg_cmd.c_str(), "wb");   //Windows (ffmpeg.exe must be in the execution path)
#else
    //https://batchloaf.wordpress.com/2017/02/12/a-simple-way-to-read-and-write-audio-and-video-files-in-c-using-ffmpeg-part-2-video/
    FILE* pipeout = popen(ffmpeg_cmd.c_str(), "w");     //Linux (assumes ffmpeg exists in /usr/bin/ffmpeg and in the path)
#endif

    for (int i = 0; i < n_frames; i++)
    {
        //Build synthetic image for testing ("render" a video frame):
        cv::Mat frame = cv::Mat(height, width, CV_8UC3);
        frame = cv::Scalar(60, 60, 60); //Fill background with dark gray
        cv::putText(frame, std::to_string(i+1), cv::Point(width/2 - 100*(int)(std::to_string(i+1).length()), height/2+100), cv::FONT_HERSHEY_DUPLEX, 10, cv::Scalar(255, 30, 30), 20);  // Draw a blue number
        //cv::imshow("frame", frame); cv::waitKey(1); //Show the frame for testing

        //Write width*height*3 bytes to stdin pipe of FFmpeg sub-process (assume frame data is continuous in the RAM).
        fwrite(frame.data, 1, (size_t)width*height*3, pipeout);
    }

    //Flush and close input and output pipes
    fflush(pipeout);
#ifdef _MSC_VER
    _pclose(pipeout);   //Windows
#else
    pclose(pipeout);    //Linux
#endif

    //It looks like we need to wait one more second at the end. //https://stackoverflow.com/a/62804585/4926757
    std::this_thread::sleep_for(std::chrono::milliseconds(1000)); // sleep for 1 second

    return 0;
}
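Whichever variant is used, it is worth confirming that the output really was encoded with HEVC and checking the resulting file size. A minimal check from Python, assuming ffprobe is installed alongside ffmpeg (the output file name is the one used in the examples above):

import os
import subprocess as sp

# Ask ffprobe for the codec name of the first video stream (should print "hevc").
codec = sp.run(
    ['ffprobe', '-v', 'error', '-select_streams', 'v:0',
     '-show_entries', 'stream=codec_name',
     '-of', 'default=noprint_wrappers=1:nokey=1', 'output.mp4'],
    capture_output=True, text=True).stdout.strip()

size_mb = os.path.getsize('output.mp4') / 1e6
print(f'codec: {codec}, size: {size_mb:.1f} MB')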
