ffmpeg/libx264 C API: frames dropped from the end of a short MP4



In my C++ application I take a series of JPEG images, manipulate their data using FreeImage, and then encode the bitmaps as H264 using the ffmpeg/libx264 C API. The output is an MP4 that shows the series of 22 images at 12 fps. My code is adapted from the "muxing" example that ships with the ffmpeg C source code.

My problem: no matter how I tweak the codec parameters, a certain number of the frames passed to the encoder at the end of the sequence never appear in the final output. I have set the AVCodecContext parameters like this:

//set context params
ctx->codec_id = AV_CODEC_ID_H264;
ctx->bit_rate = 4000 * 1000;
ctx->width = _width;
ctx->height = _height;
ost->st->time_base = AVRational{ 1, 12 };
ctx->time_base = ost->st->time_base;
ctx->gop_size = 1;
ctx->pix_fmt = AV_PIX_FMT_YUV420P;

I have found that the higher the gop_size, the more frames are dropped from the end of the video. I can also see from the output that, at this gop size (where I am essentially instructing that every output frame be an I-frame), only 9 frames are written.

I am not sure why this is happening. I tried encoding duplicate frames to make a longer video, and that resulted in no dropped frames. I know the ffmpeg command-line tool has a concatenation command that accomplishes what I am trying to do, but I am not sure how to achieve the same thing with the C API.

Here is the output I am getting from the console:

[libx264 @ 026d81c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 026d81c0] profile High, level 3.1
[libx264 @ 026d81c0] 264 - core 152 r2851 ba24899 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=1 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=1 keyint_min=1 scenecut=40 intra_refresh=0 rc=abr mbtree=0 bitrate=4000 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to '....\images\c411a991-46f6-400c-8bb0-77af3738559a.mp4':
    Stream #0:0: Video: h264, yuv420p, 700x700, q=2-31, 4000 kb/s, 12 tbn

[libx264 @ 026d81c0] frame I:9     Avg QP:17.83  size:111058
[libx264 @ 026d81c0] mb I  I16..4:  1.9% 47.7% 50.5%
[libx264 @ 026d81c0] final ratefactor: 19.14
[libx264 @ 026d81c0] 8x8 transform intra:47.7%
[libx264 @ 026d81c0] coded y,uvDC,uvAC intra: 98.4% 96.9% 89.5%
[libx264 @ 026d81c0] i16 v,h,dc,p: 64%  6%  2% 28%
[libx264 @ 026d81c0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 32% 15%  9%  5%  5%  6%  8% 10% 10%
[libx264 @ 026d81c0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 28% 18%  7%  6%  8%  8%  8%  9%  8%
[libx264 @ 026d81c0] i8c dc,h,v,p: 43% 22% 25% 10%
[libx264 @ 026d81c0] kb/s:10661.53

The code is included below:

MP4Writer.h

#ifndef MPEG_WRITER
#define MPEG_WRITER
#include <iostream>
#include <string>
#include <vector>
#include <ImgData.h>
extern "C" {
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libswresample/swresample.h>
}
typedef struct OutputStream
{
    AVStream *st;
    AVCodecContext *enc;
    //pts of the next frame that will be generated
    int64_t next_pts;
    int samples_count;
    AVFrame *frame;
    AVFrame *tmp_frame;
    float t, tincr, tincr2;
    struct SwsContext *sws_ctx;
    struct SwrContext *swr_ctx;
} OutputStream;
class MP4Writer {
public:
    MP4Writer();
    void Init();
    int16_t SetOutput( const std::string & path );
    int16_t AddFrame( uint8_t * imgData );
    int16_t Write( std::vector<ImgData> & imgData );
    int16_t Finalize();
    void SetHeight( const int height ) { _height = _width = height; } //assuming 1:1 aspect ratio
private:
    int16_t AddStream( OutputStream * ost, AVFormatContext * formatCtx, AVCodec ** codec, enum AVCodecID codecId );
    int16_t OpenVideo( AVFormatContext * formatCtx, AVCodec *codec, OutputStream * ost, AVDictionary * optArg );
    static AVFrame * AllocPicture( enum AVPixelFormat pixFmt, int width, int height );
    static AVFrame * GetVideoFrame( uint8_t * imgData, OutputStream * ost, const int width, const int height );
    static int WriteFrame( AVFormatContext * formatCtx, const AVRational * timeBase, AVStream * stream, AVPacket * packet );
    int _width;
    int _height;
    OutputStream _ost;
    AVFormatContext * _formatCtx;
    AVDictionary * _dict;
};
#endif //MPEG_WRITER

MP4Writer.cpp

#include <MP4Writer.h>
#include <algorithm>
MP4Writer::MP4Writer()
{
    _width = 0;
    _height = 0;
    _formatCtx = NULL; //make sure pointers start out null
    _dict = NULL;
}
void MP4Writer::Init()
{
    av_register_all();
}
/**
sets up output stream for the specified path.
note that the output format is deduced automatically from the file extension passed
@param path: output file path
@returns: -1 = output could not be deduced, -2 = invalid codec, -3 = error opening output file,
-4 = error writing header
*/
int16_t MP4Writer::SetOutput( const std::string & path )
{
    int error;
    AVCodec * codec;
    AVOutputFormat * format;
    _ost = OutputStream{}; //TODO reset state in a more focused way?
    //allocate output media context
    avformat_alloc_output_context2( &_formatCtx, NULL, NULL, path.c_str() );
    if ( !_formatCtx ) {
        std::cout << "could not deduce output format from file extension.  aborting" << std::endl;
        return -1;
    }
    //set format
    format = _formatCtx->oformat;
    if ( format->video_codec != AV_CODEC_ID_NONE ) {
        AddStream( &_ost, _formatCtx, &codec, format->video_codec );
    }
    else {
        std::cout << "there is no video codec set.  aborting" << std::endl;
        return -2;
    }
    OpenVideo( _formatCtx, codec, &_ost, _dict );
    av_dump_format( _formatCtx, 0, path.c_str(), 1 );
    //open output file
    if ( !( format->flags & AVFMT_NOFILE )) {
        error = avio_open( &_formatCtx->pb, path.c_str(), AVIO_FLAG_WRITE );
        if ( error < 0 ) {
            std::cout << "there was an error opening output file " << path << ".  aborting" << std::endl;
            return -3;
        }
    }
    //write header
    error = avformat_write_header( _formatCtx, &_dict );
    if ( error < 0 ) {
        std::cout << "an error occurred writing header. aborting" << std::endl;
        return -4;
    }
    return 0;
}
/**
initialize the output stream
@param ost: the output stream
@param formatCtx: the format context
@param codec: the output codec
@param codecId: the ffmpeg enumerated id of the codec
@returns: -1 = encoder not found, -2 = stream could not be allocated, -3 = encoding context could not be allocated
*/
int16_t MP4Writer::AddStream( OutputStream * ost, AVFormatContext * formatCtx, AVCodec ** codec, enum AVCodecID codecId )
{
    AVCodecContext * ctx; //TODO not sure why this is here, could just set ost->enc directly
    int i;
    //detect the encoder
    *codec = avcodec_find_encoder( codecId );
    if ( (*codec) == NULL ) {
        std::cout << "could not find encoder.  aborting" << std::endl;
        return -1;
    }
    //allocate stream
    ost->st = avformat_new_stream( formatCtx, NULL );
    if ( ost->st == NULL ) {
        std::cout << "could not allocate stream.  aborting" << std::endl;
        return -2;
    }
    //allocate encoding context
    ost->st->id = formatCtx->nb_streams - 1;
    ctx = avcodec_alloc_context3( *codec );
    if ( ctx == NULL ) {
        std::cout << "could not allocate encoding context.  aborting" << std::endl;
        return -3;
    }
    ost->enc = ctx;
    //set context params
    ctx->codec_id = AV_CODEC_ID_H264;
    ctx->bit_rate = 4000 * 1000;
    ctx->width = _width;
    ctx->height = _height;
    ost->st->time_base = AVRational{ 1, 12 };
    ctx->time_base = ost->st->time_base;
    ctx->gop_size = 1;
    ctx->pix_fmt = AV_PIX_FMT_YUV420P;
    //if necessary, set stream headers and formats separately
    if ( formatCtx->oformat->flags & AVFMT_GLOBALHEADER ) {
        std::cout << "setting stream and headers to be separate" << std::endl;
        ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }
    return 0;
}
/**
open the video for writing
@param formatCtx: the format context
@param codec: output codec
@param ost: output stream
@param optArg: dictionary
@return: -1 = error opening codec, -2 = could not allocate new frame, -3 = could not copy stream params
*/
int16_t MP4Writer::OpenVideo( AVFormatContext * formatCtx, AVCodec *codec, OutputStream * ost, AVDictionary * optArg )
{
    int error;
    AVCodecContext * ctx = ost->enc;
    AVDictionary * dict = NULL;
    av_dict_copy( &dict, optArg, 0 );
    //open codec
    error = avcodec_open2( ctx, codec, &dict );
    av_dict_free( &dict );
    if ( error < 0 ) {
        std::cout << "there was an error opening the codec.  aborting" << std::endl;
        return -1;
    }
    //allocate new frame
    ost->frame = AllocPicture( ctx->pix_fmt, ctx->width, ctx->height );
    if ( ost->frame == NULL ) {
        std::cout << "there was an error allocating a new frame.  aborting" << std::endl;
        return -2;
    }
    //copy stream params
    error = avcodec_parameters_from_context( ost->st->codecpar, ctx );
    if ( error < 0 ) {
        std::cout << "could not copy stream parameters.  aborting" << std::endl;
        return -3;
    }
    return 0;
}
/**
allocate a new frame
@param pixFmt: ffmpeg enumerated pixel format
@param width: output width
@param height: output height
@returns: an initialized frame
*/
AVFrame * MP4Writer::AllocPicture( enum AVPixelFormat pixFmt, int width, int height )
{
    AVFrame * picture;
    int error;
    //allocate the frame
    picture = av_frame_alloc();
    if ( picture == NULL ) {
        std::cout << "there was an error allocating the picture" << std::endl;
        return NULL;
    }
    picture->format = pixFmt;
    picture->width = width;
    picture->height = height;
    //allocate the frame's data buffer
    error = av_frame_get_buffer( picture, 32 );
    if ( error < 0 ) {
        std::cout << "could not allocate frame data" << std::endl;
        return NULL;
    }
    picture->pts = 0;
    return picture;
}
/**
convert raw RGB buffer to YUV frame
@return: frame that contains image data
*/
AVFrame * MP4Writer::GetVideoFrame( uint8_t * imgData, OutputStream * ost, const int width, const int height )
{
    int error;
    AVCodecContext * ctx = ost->enc;
    //prepare the frame
    error = av_frame_make_writable( ost->frame );
    if ( error < 0 ) {
        std::cout << "could not make frame writeable" << std::endl;
        return NULL;
    }
    //TODO set this context one time per run, or even better, one time at init
    //convert RGB to YUV
    struct SwsContext* fooContext = sws_getContext( width, height, AV_PIX_FMT_BGR24,
        width, height, AV_PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL );
    int inLinesize[1] = { 3 * width }; // RGB stride
    uint8_t * inData[1] = { imgData };
    int sliceHeight = sws_scale( fooContext, inData, inLinesize, 0, height, ost->frame->data, ost->frame->linesize );
    sws_freeContext( fooContext );
    ost->frame->pts = ost->next_pts++;
    //TODO does the frame need to be returned here as it is available at the class level?
    return ost->frame;
}
/**
write frame to file
@param formatCtx: the output format context
@param timeBase: the encoder time base
@param stream: output stream
@param packet: data packet
@returns: see return values for av_interleaved_write_frame
*/
int MP4Writer::WriteFrame( AVFormatContext * formatCtx, const AVRational * timeBase, AVStream * stream, AVPacket * packet )
{
    av_packet_rescale_ts( packet, *timeBase, stream->time_base );
    packet->stream_index = stream->index;
    //write compressed frame to media file
    return av_interleaved_write_frame( formatCtx, packet );
}
int16_t MP4Writer::Write( std::vector<ImgData> & imgData )
{
    int16_t errorCount = 0;
    int16_t retVal = 0;
    bool countingUp = true;
    size_t i = 0;
    while ( true ) {
        //don't show first frame again when counting back down
        if ( !countingUp && i == 0 ) {
            break;
        }
        uint8_t * pixels = imgData[i].GetBits( imgData[i].mp4Input );
        AddFrame( pixels );
        //handle inc/dec without repeating last frame
        if ( countingUp ) {
            if ( i == imgData.size() - 1 ) {
                countingUp = false;
                i--;
            }
            else {
                i++;
            }
        }
        else {
            i--;
        }
    }
    Finalize();
    return 0; //TODO return error code
}
/**
add another frame to output video
@param imgData: the raw image data
@returns -1 = error encoding video frame, -2 = error writing frame
*/
int16_t MP4Writer::AddFrame( uint8_t * imgData )
{
    int error;
    AVCodecContext * ctx;
    AVFrame * frame;
    int gotPacket = 0;
    AVPacket pkt = { 0 };
    ctx = _ost.enc;
    av_init_packet( &pkt );
    frame = GetVideoFrame( imgData, &_ost, _width, _height );
    //encode the image
    error = avcodec_encode_video2( ctx, &pkt, frame, &gotPacket );
    if ( error < 0 ) {
        std::cout << "there was an error encoding the video frame" << std::endl;
        return -1;
    }
    //write the frame.  NOTE: this doesn't kick in until the encoder has received a certain number of frames
    if ( gotPacket ) {
        error = WriteFrame( _formatCtx, &ctx->time_base, _ost.st, &pkt );
        if ( error < 0 ) {
            std::cout << "the video frame could not be written" << std::endl;
            return -2;
        }
    }
    return 0;
}
/**
finalize output video and cleanup
*/
int16_t MP4Writer::Finalize()
{
    av_write_trailer( _formatCtx );
    avcodec_free_context( &_ost.enc );
    av_frame_free( &_ost.frame );
    av_frame_free( &_ost.tmp_frame );
    avio_closep( &_formatCtx->pb );
    avformat_free_context( _formatCtx );
    sws_freeContext( _ost.sws_ctx );
    swr_free( &_ost.swr_ctx );
    return 0;
}

Usage

#include <FreeImage.h>
#include <MP4Writer.h>
#include <vector>
struct ImgData
{
    unsigned int width;
    unsigned int height;
    std::string path;
    FIBITMAP * mp4Input;
    uint8_t * GetBits( FIBITMAP * bmp ) { return FreeImage_GetBits( bmp ); }
};
int main()
{
    std::vector<ImgData> imgDataVec;
    //load images and push to imgDataVec
    MP4Writer mp4Writer;
    mp4Writer.SetHeight( 1200 ); //assumes 1:1 aspect ratio
    mp4Writer.Init();
    mp4Writer.SetOutput( "test.mp4" );
    mp4Writer.Write( imgDataVec );
}

I don't see anywhere in that code where you flush the codec. You need to flush the codec before writing the trailer, etc., so that incomplete GOPs and frames delayed for any other reason are forced out of the codec. See any of the encoding examples included with the ffmpeg documentation for the proper way to do this (e.g. https://github.com/FFmpeg/FFmpeg/blob/6d7192bcb7bbab17dc194e8dbb56c208bced0a92/doc/examples/encode_video.c#L166).
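
For illustration, here is a minimal, untested sketch of what that flush could look like, kept in the style of the code above and using the same deprecated avcodec_encode_video2() API it already relies on. FlushEncoder is a hypothetical helper (it is not part of the original code and would also need a declaration in MP4Writer.h); it would be called at the start of MP4Writer::Finalize(), before av_write_trailer():

int16_t MP4Writer::FlushEncoder()
{
    AVCodecContext * ctx = _ost.enc;
    int gotPacket = 0;
    do {
        AVPacket pkt = { 0 };
        av_init_packet( &pkt );
        //passing NULL instead of a frame puts the encoder into drain mode,
        //forcing out any frames it is still holding back
        int error = avcodec_encode_video2( ctx, &pkt, NULL, &gotPacket );
        if ( error < 0 ) {
            std::cout << "there was an error flushing the encoder" << std::endl;
            return -1;
        }
        if ( gotPacket ) {
            error = WriteFrame( _formatCtx, &ctx->time_base, _ost.st, &pkt );
            if ( error < 0 ) {
                std::cout << "a delayed frame could not be written" << std::endl;
                return -2;
            }
        }
    } while ( gotPacket ); //stop once the encoder has nothing left to emit
    return 0;
}

With the newer avcodec_send_frame()/avcodec_receive_packet() API, the equivalent is to send a NULL frame once and then receive packets until the encoder returns AVERROR_EOF.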
