I have been using the FFmpeg library for RGB->YUV420 conversion. I tried the sws_scale function, but the results were not good. Now I have decided to convert each pixel individually, using the color-space conversion formulas. The following code gets me a few frames and gives me access to the individual R, G, B values of each pixel:
// Read frames and save the first five frames to disk
i = 0;
while ((av_read_frame(pFormatCtx, &packet) >= 0) && (i < 5))
{
    // Is this a packet from the video stream?
    if (packet.stream_index == videoStreamIdx)
    {
        // Decode the video frame
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
        // Did we get a complete video frame?
        if (frameFinished)
        {
            i++;
            // Convert the decoded frame to packed RGB24
            sws_scale(img_convert_ctx, (const uint8_t * const *)pFrame->data,
                      pFrame->linesize, 0, pCodecCtx->height,
                      pFrameRGB->data, pFrameRGB->linesize);

            int x, y, R, G, B;
            uint8_t *p;
            for (y = 0; y < pCodecCtx->height; y++)
            {
                // Restart at each row: linesize[0] may include padding beyond width*3
                p = pFrameRGB->data[0] + y * pFrameRGB->linesize[0];
                for (x = 0; x < pCodecCtx->width; x++)
                {
                    R = *p++;
                    G = *p++;
                    B = *p++;
                    printf(" %d-%d-%d ", R, G, B);
                }
            }
            SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height, i);
        }
    }
    // Free the packet that was allocated by av_read_frame
    av_free_packet(&packet);
}
I read online that to convert RGB->YUV420 (or the other way around) you should first convert to the YUV444 format, i.e. RGB->YUV444->YUV420. How can I implement this in C++?
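Here is a minimal sketch of that two-step conversion, assuming packed RGB24 input (as produced by the sws_scale call above), one common integer approximation of the BT.601 limited-range formulas, and even frame dimensions. The function name RGB24ToYUV420P, the clamp_u8 helper, and the tightly packed yPlane/uPlane/vPlane output buffers are my own names for illustration, not FFmpeg API:

#include <stdint.h>
#include <stdlib.h>

static uint8_t clamp_u8(int v) { return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v)); }

/* Step 1: RGB24 -> YUV444 per pixel (integer BT.601, limited range).
 * Step 2: YUV444 -> YUV420 by averaging each 2x2 block of U and V.
 * Assumes even width and height and tightly packed output planes. */
void RGB24ToYUV420P(const uint8_t *rgb, int rgbStride,
                    uint8_t *yPlane, uint8_t *uPlane, uint8_t *vPlane,
                    int width, int height)
{
    /* Full-resolution chroma for the intermediate YUV444 stage */
    uint8_t *u444 = malloc((size_t)width * height);
    uint8_t *v444 = malloc((size_t)width * height);
    if (!u444 || !v444) { free(u444); free(v444); return; }

    for (int y = 0; y < height; y++) {
        const uint8_t *p = rgb + y * rgbStride;   /* packed R,G,B per pixel */
        for (int x = 0; x < width; x++) {
            int R = *p++, G = *p++, B = *p++;
            yPlane[y * width + x] = clamp_u8((( 66 * R + 129 * G +  25 * B + 128) >> 8) +  16);
            u444  [y * width + x] = clamp_u8(((-38 * R -  74 * G + 112 * B + 128) >> 8) + 128);
            v444  [y * width + x] = clamp_u8(((112 * R -  94 * G -  18 * B + 128) >> 8) + 128);
        }
    }

    /* 4:2:0 subsampling: one U and one V value per 2x2 block of pixels */
    for (int y = 0; y < height; y += 2) {
        for (int x = 0; x < width; x += 2) {
            int i = y * width + x;
            uPlane[(y / 2) * (width / 2) + x / 2] =
                (uint8_t)((u444[i] + u444[i + 1] + u444[i + width] + u444[i + width + 1] + 2) / 4);
            vPlane[(y / 2) * (width / 2) + x / 2] =
                (uint8_t)((v444[i] + v444[i + 1] + v444[i + width] + v444[i + width + 1] + 2) / 4);
        }
    }

    free(u444);
    free(v444);
}

With tightly packed planes like this, the Y, U and V buffers written back to back already form a raw I420 file.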
Also, here is the SaveFrame() function used above. I guess it will also have to change a bit, since YUV420 stores the data differently. How do I handle that?
void SaveFrame(AVFrame *pFrame, int width, int height, int iFrame)
{
    FILE *pFile;
    char szFilename[32];
    int y;

    // Open file
    sprintf(szFilename, "frame%d.ppm", iFrame);
    pFile = fopen(szFilename, "wb");
    if (pFile == NULL)
        return;

    // Write PPM header
    fprintf(pFile, "P6\n%d %d\n255\n", width, height);

    // Write pixel data row by row (linesize may be larger than width*3)
    for (y = 0; y < height; y++)
        fwrite(pFrame->data[0] + y * pFrame->linesize[0], 1, width * 3, pFile);

    // Close file
    fclose(pFile);
}
Can anyone suggest something? Many thanks!
void SaveFrameYUV420P(AVFrame *pFrame, int width, int height, int iFrame)
{
    FILE *pFile;
    char szFilename[32];
    int y;

    // Open file
    sprintf(szFilename, "frame%d.yuv", iFrame);
    pFile = fopen(szFilename, "wb");
    if (pFile == NULL)
        return;

    // Write the three planes: Y at full resolution, U and V at width/2 x height/2.
    // Writing row by row with linesize[] skips any per-row padding FFmpeg may add.
    for (y = 0; y < height; y++)
        fwrite(pFrame->data[0] + y * pFrame->linesize[0], 1, width, pFile);
    for (y = 0; y < height / 2; y++)
        fwrite(pFrame->data[1] + y * pFrame->linesize[1], 1, width / 2, pFile);
    for (y = 0; y < height / 2; y++)
        fwrite(pFrame->data[2] + y * pFrame->linesize[2], 1, width / 2, pFile);

    // Close file
    fclose(pFile);
}
On Windows you can use IrfanView to view frames saved this way: open the file as RAW, choose 24 bpp, supply the width and height, and tick the "yuv420" box.
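For completeness, here is a hypothetical glue snippet showing how a per-pixel converter such as the RGB24ToYUV420P() sketch from the question side could feed this saver. It assumes even frame dimensions and an FFmpeg version that provides av_frame_alloc()/av_image_alloc(); an alignment of 1 keeps the planes tightly packed, which is what that helper expects. It requires #include <libavutil/imgutils.h> at the top of the file.

// Allocate a YUV420P frame, fill it from the RGB24 buffer, and save it.
AVFrame *pFrameYUV = av_frame_alloc();
if (pFrameYUV &&
    av_image_alloc(pFrameYUV->data, pFrameYUV->linesize,
                   pCodecCtx->width, pCodecCtx->height,
                   AV_PIX_FMT_YUV420P, 1) >= 0)
{
    // Hypothetical helper sketched earlier: packed RGB24 in, three packed planes out
    RGB24ToYUV420P(pFrameRGB->data[0], pFrameRGB->linesize[0],
                   pFrameYUV->data[0], pFrameYUV->data[1], pFrameYUV->data[2],
                   pCodecCtx->width, pCodecCtx->height);

    SaveFrameYUV420P(pFrameYUV, pCodecCtx->width, pCodecCtx->height, i);

    av_freep(&pFrameYUV->data[0]);   // frees the buffer from av_image_alloc
}
av_frame_free(&pFrameYUV);

(If sws_scale cooperates, a second SwsContext targeting AV_PIX_FMT_YUV420P would do the same job without the manual math.)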