On-the-fly stream decompression causes artifacts when the buffer is larger than 1 byte

I am currently testing several decompression libraries for a project I'm involved in, to decompress HTTP file streams on the fly. I have tried two very promising libraries and found an issue that appears in both of them.

This is what I am doing:

  • video.avi compressed as video.zip (~20 MB) on an HTTP server: test.com/video.zip
  • An HttpWebRequest to read the stream from the server
  • Write the HttpWebRequest ResponseStream data into a MemoryStream
  • Let the decompression library read from the MemoryStream
  • Read the decompressed file stream while the HttpWebRequest is still downloading

The whole idea works fine: I can decompress the zipped video and stream it directly into VLC's stdin, where it renders nicely. However, I have to use a read buffer of one byte on the decompression library. Any buffer larger than one byte causes parts of the uncompressed data stream to be cut off. For a test I wrote the decompressed stream into a file and compared it against the original video.avi; some data is simply skipped by the decompression. When streaming this broken data into VLC it causes a lot of video artifacts, and the playback speed drops greatly as well.
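For reference, a minimal sketch of the file comparison test described above (the helper name and file handling are my own illustration, not part of the project; requires System.IO):

// Illustrative helper: dump a decompressed stream to disk so it can be
// binary-compared (e.g. with "fc /b") against the original video.avi.
static void DumpToFile(Stream decompressed, string path)
{
    using (FileStream dump = File.Create(path))
    {
        byte[] buffer = new byte[1]; // one byte: the only size that produced intact output
        int n;
        while ((n = decompressed.Read(buffer, 0, buffer.Length)) > 0)
            dump.Write(buffer, 0, n);
    }
}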

If I knew the size of what is available to read I could adjust my buffer accordingly, but no library exposes this information, so all I can do is read the data with a one-byte buffer. Maybe my approach is wrong? Or maybe I'm overlooking something?

Here is some example code (requires VLC):

ICSharpCode.SharpZipLib (http://icsharpcode.github.io/SharpZipLib/)

using System.Diagnostics;
using System.IO;
using System.Net;
using ICSharpCode.SharpZipLib.Zip;

static void Main(string[] args)
    {
        // Initialise VLC
        Process vlc = new Process()
        {
            StartInfo =
            {
                FileName = @"C:Program FilesVideoLANvlc.exe", // Adjust as required to test the code
                RedirectStandardInput = true,
                UseShellExecute = false,
                Arguments = "-"
            }
        };
        vlc.Start();
        Stream outStream = vlc.StandardInput.BaseStream;
        // Get source stream
        HttpWebRequest stream = (HttpWebRequest)WebRequest.Create("http://codefreak.net/~daniel/apps/stream60s-large.zip");
        Stream compressedVideoStream = stream.GetResponse().GetResponseStream();
        // Create local decompression loop
        MemoryStream compressedLoopback = new MemoryStream();
        ZipInputStream zipStream = new ZipInputStream(compressedLoopback);
        ZipEntry currentEntry = null;
        byte[] videoStreamBuffer = new byte[8192]; // 8 KB read buffer
        int read = 0;
        long totalRead = 0;
        while ((read = compressedVideoStream.Read(videoStreamBuffer, 0, videoStreamBuffer.Length)) > 0)
        {
            // Write compressed video stream into compressed loopback without affecting current read position
            long previousPosition = compressedLoopback.Position; // Store current read position
            compressedLoopback.Position = totalRead; // Jump to last write position
            totalRead += read; // Increase last write position by current read size
            compressedLoopback.Write(videoStreamBuffer, 0, read); // Write data into loopback
            compressedLoopback.Position = previousPosition; // Restore reading position
            // If not already, move to first entry
            if (currentEntry == null)
                currentEntry = zipStream.GetNextEntry();
            byte[] outputBuffer = new byte[1]; // Decompression read buffer, this is the bad one!
            int zipRead = 0;
            while ((zipRead = zipStream.Read(outputBuffer, 0, outputBuffer.Length)) > 0)
                outStream.Write(outputBuffer, 0, outputBuffer.Length); // Write directly to VLC stdin
        }
    }

SharpCompress (https://github.com/adamhathcock/sharpcompress)

// Namespaces as in recent SharpCompress releases; older versions used SharpCompress.Reader
using System.Diagnostics;
using System.IO;
using System.Net;
using SharpCompress.Readers;
using SharpCompress.Readers.Zip;

static void Main(string[] args)
    {
        // Initialise VLC
        Process vlc = new Process()
        {
            StartInfo =
            {
                FileName = @"C:Program FilesVideoLANvlc.exe", // Adjust as required to test the code
                RedirectStandardInput = true,
                UseShellExecute = false,
                Arguments = "-"
            }
        };
        vlc.Start();
        Stream outStream = vlc.StandardInput.BaseStream;
        // Get source stream
        HttpWebRequest stream = (HttpWebRequest)WebRequest.Create("http://codefreak.net/~daniel/apps/stream60s-large.zip");
        Stream compressedVideoStream = stream.GetResponse().GetResponseStream();
        // Create local decompression loop
        MemoryStream compressedLoopback = new MemoryStream();
        ZipReader zipStream = null;
        EntryStream currentEntry = null;
        byte[] videoStreamBuffer = new byte[8192]; // 8 KB read buffer
        int read = 0;
        long totalRead = 0;
        while ((read = compressedVideoStream.Read(videoStreamBuffer, 0, videoStreamBuffer.Length)) > 0)
        {
            // Write compressed video stream into compressed loopback without affecting current read position
            long previousPosition = compressedLoopback.Position; // Store current read position
            compressedLoopback.Position = totalRead; // Jump to last write position
            totalRead += read; // Increase last write position by current read size
            compressedLoopback.Write(videoStreamBuffer, 0, read); // Write data into loopback
            compressedLoopback.Position = previousPosition; // Restore reading position
            // Open stream after writing to it because otherwise it will not be able to identify the compression type
            if (zipStream == null)
                zipStream = (ZipReader)ReaderFactory.Open(compressedLoopback); // Cast to ZipReader, as we know the type
            // If not already, move to first entry
            if (currentEntry == null)
            {
                zipStream.MoveToNextEntry();
                currentEntry = zipStream.OpenEntryStream();
            }
            byte[] outputBuffer = new byte[1]; // Decompression read buffer, this is the bad one!
            int zipRead = 0;
            while ((zipRead = currentEntry.Read(outputBuffer, 0, outputBuffer.Length)) > 0)
                outStream.Write(outputBuffer, 0, outputBuffer.Length); // Write directly to VLC stdin
        }
    }

To test this code I recommend setting the output buffer of SharpZipLib to 2 bytes and that of SharpCompress to 8 bytes. You will see the artifacts and also that the playback speed of the video is wrong; the seek time should always line up with the number counted up in the video.

I have not found any good explanation of why a larger outputBuffer for reading from the decompression libraries causes these problems, nor any workaround other than keeping the buffer as small as possible.

So my question is: what am I doing wrong, or is this a general problem when reading compressed files from a stream? And how can I increase the outputBuffer while still reading the correct data?

Any help is greatly appreciated!

Greetings, Gachl

You only need to write as many bytes as you have read. Writing the whole buffer size adds extra bytes (whatever happened to be in the buffer previously). zipStream.Read is not required to read as many bytes as you request.

while ((zipRead = zipStream.Read(outputBuffer, 0, outputBuffer.Length)) > 0)
    outStream.Write(outputBuffer, 0, zipRead); // Write directly to VLC stdin
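With the count fixed, the output buffer can be any size. A sketch of the corrected inner loop with a larger buffer (8192 is an arbitrary choice, not a required value):

byte[] outputBuffer = new byte[8192]; // any size works once only zipRead bytes are written
int zipRead;
while ((zipRead = zipStream.Read(outputBuffer, 0, outputBuffer.Length)) > 0)
    outStream.Write(outputBuffer, 0, zipRead); // write exactly what was read

On .NET 4 and later, Stream.CopyTo implements this same pattern internally: read into a buffer, then write exactly the number of bytes that Read returned.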
