I have a client application that receives a video stream from a server over UDP or TCP sockets.
Originally, when it was written for .NET 2.0, the code used BeginReceive/EndReceive with IAsyncResult. The client displays each video in its own window and talks to the server on its own thread. However, since the client is supposed to run for long periods and there may be 64 simultaneous video streams, the IAsyncResult object allocated on every invocation of the data-received callback amounts to a "memory leak": the application eventually runs out of memory because the GC cannot release the blocks in time. I verified this with the VS 2010 performance analyzer.
So I modified the code to use SocketAsyncEventArgs and ReceiveFromAsync (the UDP case). However, I still see memory growth in:
System.Net.Sockets.Socket.ReceiveFromAsync(class System.Net.Sockets.SocketAsyncEventArgs)
I have read all the samples and posts about implementing this code, but I still have no solution.
Here is what my code looks like:
// class data members
private byte[] m_Buffer = new byte[UInt16.MaxValue];
private SocketAsyncEventArgs m_ReadEventArgs = null;
private IPEndPoint m_EndPoint; // local endpoint from the caller
Initialization:
m_Socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
m_Socket.Bind(m_EndPoint);
m_Socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReceiveBuffer, MAX_SOCKET_RECV_BUFFER);
//
// initialize the socket event args structure.
//
m_ReadEventArgs = new SocketAsyncEventArgs();
m_ReadEventArgs.Completed += new EventHandler<SocketAsyncEventArgs>(readEventArgs_Completed);
m_ReadEventArgs.SetBuffer(m_Buffer, 0, m_Buffer.Length);
m_ReadEventArgs.RemoteEndPoint = new IPEndPoint(IPAddress.Any, 0);
m_ReadEventArgs.AcceptSocket = m_Socket;
Starting the read process:
bool waitForEvent = m_Socket.ReceiveFromAsync(m_ReadEventArgs);
if (!waitForEvent)
{
readEventArgs_Completed(this, m_ReadEventArgs);
}
The read completion handler:
private void readEventArgs_Completed(object sender, SocketAsyncEventArgs e)
{
if (e.BytesTransferred == 0 || e.SocketError != SocketError.Success)
{
//
// we got error on the socket or connection was closed
//
Close();
return;
}
try
{
// try to process a new video frame if enough data was read
base.ProcessPacket(m_Buffer, e.Offset, e.BytesTransferred);
}
catch (Exception ex)
{
// log the error
}
bool willRaiseEvent = m_Socket.ReceiveFromAsync(e);
if (!willRaiseEvent)
{
readEventArgs_Completed(this, e);
}
}
Basically the code works fine and I can see the video streams, but this leak is a real pain.
What am I missing?
Thanks a lot!
Instead of recursively calling readEventArgs_Completed when !willRaiseEvent, use a goto to jump back to the top of the method. I noticed that when I had a pattern similar to yours, I was slowly chewing up stack space.
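The same fix can also be written as a loop on ReceiveFromAsync's return value instead of a goto: when the call returns false the operation completed synchronously, so you process the data inline and re-arm in a loop rather than re-entering the handler. Below is a minimal self-contained sketch of that pattern over loopback UDP; the names (ReceiveLoopDemo and friends) are hypothetical, not the asker's actual class.

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

public class ReceiveLoopDemo
{
    static readonly byte[] buffer = new byte[UInt16.MaxValue];
    static Socket socket;
    // Expect three datagrams in this demo.
    static readonly CountdownEvent received = new CountdownEvent(3);

    public static void Main()
    {
        socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        socket.Bind(new IPEndPoint(IPAddress.Loopback, 0));
        var local = (IPEndPoint)socket.LocalEndPoint;

        // One reusable SocketAsyncEventArgs, as in the question.
        var e = new SocketAsyncEventArgs();
        e.SetBuffer(buffer, 0, buffer.Length);
        e.RemoteEndPoint = new IPEndPoint(IPAddress.Any, 0);
        e.Completed += OnCompleted;

        StartReceive(e);

        // Fire three datagrams at the receiver from a second socket.
        using (var sender = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp))
        {
            for (int i = 0; i < 3; i++)
                sender.SendTo(new byte[] { (byte)i }, local);
        }

        received.Wait(5000);
        Console.WriteLine("datagrams received: " + (3 - received.CurrentCount));
    }

    static void StartReceive(SocketAsyncEventArgs e)
    {
        // ReceiveFromAsync returns false when the operation completed
        // synchronously; loop instead of recursing so the stack stays flat.
        while (!socket.ReceiveFromAsync(e))
            ProcessReceive(e);
    }

    static void OnCompleted(object sender, SocketAsyncEventArgs e)
    {
        ProcessReceive(e);   // handle the asynchronous completion
        StartReceive(e);     // re-arm; loops on further synchronous completions
    }

    static void ProcessReceive(SocketAsyncEventArgs e)
    {
        if (e.SocketError == SocketError.Success && e.BytesTransferred > 0)
            received.Signal();
    }
}
```

The while (!socket.ReceiveFromAsync(e)) loop keeps the stack flat no matter how many datagrams complete synchronously in a row; a goto back to the top of the handler achieves the same thing with the method structured as in the question.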