I don't see how the pooled SocketAsyncEventArgs style helps me reduce memory consumption in a server that services many concurrent connections.
Yes, it provides an alternative to MS's Begin/End style, which the MSDN page mentioned earlier describes as requiring that "a System.IAsyncResult object be allocated for each asynchronous socket operation".
My initial research led me to believe that, for some reason, it would let me allocate just a handful of byte arrays and share them among thousands of concurrently connected clients.
But if I want to wait for data on thousands of client connections, I have to call ReceiveAsync thousands of times, each time supplying a different byte array (wrapped in a SocketAsyncEventArgs), and those thousands of arrays will then just sit there until the clients decide to send, which might be ten seconds from now.
So unless I call ReceiveAsync right when a client is sending data (or shortly afterwards, relying on some network-stack buffer?), which is the client's decision and unpredictable from the server's point of view, I'm out of luck: the byte array sits there doing nothing, waiting for the client to move its butt.
What I would like is to listen on thousands of connections with a single byte array (or perhaps one per listening thread, if parallelizing makes sense). As soon as any of those connections sends something (which has to land in some network-stack buffer first anyway), it gets copied into that array, my listener is invoked, and once the listener is done the array can be reused.
Is this really impossible with the Socket.*Async() methods?
Is anything like this possible with .NET's socket libraries?
Sharing the same memory across multiple socket operations is not possible (or rather, if you do share it, you get undefined results).
You can avoid the problem by initially reading just 1 byte. When that read completes, more data is very likely about to follow, so for the next read you can use a more efficient size such as 4KB (or ask the DataAvailable property; this is about the only valid use case for that property).
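To make that concrete, here is a rough sketch of the 1-byte probe expressed with Socket.ReceiveAsync; the method name, the 4KB follow-up size and the trailing comments are just illustrative assumptions, not a finished implementation:
void ArmOneByteProbe(Socket socket)
{
    var probeArgs = new SocketAsyncEventArgs();
    probeArgs.SetBuffer(new byte[1], 0, 1);        // tiny probe buffer per idle connection
    probeArgs.Completed += (sender, args) =>
    {
        if (args.BytesTransferred == 0)
            return;                                // remote end closed the connection
        // one byte arrived, so more is probably queued; only now dedicate a
        // bigger buffer (socket.Available hints at how much is already waiting)
        var bigBuffer = new byte[4096];            // or borrow one from a pool, see below
        bigBuffer[0] = args.Buffer[0];
        args.SetBuffer(bigBuffer, 1, bigBuffer.Length - 1);
        // ... issue the next ReceiveAsync with bigBuffer, hand the result to the
        //     application, then swap the 1-byte probe back in for the next wait
    };
    if (!socket.ReceiveAsync(probeArgs))
    {
        // completed synchronously; real code would run the same handler here
    }
}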
For the buffers themselves, the usual pooling pattern applies:
a) If a pooled instance is available, use it; otherwise create a new one.
b) When you are done with it, return the instance to the pool so it can be reused.
Eventually the pool grows large enough to serve all requests. Alternatively, you can give the pool a maximum instance count and block whenever an instance is requested while the maximum has been reached and the pool is currently empty. That policy keeps the pool from growing without bound.
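What such a pool might look like is sketched below: a bounded, blocking Pool<T> with Take/Return, which is also the shape the code further down assumes. The constructor, the factory delegate and the maximum-count parameter are my own guesses at the details, not an existing API:
class Pool<T>
{
    // needs System, System.Collections.Generic and System.Threading
    private readonly Func<T> _factory;
    private readonly int _maxInstances;
    private readonly Stack<T> _idle = new Stack<T>();
    private int _created;

    public Pool(Func<T> factory, int maxInstances = int.MaxValue)
    {
        _factory = factory;
        _maxInstances = maxInstances;
    }

    public T Take()
    {
        lock (_idle)
        {
            while (true)
            {
                if (_idle.Count > 0)
                    return _idle.Pop();       // a) reuse a pooled instance ...
                if (_created < _maxInstances)
                {
                    _created++;               // ... otherwise create a new one
                    return _factory();
                }
                Monitor.Wait(_idle);          // cap reached and pool empty: block until Return()
            }
        }
    }

    public void Return(T instance)
    {
        lock (_idle)
        {
            _idle.Push(instance);             // b) make the instance available for reuse
            Monitor.Pulse(_idle);             // wake one blocked Take(), if any
        }
    }
}
A pool of receive buffers for the code below could then be created as, say, new Pool<byte[]>(() => new byte[4096], 1000).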
Below is a sketch of an implementation that incorporates usr's excellent byte[1] workaround suggestion and shows how the somewhat cumbersome Socket.xxxAsync methods can be hidden entirely behind a SimpleAsyncSocket, without sacrificing performance.
A simple asynchronous echo server using SimpleAsyncSocket could look like this:
readonly static Encoding Enc = new UTF8Encoding(false);
SimpleAsyncSocket _simpleSocket;
Pool<byte[]> _receiveBufferPool; // shared pool of receive buffers (e.g. 4 KB each), created elsewhere

void StartEchoServer(Socket socket)
{
    _simpleSocket = new SimpleAsyncSocket(socket, OnSendCallback,
                                          _receiveBufferPool, OnReceiveCallback);
}

bool OnReceiveCallback(SimpleAsyncSocket socket, ArraySegment<byte> bytes)
{
    // decode whatever arrived and echo it straight back
    var str = Enc.GetString(bytes.Array, bytes.Offset, bytes.Count);
    _simpleSocket.SendAsync(new ArraySegment<byte>(Enc.GetBytes(str)));
    return false; // done with the receive buffer; hand it back to the pool
}

void OnSendCallback(SimpleAsyncSocket asyncSocket,
    ICollection<ArraySegment<byte>> collection, SocketError arg3)
{
    // just demonstrates that the sent segments are available in the callback
    var bytes = collection.First(); // requires System.Linq
    var str = Enc.GetString(bytes.Array, bytes.Offset, bytes.Count);
}
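For completeness, the Socket handed to StartEchoServer would come from an ordinary accept loop. A hypothetical wiring (one accepted connection at a time, matching the single _simpleSocket field above) could look like this; none of these names come from the answer itself, and it requires System.Net in addition to System.Net.Sockets:
void RunListener(int port)
{
    var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    listener.Bind(new IPEndPoint(IPAddress.Any, port));
    listener.Listen(128);

    var acceptArgs = new SocketAsyncEventArgs();
    acceptArgs.Completed += (sender, args) => OnAccepted(listener, args);
    if (!listener.AcceptAsync(acceptArgs))
        OnAccepted(listener, acceptArgs);
}

void OnAccepted(Socket listener, SocketAsyncEventArgs args)
{
    var client = args.AcceptSocket;      // the freshly accepted connection
    StartEchoServer(client);             // wrap it in a SimpleAsyncSocket

    args.AcceptSocket = null;            // must be cleared before reusing the args
    if (!listener.AcceptAsync(args))     // keep accepting further clients
        OnAccepted(listener, args);      // (synchronous completions recurse here,
                                         //  the same stack concern as in the class below)
}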
And here is the sketch of the implementation itself:
using System;
using System.Collections.Generic;
using System.Net.Sockets;
using System.Threading;

class SimpleAsyncSocket
{
    private readonly Socket _socket;
    private readonly Pool<byte[]> _receiveBufferPool;
    private readonly SocketAsyncEventArgs _recvAsyncEventArgs;
    private readonly SocketAsyncEventArgs _sendAsyncEventArgs;
    // 1-byte probe buffer that sits on the socket while we wait for data,
    // so a connection only holds a pooled buffer while data is actually flowing
    private readonly byte[] _waitForReceiveEventBuffer = new byte[1];
    private readonly Queue<ArraySegment<byte>> _sendBuffers = new Queue<ArraySegment<byte>>();

    public SimpleAsyncSocket(Socket socket, Action<SimpleAsyncSocket, ICollection<ArraySegment<byte>>, SocketError> sendCallback,
        Pool<byte[]> receiveBufferPool, Func<SimpleAsyncSocket, ArraySegment<byte>, bool> receiveCallback)
    {
        if (socket == null) throw new ArgumentNullException("socket");
        if (sendCallback == null) throw new ArgumentNullException("sendCallback");
        if (receiveBufferPool == null) throw new ArgumentNullException("receiveBufferPool");
        if (receiveCallback == null) throw new ArgumentNullException("receiveCallback");
        _socket = socket;
        _sendAsyncEventArgs = new SocketAsyncEventArgs();
        _sendAsyncEventArgs.UserToken = sendCallback;
        _sendAsyncEventArgs.Completed += SendCompleted;
        _receiveBufferPool = receiveBufferPool;
        _recvAsyncEventArgs = new SocketAsyncEventArgs();
        _recvAsyncEventArgs.UserToken = receiveCallback;
        _recvAsyncEventArgs.Completed += ReceiveCompleted;
        // start with the 1-byte probe so no pooled buffer is tied up yet
        _recvAsyncEventArgs.SetBuffer(_waitForReceiveEventBuffer, 0, 1);
        ReceiveAsyncWithoutTheHassle(_recvAsyncEventArgs);
    }

    public void SendAsync(ArraySegment<byte> buffer)
    {
        lock (_sendBuffers)
            _sendBuffers.Enqueue(buffer);
        StartOrContinueSending();
    }

    private void StartOrContinueSending(bool calledFromCompleted = false)
    {
        lock (_waitForReceiveEventBuffer) // reuse unrelated object for locking
        {
            if (!calledFromCompleted && _sendAsyncEventArgs.BufferList != null)
                return; // still sending
            List<ArraySegment<byte>> buffers = null;
            lock (_sendBuffers)
            {
                if (_sendBuffers.Count > 0)
                {
                    buffers = new List<ArraySegment<byte>>(_sendBuffers);
                    _sendBuffers.Clear();
                }
            }
            _sendAsyncEventArgs.BufferList = buffers; // null here means nothing left to send
            if (buffers == null)
                return;
        }
        if (!_socket.SendAsync(_sendAsyncEventArgs))
            // Someone on stackoverflow claimed that invoking the Completed
            // handler synchronously might end up blowing the stack, which
            // does sound possible. To avoid that guy finding my code and
            // downvoting me for it (and maybe just because it's the right
            // thing to do), let's leave the call stack via the ThreadPool
            ThreadPool.QueueUserWorkItem(state => SendCompleted(this, _sendAsyncEventArgs));
    }

    private void SendCompleted(object sender, SocketAsyncEventArgs args)
    {
        switch (args.LastOperation)
        {
            case SocketAsyncOperation.Send:
            {
                try
                {
                    var bytesTransferred = args.BytesTransferred;
                    var sendCallback = (Action<SimpleAsyncSocket, ICollection<ArraySegment<byte>>, SocketError>)args.UserToken;
                    // for the moment, I believe the following commented-out lock is not
                    // necessary, but still have to think it through properly
                    // lock (_waitForReceiveEventBuffer) // reuse unrelated object for locking
                    {
                        sendCallback(this, args.BufferList, args.SocketError);
                    }
                    StartOrContinueSending(true);
                }
                catch (Exception e)
                {
                    args.BufferList = null;
                    // todo: log and disconnect
                }
                break;
            }
            case SocketAsyncOperation.None:
                break;
            default:
                throw new Exception("Unsupported operation: " + args.LastOperation);
        }
    }

    private void ReceiveCompleted(object sender, SocketAsyncEventArgs args)
    {
        switch (args.LastOperation)
        {
            case SocketAsyncOperation.Receive:
            {
                var bytesTransferred = args.BytesTransferred;
                var buffer = args.Buffer;
                if (args.BytesTransferred == 0) // remote end closed connection
                {
                    args.SetBuffer(null, 0, 0);
                    if (buffer != _waitForReceiveEventBuffer)
                        _receiveBufferPool.Return(buffer);
                    // todo: disconnect event
                    return;
                }
                if (buffer == _waitForReceiveEventBuffer)
                {
                    if (args.BytesTransferred == 1)
                    {
                        // we received one byte, there's probably more!
                        var biggerBuffer = _receiveBufferPool.Take();
                        biggerBuffer[0] = _waitForReceiveEventBuffer[0];
                        args.SetBuffer(biggerBuffer, 1, biggerBuffer.Length - 1);
                        ReceiveAsyncWithoutTheHassle(args);
                    }
                    else
                        throw new Exception("What the heck");
                }
                else
                {
                    var callback = (Func<SimpleAsyncSocket, ArraySegment<byte>, bool>)args.UserToken;
                    bool calleeExpectsMoreDataImmediately = false;
                    bool continueReceiving = false;
                    try
                    {
                        var count = args.Offset == 1
                            // we set the first byte manually from _waitForReceiveEventBuffer
                            ? bytesTransferred + 1
                            : bytesTransferred;
                        calleeExpectsMoreDataImmediately = callback(this, new ArraySegment<byte>(buffer, 0, count));
                        continueReceiving = true;
                    }
                    catch (Exception e)
                    {
                        // todo: log and disconnect
                    }
                    finally
                    {
                        if (!calleeExpectsMoreDataImmediately)
                        {
                            // done with the pooled buffer: park the 1-byte probe again
                            // and hand the buffer back to the pool
                            args.SetBuffer(_waitForReceiveEventBuffer, 0, 1);
                            _receiveBufferPool.Return(buffer);
                        }
                        else
                        {
                            // keep the pooled buffer, but restart at offset 0 so the stale
                            // probe byte at index 0 is not counted into the next callback
                            args.SetBuffer(buffer, 0, buffer.Length);
                        }
                    }
                    if (continueReceiving)
                        ReceiveAsyncWithoutTheHassle(args);
                }
                break;
            }
            case SocketAsyncOperation.None:
                break;
            default:
                throw new Exception("Unsupported operation: " + args.LastOperation);
        }
    }

    private void ReceiveAsyncWithoutTheHassle(SocketAsyncEventArgs args)
    {
        if (!_socket.ReceiveAsync(args))
            // Someone on stackoverflow claimed that invoking the Completed
            // handler synchronously might end up blowing the stack, which
            // does sound possible. To avoid that guy finding my code and
            // downvoting me for it (and maybe just because it's the right
            // thing to do), let's leave the call stack via the ThreadPool
            ThreadPool.QueueUserWorkItem(state => ReceiveCompleted(this, args));
    }
}