Protocol buffers stream test: gRPC with one million messages



I am testing gRPC by building a list of one million items and sending all of them over a stream.

I have the following code on the client:

In my test, host = "localhost" and ipPort = 7777:

ManagedChannel comunicationChanel = ManagedChannelBuilder.forAddress(host, ipPort)
                .enableFullStreamDecompression().compressorRegistry(CompressorRegistry.getDefaultInstance())
                .decompressorRegistry(DecompressorRegistry.getDefaultInstance()).usePlaintext(true)
                .maxInboundMessageSize(200888896).build();

ListMessageSRVStub asyncStub = ListMessageSRVGrpc.newStub(comunicationChanel);

List<MessageValue> millionMessages = new ArrayList<MessageValue>();
        for (long i = 0L; i < 1000000; i++) {
            millionMessages.add(MessageValue.newBuilder().build());
        }
long before = System.currentTimeMillis();
        StreamObserver<MessageValue> requestObserver = asyncStub.recievetonm(responseObserverTonMessages);
        long i = 0;
        for (MessageValue messageValue : millionMessages) {
            requestObserver.onNext(messageValue);
            i++;
            if (i % 50000 == 0) {
                LOG.info("Sent: " + i);
            }
        }
        requestObserver.onCompleted();
        long total = System.currentTimeMillis() - before;
        LOG.info("Time = " + total);

But I get this exception:

Exception in thread "main" io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1879048487, max: 1894252544)
    at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:640)
    at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:594)
    at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:764)
    at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:740)
    at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:244)
    at io.netty.buffer.PoolArena.allocate(PoolArena.java:214)
    at io.netty.buffer.PoolArena.allocate(PoolArena.java:146)
    at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:324)
    at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:185)
    at io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:121)
    at io.grpc.netty.NettyWritableBufferAllocator.allocate(NettyWritableBufferAllocator.java:51)
    at io.grpc.internal.MessageFramer.writeKnownLengthUncompressed(MessageFramer.java:226)
    at io.grpc.internal.MessageFramer.writeUncompressed(MessageFramer.java:167)
    at io.grpc.internal.MessageFramer.writePayload(MessageFramer.java:140)
    at io.grpc.internal.AbstractStream.writeMessage(AbstractStream.java:52)
    at io.grpc.internal.ClientCallImpl.sendMessage(ClientCallImpl.java:438)
    at io.grpc.ForwardingClientCall.sendMessage(ForwardingClientCall.java:52)
    at io.grpc.ForwardingClientCall.sendMessage(ForwardingClientCall.java:52)
    at io.grpc.stub.ClientCalls$CallToStreamObserverAdapter.onNext(ClientCalls.java:320)
    at com.oesia.grpgtest.server.TestClient.tonsofMSG(TestClient.java:130)
    at com.oesia.grpgtest.server.TestClient.main(TestClient.java:146)

Is there any way to solve the problem of sending this much data?

Have you tried writing with a slow receiver in mind, along these lines:

public class GracefulWriteHandler extends ChannelInboundHandlerAdapter {
  @Override
  public void channelActive(ChannelHandlerContext ctx) {
    writeIfPossible(ctx.channel());
  }
  @Override
  public void channelWritabilityChanged(ChannelHandlerContext ctx) {
    writeIfPossible(ctx.channel());
  }
  private void writeIfPossible(Channel channel) {
    // needsToWrite is an application-level flag indicating pending data
    while (needsToWrite && channel.isWritable()) {
      channel.writeAndFlush(createMessage());
    }
  }
}

I solved it with this class, using our own response observer:

class OwnClientResponseObserver implements ClientResponseObserver<MessageValue, MessageValue> {
        private ClientCallStreamObserver<MessageValue> requestStream;
        private CountDownLatch countDownLatch;
        Iterator<MessageValue> iterator;
        public OwnClientResponseObserver(CountDownLatch countDownLatch, final Iterator<MessageValue> iterator) {
            this.countDownLatch = countDownLatch;
            this.iterator = iterator;
        }
        @Override
        public void onNext(MessageValue value) {
            LOG.info(value.toString());
        }
        @Override
        public void onError(Throwable t) {
            LOG.log(Level.SEVERE, "A savage error appears", t);
        }
        @Override
        public void onCompleted() {
            LOG.info("Finalized!!!");
        }
        @Override
        public void beforeStart(ClientCallStreamObserver<MessageValue> requestStream) {
            this.requestStream = requestStream;
            this.requestStream.disableAutoInboundFlowControl();
            this.requestStream.setOnReadyHandler(new Runnable() {
                @Override
                public void run() {
                    long i = 1L;
                    while (requestStream.isReady()) {
                        if (iterator.hasNext()) {
                            requestStream.onNext(iterator.next());
                            if (i % 1000 == 0) {
                                LOG.info("Checked in " + i);
                            }
                            i++;
                        } else {
                            requestStream.onCompleted();
                            countDownLatch.countDown();
                            return; // stop: calling onCompleted() again is illegal
                        }
                    }
                }
            });
        }
    }
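The key detail in the observer above is the drain loop: it sends only while `isReady()` reports that the transport can accept more, and it must stop as soon as `onCompleted()` has been called. Below is a minimal, self-contained sketch of that same drain pattern; `FakeStream` is a made-up stand-in for `ClientCallStreamObserver` (it is not part of gRPC), used only to simulate a transport whose buffer fills up:

```java
import java.util.ArrayDeque;
import java.util.Iterator;
import java.util.List;
import java.util.Queue;

public class OnReadyDrainDemo {

    // Hypothetical stand-in for ClientCallStreamObserver: accepts a bounded
    // number of messages per readiness window, like a transport whose
    // outbound buffer fills up.
    static class FakeStream {
        final Queue<Integer> sent = new ArrayDeque<>();
        int window;                 // messages accepted before isReady() flips false
        boolean completed = false;

        FakeStream(int window) { this.window = window; }
        boolean isReady() { return window > 0 && !completed; }
        void onNext(int v) { sent.add(v); window--; }
        void onCompleted() { completed = true; }
        void refill(int n) { window = n; } // transport drained; ready again
    }

    // The drain loop from the answer, applied to the fake stream. In real
    // gRPC this body runs inside the onReady handler, which is re-invoked
    // each time the stream becomes ready again.
    static void drain(FakeStream stream, Iterator<Integer> it) {
        while (stream.isReady()) {
            if (it.hasNext()) {
                stream.onNext(it.next());
            } else {
                stream.onCompleted();
                return; // stop once the stream is finished
            }
        }
    }

    public static void main(String[] args) {
        FakeStream stream = new FakeStream(3);
        Iterator<Integer> it = List.of(1, 2, 3, 4, 5).iterator();

        drain(stream, it);  // first readiness window: sends 3, then backs off
        if (stream.sent.size() != 3) throw new AssertionError();

        stream.refill(3);   // transport ready again -> handler would re-run
        drain(stream, it);  // sends the remaining 2, then completes
        if (stream.sent.size() != 5 || !stream.completed) throw new AssertionError();
        System.out.println("sent=" + stream.sent + " completed=" + stream.completed);
    }
}
```

This is why the `return` after `onCompleted()` matters: without it, the loop keeps spinning while the stream stays ready and ends up calling `onCompleted()` a second time.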
