Kafka reports an invalid receive size on connections from the Hyperledger Fabric orderer

I am setting up a new cluster for Hyperledger Fabric on EKS. The cluster has 4 Kafka nodes, 3 ZooKeeper nodes, 4 peers, 3 orderers, and 1 CA. All of the containers start up individually, and the Kafka/ZooKeeper backend is stable. I can SSH into any Kafka or ZooKeeper node and check connectivity to any other node, create topics, publish messages, and so on. Kafka is reachable from all of the orderers via Telnet.
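
To give a concrete picture, the sanity checks I ran look roughly like this (hostnames such as zookeeper0 and kafka0 are placeholders for my actual pod names):

# Run from any broker: create a test topic, then publish to it
kafka-topics.sh --zookeeper zookeeper0:2181 --create --topic sanity-check --partitions 1 --replication-factor 3
kafka-console-producer.sh --broker-list kafka0:9092 --topic sanity-check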

When I try to create a channel, I get the following error from the orderer:

2019-04-25 13:34:17.660 UTC [orderer.common.broadcast] ProcessMessage -> WARN 025 [channel: channel1] Rejecting broadcast of message from 192.168.94.15:53598 with SERVICE_UNAVAILABLE: rejected by Consenter: backing Kafka cluster has not completed booting; try again later
2019-04-25 13:34:17.660 UTC [comm.grpc.server] 1 -> INFO 026 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Broadcast grpc.peer_address=192.168.94.15:53598 grpc.code=OK grpc.call_duration=14.805833ms
2019-04-25 13:34:17.661 UTC [common.deliver] Handle -> WARN 027 Error reading from 192.168.94.15:53596: rpc error: code = Canceled desc = context canceled
2019-04-25 13:34:17.661 UTC [comm.grpc.server] 1 -> INFO 028 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=192.168.94.15:53596 error="rpc error: code = Canceled desc = context canceled" grpc.code=Canceled grpc.call_duration=24.987468ms

The Kafka leader logs the following error:

[2019-04-25 14:07:09,453] WARN [SocketServer brokerId=2] Unexpected error from /192.168.89.200; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 369295617 larger than 104857600)
        at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:132)
        at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:93)
        at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:231)
        at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:192)
        at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:528)
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:469)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:398)
        at kafka.network.Processor.poll(SocketServer.scala:535)
        at kafka.network.Processor.run(SocketServer.scala:452)
        at java.lang.Thread.run(Thread.java:748)
[2019-04-25 14:13:53,917] INFO [GroupMetadataManager brokerId=2] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)

The error says that the broker received a message larger than the maximum allowed size, which defaults to ~100 MB. Try increasing the following property in server.properties so that it can accommodate a larger receive (at least 369295617 bytes in this case):

# Set to 500MB
socket.request.max.bytes=500000000

Then restart the Kafka cluster.
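
If your brokers run as containers on EKS (for example from the hyperledger/fabric-kafka image) rather than from a hand-edited server.properties, the same setting is typically supplied as an environment variable. This sketch assumes an image that maps KAFKA_* variables onto server.properties entries:

# Container-environment equivalent of socket.request.max.bytes
KAFKA_SOCKET_REQUEST_MAX_BYTES=500000000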

If that does not work for you, then my guess is that you are connecting a TLS client to a non-SSL listener. The reported size is the giveaway: 369295617 is 0x16030101 in hex, i.e. the first four bytes of a TLS handshake record, which the broker misreads as a length field when a TLS ClientHello arrives on a plaintext port. So verify that the broker's SSL listener is on port 9092 (or on the corresponding port, if you are not using the default). The following should do the trick:

listeners=SSL://:9092
advertised.listeners=SSL://:9092
inter.broker.listener.name=SSL
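
After restarting, you can confirm that the listener actually speaks TLS; kafka0 below is a placeholder for one of your broker hosts:

# A successful handshake prints the broker's certificate chain;
# an immediate disconnect suggests the port is still plaintext
openssl s_client -connect kafka0:9092 </dev/null

Also check the other side of the handshake: the orderers must be configured to use TLS toward Kafka (Kafka.TLS.Enabled in orderer.yaml, or the ORDERER_KAFKA_TLS_ENABLED environment variable, together with the client certificate and key), otherwise enabling SSL on the brokers merely reverses the mismatch.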
