Kafka creates consumers and rebalances in a loop with two Tomcats



I have two Tomcats, each running a web application. I run the Kafka and ZooKeeper services with Docker, then start the Tomcats. In the Kafka console I see two consumers being created every second, with the following messages:

kafka_1      | [2019-12-20 16:30:20,725] INFO [GroupCoordinator 1001]: Stabilized group 1001 generation 12902 (__consumer_offsets-24) (kafka.coordinator.group.GroupCoordinator)
kafka_1      | [2019-12-20 16:30:20,730] INFO [GroupCoordinator 1001]: Assignment received from leader for group 1001 for generation 12902 (kafka.coordinator.group.GroupCoordinator)
kafka_1      | [2019-12-20 16:30:21,059] INFO [GroupCoordinator 1001]: Preparing to rebalance group 1001 in state PreparingRebalance with old generation 12902 (__consumer_offsets-24) (reason: Adding new member consumer-1-5c607368-a22c-44dd-b460-6f33101e3e7a with group instanceid None) (kafka.coordinator.group.GroupCoordinator)
kafka_1      | [2019-12-20 16:30:21,060] INFO [GroupCoordinator 1001]: Stabilized group 1001 generation 12903 (__consumer_offsets-24) (kafka.coordinator.group.GroupCoordinator)
kafka_1      | [2019-12-20 16:30:21,063] INFO [GroupCoordinator 1001]: Assignment received from leader for group 1001 for generation 12903 (kafka.coordinator.group.GroupCoordinator)
kafka_1      | [2019-12-20 16:30:21,749] INFO [GroupCoordinator 1001]: Preparing to rebalance group 1001 in state PreparingRebalance with old generation 12903 (__consumer_offsets-24) (reason: Adding new member consumer-1-01c204d3-0e36-487e-ac13-374aaf4d84fd with group instanceid None) (kafka.coordinator.group.GroupCoordinator)
kafka_1      | [2019-12-20 16:30:21,751] INFO [GroupCoordinator 1001]: Stabilized group 1001 generation 12904 (__consumer_offsets-24) (kafka.coordinator.group.GroupCoordinator)
kafka_1      | [2019-12-20 16:30:21,754] INFO [GroupCoordinator 1001]: Assignment received from leader for group 1001 for generation 12904 (kafka.coordinator.group.GroupCoordinator)
kafka_1      | [2019-12-20 16:30:22,081] INFO [GroupCoordinator 1001]: Preparing to rebalance group 1001 in state PreparingRebalance with old generation 12904 (__consumer_offsets-24) (reason: Adding new member consumer-1-4993cf30-5924-47db-9c63-2b1008f98924 with group instanceid None) (kafka.coordinator.group.GroupCoordinator)

I use this docker-compose.yml:

version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 127.0.0.1
      KAFKA_CREATE_TOPICS: "clinicaleventmanager:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

If I run only one Tomcat, the problem does not occur. Why does this happen, and how can I avoid it? Thanks.

This happens because a rebalance is triggered every time a new consumer joins the same group on a topic. A topic is just a layer on top of partitions: when you subscribe a new consumer, it is actually assigned partitions. Kafka is designed this way because ordering matters, and ordering can only be maintained as long as you have no more consumers than partitions (within a group, each partition is consumed by at most one consumer). That is why you see those log messages.
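The constraint described above can be sketched with a tiny simulation (this is an illustration, not Kafka's actual assignor code): with a one-partition topic, as created by the compose file, a second consumer in the same group gets nothing to do, and each join bumps the group generation, which is exactly the rebalance churn visible in the broker log.

```python
# Sketch of group assignment: within one consumer group, each partition is
# assigned to at most one consumer, so consumers beyond the partition count
# sit idle, and every member that joins triggers a new generation (rebalance).

def assign(partitions, consumers):
    """Round-robin style assignment: partition i goes to consumer i % len(consumers)."""
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

partitions = ["clinicaleventmanager-0"]  # topic created with 1 partition ("clinicaleventmanager:1:1")
generation = 0
group = []

for new_member in ["tomcat1-consumer", "tomcat2-consumer"]:
    group.append(new_member)   # a member joins the group...
    generation += 1            # ...which bumps the generation and forces a rebalance
    print(f"generation {generation}: {assign(partitions, group)}")
```

Running this shows that in generation 2 the second consumer is assigned an empty partition list; in the real setup, the two Tomcat consumers repeatedly joining the same group is what keeps the coordinator cycling through `PreparingRebalance` / `Stabilized`.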

Solved! The problem was that in kafka.properties, the group.id property must be different for each Tomcat.
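For example (a sketch; the property names below other than `group.id` are just illustrative values), giving each Tomcat its own group.id puts the two instances in separate consumer groups, so they stop competing for the single partition:

```properties
# kafka.properties on Tomcat 1
group.id=clinicaleventmanager-tomcat1

# kafka.properties on Tomcat 2
group.id=clinicaleventmanager-tomcat2
```

Note the trade-off: with distinct group.ids each Tomcat receives every message. If you instead want the two Tomcats to share the work, keep the same group.id but create the topic with at least two partitions (e.g. `KAFKA_CREATE_TOPICS: "clinicaleventmanager:2:1"`).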

I removed group.id from the properties file and, like magic, it worked!
