Kafka Connect is stuck in a rebalance loop



I just deployed my Kafka Connect application (I am only using a source connector for MQTT) on a cluster with two instances (2 containers on 2 machines), and it now seems to be stuck in a rebalance loop: a little data came through at the start, but nothing new is arriving. This is what I get in the logs:

[2017-08-11 07:27:35,810] INFO Joined group and got assignment: Assignment{error=0, leader='connect-1-592bcc91-9d99-4c54-b707-3f52d0f8af50', leaderUrl='http:// 10.120.233.78:9040/', offset=2, connectorIds=[SourceConnector1], taskIds=[]} (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1009)
[2017-08-11 07:27:35,810] WARN Catching up to assignment's config offset. (org.apache.kafka.connect.runtime.distributed.DistributedHerder:679)
[2017-08-11 07:27:35,810] INFO Current config state offset 1 is behind group assignment 2, reading to end of config log (org.apache.kafka.connect.runtime.distributed.DistributedHerder:723)
[2017-08-11 07:27:36,310] INFO Finished reading to end of log and updated config snapshot, new config log offset: 1 (org.apache.kafka.connect.runtime.distributed.DistributedHerder:727)
[2017-08-11 07:27:36,310] INFO Current config state offset 1 does not match group assignment 2. Forcing rebalance. (org.apache.kafka.connect.runtime.distributed.DistributedHerder:703)
[2017-08-11 07:27:36,311] INFO Rebalance started (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1030)
[2017-08-11 07:27:36,311] INFO Wasn't unable to resume work after last rebalance, can skip stopping connectors and tasks (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1056)
[2017-08-11 07:27:36,311] INFO (Re-)joining group source-connector11234 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:381)
[2017-08-11 07:27:36,315] INFO Successfully joined group source-connector11234 with generation 28 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:349)
[2017-08-11 07:27:36,317] INFO Joined group and got assignment: Assignment{error=0, leader='connect-1-592bcc91-9d99-4c54-b707-3f52d0f8af50', leaderUrl='http:// 10.120.233.78:9040/', offset=2, connectorIds=[SourceConnector1], taskIds=[]} (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1009)
[2017-08-11 07:27:36,317] WARN Catching up to assignment's config offset. (org.apache.kafka.connect.runtime.distributed.DistributedHerder:679)
[2017-08-11 07:27:36,317] INFO Current config state offset 1 is behind group assignment 2, reading to end of config log (org.apache.kafka.connect.runtime.distributed.DistributedHerder:723)
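The herder lines above keep reporting that this worker's config offset (1) is behind the group assignment (2), which is what forces the next rebalance. As a diagnostic sketch (it assumes the default internal topic name connect-configs; the broker address below is a placeholder), the config topic can be dumped with the console consumer to see what the leader has actually written:

$ # print every record in the Connect config topic, keys included
$ bin/kafka-console-consumer.sh --bootstrap-server your-kafka-broker:9092 \
    --topic connect-configs --from-beginning --property print.key=true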

I ran into a similar problem running two separate containers on a Mesos cluster - the eventual fix was an annoying one that does not seem to be documented anywhere:

Use an odd number of containers!

Some distributed systems rely on their workers to elect a leader. If there are exactly two, each one votes for the other and they get stuck in a loop. That appears to be what is happening here as well.

I hit a similar problem. Adding a second (or third) kafka-connect node to the cluster started an endless rebalance. In my case, the connect-offsets topic had been created with 5 partitions (my Kafka default) instead of the 25 partitions it is normally created with automatically (see https://docs.confluent.io/platform/current/connect/references/allconfigs.html for the connect-offsets / offset storage settings). To check whether this is your case, run the command below and verify that the output looks like this:

$ bin/kafka-topics.sh --topic connect-offsets --bootstrap-server the-url-of-your-kafka:9092 --describe
Topic: connect-offsets  PartitionCount: 25      ReplicationFactor: 3    Configs: cleanup.policy=compact,message.format.version=2.7-IV2
Topic: connect-offsets  Partition: 0    Leader: 1       Replicas: 1,0,2 Isr: 0,2,1
... repeated that line 25 times...

Pay particular attention to the PartitionCount value.
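If the count is lower than what the workers expect, one possible way forward, sketched here with placeholder values, is to raise the partition count of the existing topic. Note that the offset.storage.partitions worker setting only applies when Connect creates the topic itself, so an already-existing topic has to be altered manually:

$ # raise connect-offsets to the partition count the workers expect (25 here)
$ bin/kafka-topics.sh --alter --topic connect-offsets --partitions 25 \
    --bootstrap-server the-url-of-your-kafka:9092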

You can also inspect the configuration of the connect-configs topic with a similar command, for example:
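A sketch, using the same placeholder broker address as above:

$ bin/kafka-topics.sh --topic connect-configs --bootstrap-server the-url-of-your-kafka:9092 --describe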
