Kafka Connect distributed NOT_ENOUGH_REPLICAS error



I am setting up the Kafka S3 sink connector. When I run it with connect-standalone.sh, everything works fine.

Now that I have switched to connect-distributed.sh, I get the following error:

WARN [Producer clientId=producer-3] Got error produce response with correlation id 40 on topic-partition connect-configs-0, retrying (2147483611 attempts left). Error: NOT_ENOUGH_REPLICAS (org.apache.kafka.clients.producer.internals.Sender:637)

I tried changing

offset.storage.replication.factor=5
config.storage.replication.factor=5
status.storage.replication.factor=5

which is the actual number of replicas, but the error does not go away.
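
For reference, the state of the connect-configs topic named in the error can be checked directly; a quick sketch (localhost:9092 stands in for the real bootstrap address):

# Inspect the topic the Connect producer is failing to write to: its
# replication factor, current ISR, and any min.insync.replicas override.
kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic connect-configs
kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name connect-configs --describe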

I also tried changing all of these values to match min.insync.replicas, but it still did not work.

Topic: myTopic    PartitionCount: 12      ReplicationFactor: 5    Configs: compression.type=producer,***min.insync.replicas=2***,segment.bytes=1073741824,max.message.bytes=2097152,index.interval.bytes=4096,retention.bytes=-1,segment.index.bytes=10485760
Topic: myTopic        Partition: 0    Leader: 43      Replicas: 43,11,9,42,10 Isr: 10,42,9,11,43
Topic: myTopic        Partition: 1    Leader: 11      Replicas: 11,42,10,43,9 Isr: 10,42,9,11,43
Topic: myTopic        Partition: 2    Leader: 9       Replicas: 9,42,10,43,11 Isr: 10,42,9,11,43
Topic: myTopic        Partition: 3    Leader: 42      Replicas: 42,10,43,11,9 Isr: 10,42,9,11,43
Topic: myTopic        Partition: 4    Leader: 10      Replicas: 10,43,11,9,42 Isr: 10,42,9,11,43
Topic: myTopic        Partition: 5    Leader: 43      Replicas: 43,10,11,9,42 Isr: 10,42,9,11,43
Topic: myTopic        Partition: 6    Leader: 11      Replicas: 11,43,9,42,10 Isr: 10,42,9,11,43
Topic: myTopic        Partition: 7    Leader: 9       Replicas: 9,43,11,42,10 Isr: 10,42,9,11,43
Topic: myTopic        Partition: 8    Leader: 42      Replicas: 42,11,9,10,43 Isr: 10,42,9,11,43
Topic: myTopic        Partition: 9    Leader: 10      Replicas: 10,42,43,11,9 Isr: 10,42,9,11,43
Topic: myTopic        Partition: 10   Leader: 43      Replicas: 43,11,9,42,10 Isr: 10,42,9,11,43
Topic: myTopic        Partition: 11   Leader: 11      Replicas: 11,42,10,43,9 Isr: 10,42,9,11,43

root@9c4c4d97dcd6:/opt/bitnami/kafka/bin# grep replication.factor= /plugins/worker.properties
offset.storage.replication.factor=2
config.storage.replication.factor=2
status.storage.replication.factor=2

Is there anything wrong with my configuration? Thanks.

In reply to your answer/questions:

"left over from running connect-standalone.sh"

Standalone mode does not use these three topics or properties, so it would never throw this error.


"tried changing all the values to match min.insync.replicas"

For any topic, the replication factor should ideally always be greater than the min ISR. You can make them equal, but doing so is not strictly required to fix this error.
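
For context, NOT_ENOUGH_REPLICAS is what the broker returns when a producer using acks=all writes to a partition whose in-sync replica count is below that topic's min.insync.replicas. If you ever do want to adjust the topic-level override, a sketch (the topic name and broker address below are placeholders):

# Example only: set or change the min.insync.replicas override on a topic.
# Keep the value below the topic's replication factor so a single broker
# outage does not block writes.
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-topic \
  --alter --add-config min.insync.replicas=2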

"it also needs to match the replication factor on the server side"

It is not clear what this means. If the topics already exist, distributed Connect will simply use them rather than try to re-create them or verify that the replication settings match the actual topics.

I would recommend a replication factor of at least 3, since you have more than 2 brokers.
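
Following that logic, if you want connect-distributed.sh to create its internal topics with the replication factor from worker.properties, the existing internal topics first have to go away (either deleted, or sidestepped by pointing the worker at new topic names). A sketch, assuming localhost:9092 and that nothing else still needs the data in them:

# Remove the stale internal topics so the distributed worker can recreate
# them using offset/config/status.storage.replication.factor.
# Destructive: only do this if the stored offsets/configs are disposable.
kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic connect-offsets
kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic connect-configs
kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic connect-status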

It turned out I just needed to re-create the following topics in Kafka:

offset.storage.topic=connect-offsets-distributed
config.storage.topic=connect-configs-distributed
status.storage.topic=connect-status-distributed

Note that I added the -distributed suffix. Previously I had

offset.storage.topic=connect-offsets
config.storage.topic=connect-configs
status.storage.topic=connect-status

Those were left over from running connect-standalone.sh, and the leftover topics do not work with the distributed connector.

Also, it needs to match the replication factor on the server side.

Topic: connect-status-distributed       PartitionCount: 5       ReplicationFactor: 2    Configs: compression.type=producer,min.insync.replicas=2,cleanup.policy=compact,segment.bytes=1073741824,max.message.bytes=2097152,index.interval.bytes=4096,retention.bytes=-1,segment.index.bytes=10485760
Topic: connect-status-distributed       Partition: 0    Leader: 43      Replicas: 43,11 Isr: 43,11
Topic: connect-status-distributed       Partition: 1    Leader: 11      Replicas: 11,42 Isr: 11,42
Topic: connect-status-distributed       Partition: 2    Leader: 9       Replicas: 9,42  Isr: 9,42
Topic: connect-status-distributed       Partition: 3    Leader: 42      Replicas: 42,10 Isr: 42,10
Topic: connect-status-distributed       Partition: 4    Leader: 10      Replicas: 10,43 Isr: 10,43
Topic: connect-configs-distributed      PartitionCount: 1       ReplicationFactor: 2    Configs: compression.type=producer,min.insync.replicas=2,cleanup.policy=compact,segment.bytes=1073741824,max.message.bytes=2097152,index.interval.bytes=4096,retention.bytes=-1,segment.index.bytes=10485760
Topic: connect-configs-distributed      Partition: 0    Leader: 11      Replicas: 11,43 Isr: 11,43
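
For reference, instead of letting the worker auto-create these topics, they can also be created by hand before starting connect-distributed.sh. A rough sketch (localhost:9092 is a placeholder; partition counts and replication factor for the status and configs topics mirror the output above, and 25 partitions for the offsets topic is only Connect's usual default since that topic is not shown):

# Connect expects its internal topics to be compacted. Per the answer above,
# a replication factor of 3 would be safer than 2 with min.insync.replicas=2.
kafka-topics.sh --bootstrap-server localhost:9092 --create \
  --topic connect-configs-distributed --partitions 1 --replication-factor 2 \
  --config cleanup.policy=compact
kafka-topics.sh --bootstrap-server localhost:9092 --create \
  --topic connect-status-distributed --partitions 5 --replication-factor 2 \
  --config cleanup.policy=compact
kafka-topics.sh --bootstrap-server localhost:9092 --create \
  --topic connect-offsets-distributed --partitions 25 --replication-factor 2 \
  --config cleanup.policy=compact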
