All connectors go into FAILED state after deleting and re-creating any connector



I have a Kafka setup with SSL enabled and I register connectors on Kafka Connect with a POST request (below). On a fresh Connect setup with no existing connectors, registration works fine. However, if I delete any connector, after some time all connectors go into the FAILED state with a timeout exception (below). If I stop Kafka Connect, delete all of the Kafka Connect metadata topics from Kafka, and restart it, the problem goes away until I register the connectors again. I suspect the Kafka Connect metadata topics are not being updated correctly, but I have not been able to pinpoint the problem or find a solution. Here is the POST request:

curl -k -v -X POST -H "Accept:application/json"  -H "Content-Type:application/json"  https://kafka-connect.domain.com:9093/connectors/  -d '{
    "name": "TEST-CONNECTOR-TEST1131",
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "test.domain.com",
        "database.port": "3306",
        "database.user": "debezium",
        "database.password": "test",
        "database.serverTimezone":"America/Los_Angeles",
        "database.server.id": "201908281131",
        "database.server.name": "TEST-CONNECTOR",
        "database.history.kafka.bootstrap.servers": "kafka1.domain.com:9094",
        "database.history.kafka.topic": "dbhistory.test_201908281131",
        "include.schema.changes": "true",
        "table.whitelist": "qwerdb.test1",
        "database.history.producer.sasl.mechanism": "PLAIN",
        "database.history.producer.security.protocol": "SASL_SSL",
        "database.history.producer.ssl.key.password": "test",
        "database.history.producer.ssl.keystore.location": "/opt/keystore.jks",
        "database.history.producer.ssl.keystore.password": "test",
        "database.history.producer.ssl.truststore.location": "/opt/truststore.jks",
        "database.history.producer.ssl.truststore.password": "test"
    }
}'

Here is the exception trace:

"trace": "org.apache.kafka.connect.errors.ConnectException: org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
	at io.debezium.connector.mysql.MySqlConnectorTask.start(MySqlConnectorTask.java:273)
	at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:47)
	at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:198)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata"

The problem went away after adding the producer security settings to the Kafka Connect connect-distributed.properties file. They were missing from that file.
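For reference, worker-level producer settings in connect-distributed.properties use the `producer.` prefix and would mirror the `database.history.producer.*` values from the connector config above. A sketch of the additions, reusing the placeholder paths and passwords from the question (substitute your own values):

```properties
# Worker-level producer security settings (prefix "producer."),
# mirroring the connector's database.history.producer.* config.
# Keystore/truststore paths and passwords are the question's
# placeholder values, not verified ones.
producer.security.protocol=SASL_SSL
producer.sasl.mechanism=PLAIN
producer.ssl.keystore.location=/opt/keystore.jks
producer.ssl.keystore.password=test
producer.ssl.key.password=test
producer.ssl.truststore.location=/opt/truststore.jks
producer.ssl.truststore.password=test
```

With SASL PLAIN you would also typically need a `producer.sasl.jaas.config` entry (or a JAAS file) carrying the username and password; that part is not shown in the question, so it is omitted here.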
