Kafka Connect EOFException when using a Confluent Cloud broker and a source connector



I am trying to run the Connect standalone application with a sample source connector (the file source or the JDBC connector). I keep getting endlessly repeating error messages such as:

[2022-11-14 18:33:09,641] DEBUG [local-file-source|task-0] [Producer clientId=connector-producer-local-file-source-0] Connection with xxxx.westeurope.azure.confluent.cloud/ (channelId=-1) disconnected (org.apache.kafka.common.network.Selector:606)
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243)
at java.base/java.lang.Thread.run(Thread.java:1589)
[2022-11-14 18:33:09,643] INFO [local-file-source|task-0] [Producer clientId=connector-producer-local-file-source-0] Node -1 disconnected. (org.apache.kafka.clients.NetworkClient:935)
[2022-11-14 18:33:09,645] DEBUG [local-file-source|task-0] [Producer clientId=connector-producer-local-file-source-0] Cancelled in-flight API_VERSIONS request with correlation id 0 due to node -1 being disconnected (elapsed time since creation: 34ms, elapsed time since send: 34ms, request timeout: 30000ms): ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.2.1') (org.apache.kafka.clients.NetworkClient:335)
[2022-11-14 18:33:09,647] WARN [local-file-source|task-0] [Producer clientId=connector-producer-local-file-source-0] Bootstrap broker xxxx.westeurope.azure.confluent.cloud:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient:1063)
[2022-11-14 18:33:09,757] DEBUG [local-file-source|task-0] [Producer clientId=connector-producer-local-file-source-0] Initialize connection to node xxxx.westeurope.azure.confluent.cloud:9092 (id: -1 rack: null) for sending metadata request (org.apache.kafka.clients.NetworkClient:1160)
[2022-11-14 18:33:09,758] DEBUG [local-file-source|task-0] Resolved host xxxx.westeurope.azure.confluent.cloud as  (org.apache.kafka.clients.ClientUtils:113)
[2022-11-14 18:33:09,758] DEBUG [local-file-source|task-0] [Producer clientId=connector-producer-local-file-source-0] Initiating connection to node xxxx.westeurope.azure.confluent.cloud:9092 (id: -1 rack: null) using address xxxx.westeurope.azure.confluent.cloud/ (org.apache.kafka.clients.NetworkClient:989)
[2022-11-14 18:33:09,787] DEBUG [local-file-source|task-0] [Producer clientId=connector-producer-local-file-source-0] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 (org.apache.kafka.common.network.Selector:531)
[2022-11-14 18:33:09,788] DEBUG [local-file-source|task-0] [Producer clientId=connector-producer-local-file-source-0] Completed connection to node -1. Fetching API versions. (org.apache.kafka.clients.NetworkClient:951)
[2022-11-14 18:33:09,789] DEBUG [local-file-source|task-0] [Producer clientId=connector-producer-local-file-source-0] Initiating API versions fetch from node -1. (org.apache.kafka.clients.NetworkClient:965)
[2022-11-14 18:33:09,789] DEBUG [local-file-source|task-0] [Producer clientId=connector-producer-local-file-source-0] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=connector-producer-local-file-source-0, correlationId=1) and timeout 30000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.2.1') (org.apache.kafka.clients.NetworkClient:521)
[2022-11-14 18:33:09,817] DEBUG [local-file-source|task-0] [Producer clientId=connector-producer-local-file-source-0] Connection with xxxx.westeurope.azure.confluent.cloud/ (channelId=-1) disconnected 

I can create a topic with the kafka-topics.sh command, write messages to it with the console producer, and read them back with the console consumer as well as with connect-standalone running a sink connector.
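
For reference, those checks look roughly like this (client.properties is a hypothetical file holding the same SASL_SSL settings shown in connect-standalone.properties below):

bin/kafka-topics.sh --bootstrap-server pkc-pj9zy.westeurope.azure.confluent.cloud:9092 \
  --command-config client.properties --create --topic test_ksql_af_file_source-test

bin/kafka-console-producer.sh --bootstrap-server pkc-pj9zy.westeurope.azure.confluent.cloud:9092 \
  --producer.config client.properties --topic test_ksql_af_file_source-test

bin/kafka-console-consumer.sh --bootstrap-server pkc-pj9zy.westeurope.azure.confluent.cloud:9092 \
  --consumer.config client.properties --topic test_ksql_af_file_source-test --from-beginning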

If I run the Kafka server and ZooKeeper locally, everything seems to work fine.

Command line:

bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties

connect-standalone.properties

bootstrap.servers=pkc-pj9zy.westeurope.azure.confluent.cloud:9092
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
ssl.endpoint.identification.algorithm=https
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="name" password="passphrase";
ssl.protocol=TLSv1.2
ssl.enabled.protocols=TLSv1.2
plugin.path=./plugins,./libs
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter

connect-file-source.properties

name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=test.txt
topic=test_ksql_af_file_source-test
auto.create=true
auto.evolve=true

Sorted it out

The missing properties were:

producer.security.protocol=SASL_SSL
producer.sasl.mechanism=PLAIN
producer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="name" password="passphrase";

Unfortunately, the logs gave no hint about this.
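
For context: the Connect worker passes only the producer.-prefixed properties to the producers that source connector tasks use (and, analogously, consumer.-prefixed properties to sink-task consumers), so the top-level SASL_SSL settings are not enough on their own. A minimal sketch of the security block in connect-standalone.properties with both sets of overrides (credentials are placeholders; the consumer.* lines are an assumption, only needed if a sink connector is also run):

# Used by the worker's own clients
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="name" password="passphrase";

# Used by producers created for source connector tasks
producer.security.protocol=SASL_SSL
producer.sasl.mechanism=PLAIN
producer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="name" password="passphrase";

# Used by consumers created for sink connector tasks (assumption: only relevant for sink connectors)
consumer.security.protocol=SASL_SSL
consumer.sasl.mechanism=PLAIN
consumer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="name" password="passphrase";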
