Confluent Kafka Connect Docker Container Issue



I am using the following docker-compose snippet:

connect:
  image: confluentinc/cp-kafka-connect:latest
  hostname: connect
  container_name: connect
  depends_on:
    - zookeeper
    - kafka
  ports:
    - "8083:8083"
  environment:
    CONNECT_BOOTSTRAP_SERVERS: 'kafka:9092'
    CONNECT_REST_ADVERTISED_HOST_NAME: connect
    CONNECT_GROUP_ID: compose-connect-group
    CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
    CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
    CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
    CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
    CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_PLUGIN_PATH: /usr/share/java
    CONNECT_ZOOKEEPER_CONNECT: 'zookeeper:2181'
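
For what it's worth, the cp-kafka-connect image translates each CONNECT_* environment variable into the matching worker property (CONNECT_PLUGIN_PATH becomes plugin.path, and so on), and the worker logs its effective configuration at startup. A quick sanity check that the setting actually reached the worker:

# The worker prints its effective config when it boots; look for plugin.path
docker logs connect 2>&1 | grep -i 'plugin.path'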

The container appears to start up fine, but when I try to add an HDFS sink connector via the Connect container's REST API:

curl -s -X POST -H 'Content-Type: application/json' \
  --data @confluent_hdfs.json http://localhost:8083/connectors

where the confluent_hdfs.json file contains:

{
  "name": "hdfs-sink",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "1",
    "topics": "test",
    "hdfs.url": "hdfs://localhost:9000",
    "flush.size": "1000",
    "name": "hdfs-sink"
  }
}

I get a 500 HTTP response. Checking the Connect container's logs shows:

WARN /connectors (org.eclipse.jetty.server.HttpChannel)
javax.servlet.ServletException: javax.servlet.ServletException:
org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: 
io/confluent/connect/hdfs/HdfsSinkConnectorConfig
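
Worth noting: a NoClassDefFoundError (rather than a ClassNotFoundException for the connector class itself) usually means the connector class was loaded but one of the classes it depends on could not be, which points at a broken or incomplete jar directory rather than a wrong plugin.path. A rough check, assuming the bundled HDFS connector sits in its usual directory in this image (the path is an assumption):

# List the HDFS connector's jars; HdfsSinkConnectorConfig should be packaged here
docker exec connect ls /usr/share/java/kafka-connect-hdfs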

While investigating this issue, I came across the following post:

https://github.com/confluentinc/kafka-connect-hdfs/issues/273

which suggests the plugin path is wrong. As far as I can tell, I have set it correctly to /usr/share/java, and I can also see the correctly configured symlinks that post refers to.

Additionally, when issuing the request:

curl http://localhost:8083/connector-plugins

I see the following response:

[
{"class":"io.confluent.connect.hdfs.HdfsSinkConnector","type":"sink","version":"4.1.1"},
{"class":"io.confluent.connect.hdfs.tools.SchemaSourceConnector","type":"source","version":"1.1.1-cp1"},
{"class":"org.apache.kafka.connect.file.FileStreamSinkConnector","type":"sink","version":"1.1.1-cp1"},
{"class":"org.apache.kafka.connect.file.FileStreamSourceConnector","type":"source","version":"1.1.1-cp1"}
]
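
Since the plugin clearly shows up in that list, one further probe is Connect's config validation endpoint, which exercises the connector's configuration classes without actually creating the connector; if the jars are broken it should surface the same class-loading error. Note the endpoint takes the flat config map, not the wrapped name/config document:

curl -s -X PUT -H 'Content-Type: application/json' \
  --data '{"connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector", "tasks.max": "1", "topics": "test", "hdfs.url": "hdfs://localhost:9000", "flush.size": "1000", "name": "hdfs-sink"}' \
  http://localhost:8083/connector-plugins/io.confluent.connect.hdfs.HdfsSinkConnector/config/validate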

So I'm really not sure whether I've missed something in the compose file, or whether I'm missing something else here.

Thanks to dawsaw: working through the example you suggested, I realized the problem was with a connector plugin I had installed by mounting its folder as a volume. Unfortunately, I had mounted the connector into the wrong part of the connect container, which apparently compromised the container's ability to run correctly.
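
For anyone hitting the same thing: each plugin needs its own subdirectory directly under one of the plugin.path entries, and the mount must not shadow the directories the image already ships there. Something along these lines (the directory name is from my setup) confirms the mount landed where Connect expects it:

# The mounted plugin should appear as its own directory under /usr/share/java
docker exec connect ls /usr/share/java | grep -i rabbitmq
docker exec connect ls /usr/share/java/confluentinc-kafka-connect-rabbitmq-1.0.0-preview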

In the end, what worked for me was:

connect:
  image: confluentinc/cp-kafka-connect:4.1.1
  container_name: connect
  restart: always
  ports:
    - "8083:8083"
  depends_on:
    - zookeeper
    - kafka
  volumes:
    - $PWD/confluentinc-kafka-connect-rabbitmq-1.0.0-preview:/usr/share/java/confluentinc-kafka-connect-rabbitmq-1.0.0-preview
  environment:
    CONNECT_BOOTSTRAP_SERVERS: "kafka:9092"
    CONNECT_REST_ADVERTISED_HOST_NAME: "connect"
    CONNECT_REST_PORT: 8083
    CONNECT_GROUP_ID: "connect"
    CONNECT_CONFIG_STORAGE_TOPIC: connect-config
    CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
    CONNECT_STATUS_STORAGE_TOPIC: connect-status
    CONNECT_REPLICATION_FACTOR: 1
    CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.storage.StringConverter"
    CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
    CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
    CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
    CONNECT_PLUGIN_PATH: "/usr/share/java"
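
After recreating the container, the mounted plugin appears alongside the bundled ones, which makes for an easy check of the volume layout before POSTing any connector config:

docker-compose up -d connect
curl -s http://localhost:8083/connector-plugins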

Thanks again for your help with this, and apologies for the poor example snippet I originally posted.
