I'm trying to read log transactions with a Kafka connector, using kafka-connect-cdc-mssql:1.0.0-preview.
I'm using the Confluent CLI, and I have modified the configuration file:
etc/schema-registry/connect-avro-standalone.properties
I added:
plugin.path=/usr/share/java,/tmp/Softwares/confluent-hub-client-latest/share/confluent-hub-components/confluentinc-kafka-connect-cdc-mssql/lib
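As a sanity check (an optional step, assuming the Connect worker's REST interface is on the default port 8083 on localhost), the plugins the worker actually loaded can be listed to confirm the connector class was picked up:

# List loaded connector plugins; io.confluent.connect.cdc.mssql.MsSqlSourceConnector
# should appear if plugin.path took effect
# (assumes the default Kafka Connect REST port 8083 on localhost).
curl -s http://localhost:8083/connector-plugins | grep -i mssql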
I have this problem when I run the command:
$ confluent load mssqlCDC -d /tmp/Softwares/confluent-hub-client-latest/share/confluent-hub-components/confluentinc-kafka-connect-cdc-mssql/etc/mssqlsource.properties
I get this error message:
(23) Failed writing body
My mssqlsource.properties file:
name=mssqlsource
tasks.max=2
connector.class=io.confluent.connect.cdc.mssql.MsSqlSourceConnector
initial.database=(MYDATABASE)
server.name=(MYSERVER)
server.port=(PORT)
username=(MYUSER)
password=(MYPASS)
change.tracking.tables=(MYTABLE)
The confluent load command actually runs a curl command in the background, so it may be easier to run curl yourself and get the proper error. Convert your properties file into a mssqlsource.json file:
{
  "name": "mssqlsource",
  "config": {
    "tasks.max": 2,
    "connector.class": "io.confluent.connect.cdc.mssql.MsSqlSourceConnector",
    "initial.database": "(MYDATABASE)",
    "server.name": "(MYSERVER)",
    "server.port": "(PORT)",
    "username": "(MYUSER)",
    "password": "(MYPASS)",
    "change.tracking.tables": "(MYTABLE)"
  }
}
Then run curl -XPOST -H "Content-Type: application/json" --data @/path/to/mssqlsource.json connect-host:8083/connectors
This will give you a better error message, and it is closer to a real-world example of using Kafka Connect, since the confluent CLI is meant for local development environments.
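Once the POST succeeds, the same REST interface can also confirm that the connector and its tasks are actually running. A quick sketch, again assuming the worker is reachable at connect-host:8083:

# Check connector and task state; a FAILED task includes its stack trace
# in the response (standard Kafka Connect REST API).
curl -s http://connect-host:8083/connectors/mssqlsource/status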