I use the official Kafka RabbitMQ connector to fetch JSON messages from a RabbitMQ instance. Currently the connector is configured as follows:
name=RabbitMQConsentConsumer
connector.class=io.confluent.connect.rabbitmq.RabbitMQSourceConnector
rabbitmq.host=HOST
rabbitmq.port=PORT
rabbitmq.username=USER
rabbitmq.password=PASSWORD
rabbitmq.virtual.host=vhost
tasks.max=1
kafka.topic=TOPIC
rabbitmq.queue=QUEUE
errors.log.enable=true
errors.log.include.messages=true
key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
key.converter.schemas.enable=false
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
value.converter.schemas.enable=false
With this configuration, I can inject my JSON messages into Kafka as byte arrays, and consumers recognize them as JSON content. That is "fairly" good.
But I would like to generate a key from the JSON content inside Kafka Connect. For that, I want to use a transform (SMT).
So I started playing with transforms and added the following configuration:
transforms=MakeMap
transforms.MakeMap.type=org.apache.kafka.connect.transforms.HoistField$Value
transforms.MakeMap.field=jsonEvent
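For reference, a conceptual sketch (not the actual Kafka Connect implementation) of what HoistField$Value is supposed to do with this configuration — wrap the existing record value inside a new single-field structure keyed by the configured field name:

```python
# Conceptual sketch only: mimics the effect of HoistField$Value,
# which nests the record value under the configured field name.
def hoist_field(value, field_name):
    """Wrap `value` under `field_name`, like HoistField$Value does."""
    return {field_name: value}

# A raw JSON payload, as bytes (like the ByteArray-converted message).
payload = b'{"user": "alice", "consent": true}'
print(hoist_field(payload, "jsonEvent"))
# {'jsonEvent': b'{"user": "alice", "consent": true}'}
```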
Unfortunately, it fails with the following message:
[2019-04-25 08:04:25,128] ERROR Error encountered in task RabbitMQConsentConsumer-0. Executing stage 'VALUE_CONVERTER' with class 'org.apache.kafka.connect.converters.ByteArrayConverter', where source record is = SourceRecord{sourcePartition={routingKey=TOPIC}, sourceOffset={deliveryTag=1}} ConnectRecord{topic='TOPIC', kafkaPartition=null, key=null, keySchema=Schema{STRING}, value=[B@20a106a8, valueSchema=Schema{BYTES}, timestamp=1556179465126, headers=ConnectHeaders(headers=[ConnectHeader(key=rabbitmq.consumer.tag, value=amq.ctag-7PHmHnIJF2WCnxJShauM3A, schema=Schema{STRING}), ConnectHeader(key=rabbitmq.content.type, value=null, schema=Schema{STRING}), ConnectHeader(key=rabbitmq.content.encoding, value=null, schema=Schema{STRING}), ConnectHeader(key=rabbitmq.delivery.mode, value=1, schema=Schema{INT32}), ConnectHeader(key=rabbitmq.priority, value=0, schema=Schema{INT32}), ConnectHeader(key=rabbitmq.correlation.id, value=null, schema=Schema{STRING}), ConnectHeader(key=rabbitmq.reply.to, value=null, schema=Schema{STRING}), ConnectHeader(key=rabbitmq.expiration, value=null, schema=Schema{STRING}), ConnectHeader(key=rabbitmq.message.id, value=null, schema=Schema{STRING}), ConnectHeader(key=rabbitmq.timestamp, value=null, schema=Schema{org.apache.kafka.connect.data.Timestamp:INT64}), ConnectHeader(key=rabbitmq.type, value=null, schema=Schema{STRING}), ConnectHeader(key=rabbitmq.user.id, value=null, schema=Schema{STRING}), ConnectHeader(key=rabbitmq.app.id, value=null, schema=Schema{STRING}), ConnectHeader(key=rabbitmq.delivery.tag, value=1, schema=Schema{INT64}), ConnectHeader(key=rabbitmq.redeliver, value=false, schema=Schema{BOOLEAN}), ConnectHeader(key=rabbitmq.exchange, value=, schema=Schema{STRING}), ConnectHeader(key=rabbitmq.routing.key, value=TOPIC, schema=Schema{STRING})])}. (org.apache.kafka.connect.runtime.errors.LogReporter)
org.apache.kafka.connect.errors.DataException: Invalid schema type for ByteArrayConverter: STRUCT
at org.apache.kafka.connect.converters.ByteArrayConverter.fromConnectData(ByteArrayConverter.java:55)
at org.apache.kafka.connect.runtime.WorkerSourceTask.lambda$convertTransformedRecord$2(WorkerSourceTask.java:269)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
at org.apache.kafka.connect.runtime.WorkerSourceTask.convertTransformedRecord(WorkerSourceTask.java:269)
at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:293)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:228)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2019-04-25 08:04:25,128] INFO WorkerSourceTask{id=RabbitMQConsentConsumer-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask)
[2019-04-25 08:04:25,128] INFO WorkerSourceTask{id=RabbitMQConsentConsumer-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask)
[2019-04-25 08:04:25,128] ERROR WorkerSourceTask{id=RabbitMQConsentConsumer-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
at org.apache.kafka.connect.runtime.WorkerSourceTask.convertTransformedRecord(WorkerSourceTask.java:269)
at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:293)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:228)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.DataException: Invalid schema type for ByteArrayConverter: STRUCT
at org.apache.kafka.connect.converters.ByteArrayConverter.fromConnectData(ByteArrayConverter.java:55)
at org.apache.kafka.connect.runtime.WorkerSourceTask.lambda$convertTransformedRecord$2(WorkerSourceTask.java:269)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
... 11 more
[2019-04-25 08:04:25,129] ERROR WorkerSourceTask{id=RabbitMQConsentConsumer-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask)
As far as I understand, my org.apache.kafka.connect.transforms.HoistField$Value
transform fails because my data is a byte array rather than a struct/map. With that in mind, can I "upgrade" my message from a byte array to JSON?
The Kafka Connect source data pipeline first retrieves data from the source, applies the configured transforms to it, and then applies the configured converter to the transformed data before pushing it to the given topic.
Now, in your case, the data arrives as a byte array; after the transform is applied, it has become a STRUCT, a type that ByteArrayConverter does not support. That is why you get this exception.
HoistField is used to wrap the data into a STRUCT (when a schema is provided for the source data) or a Map: https://docs.confluent.io/current/connect/transforms/hoistfield.html#hoistfield
As @Robin Moffatt said, try the JSON converter instead; it should work.
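A minimal sketch of that change, assuming your payload is valid JSON: swap the value converter for Kafka's built-in JsonConverter, which can serialize the structure produced by the transform.

```
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
```

Note that with `schemas.enable=false` the converter emits plain JSON without the schema/payload envelope; whether the hoisted byte-array field serializes as the JSON you expect (rather than, e.g., a base64-encoded string) depends on the record's schema, so verify the output on your topic.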