We are running the Kafka HDFS Sink Connector (version 5.2.1) and need to partition the data in HDFS by multiple nested fields. The connector does not recognize nested fields and throws an error saying the field cannot be found. Below is the connector configuration we are using. Does the HDFS Sink Connector not support partitioning by nested fields? Partitioning by non-nested fields works for me.

{
"connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
"topics.dir": "/projects/test/kafka/logdata/coss",
"avro.codec": "snappy",
"flush.size": "200",
"connect.hdfs.principal": "test@DOMAIN.COM",
"rotate.interval.ms": "500000",
"logs.dir": "/projects/test/kafka/tmp/wal/coss4",
"hdfs.namenode.principal": "hdfs/_HOST@HADOOP.DOMAIN",
"hadoop.conf.dir": "/etc/hdfs",
"topics": "test1",
"connect.hdfs.keytab": "/etc/hdfs-qa/test.keytab",
"hdfs.url": "hdfs://nameservice1:8020",
"hdfs.authentication.kerberos": "true",
"name": "hdfs_connector_v1",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"value.converter": "io.confluent.connect.avro.AvroConverter",
"value.converter.schema.registry.url": "http://myschema:8081",
"partition.field.name": "meta.ID,meta.source,meta.HH",
"partitioner.class": "io.confluent.connect.storage.partitioner.FieldPartitioner"
}
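
One workaround we are looking at is a custom partitioner that walks the dotted paths through nested Structs. A minimal sketch only, assuming the parsed config hands partition.field.name to the partitioner as a List<String> of dotted paths and that "/" is the directory delimiter; the class and package names are hypothetical:

package com.example.connect;

import io.confluent.connect.storage.partitioner.FieldPartitioner;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.sink.SinkRecord;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class NestedFieldPartitioner<T> extends FieldPartitioner<T> {

  // Dotted paths from "partition.field.name", e.g. ["meta.ID", "meta.source", "meta.HH"].
  private List<String> fieldPaths;

  @Override
  @SuppressWarnings("unchecked")
  public void configure(Map<String, Object> config) {
    super.configure(config);
    // "partition.field.name" is a LIST-typed config, so the parsed value should
    // arrive as a List<String>; adjust if your version passes a raw String instead.
    fieldPaths = (List<String>) config.get("partition.field.name");
  }

  @Override
  public String encodePartition(SinkRecord sinkRecord) {
    Object value = sinkRecord.value();
    if (!(value instanceof Struct)) {
      throw new ConnectException("Record value is not a Struct: " + sinkRecord);
    }
    Struct root = (Struct) value;

    List<String> segments = new ArrayList<>();
    for (String path : fieldPaths) {
      // Walk "a.b.c" one level at a time through nested Structs.
      String[] parts = path.split("\\.");
      Struct current = root;
      for (int i = 0; i < parts.length - 1; i++) {
        current = current.getStruct(parts[i]);
      }
      Object leaf = current.get(parts[parts.length - 1]);
      segments.add(path + "=" + leaf);
    }
    // Assumes "/" as the directory delimiter, producing e.g.
    // "meta.ID=42/meta.source=app1/meta.HH=13".
    return String.join("/", segments);
  }
}

The jar would go on the connector's plugin path, and partitioner.class would then point at the custom class instead of io.confluent.connect.storage.partitioner.FieldPartitioner.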
I added nested field support for the TimestampPartitioner, but the FieldPartitioner still has an outstanding PR:
https://github.com/confluentinc/kafka-connect-storage-common/pull/67
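
For comparison, the TimeBasedPartitioner with the RecordField extractor is where that nested-field support landed, so a time-based layout can already key off a nested timestamp. A hedged example only; the field name meta.eventTime is illustrative and this assumes your build of kafka-connect-storage-common includes the nested-field change:

"partitioner.class": "io.confluent.connect.storage.partitioner.TimeBasedPartitioner",
"partition.duration.ms": "3600000",
"path.format": "'dt'=YYYY-MM-dd/'hour'=HH",
"locale": "en-US",
"timezone": "UTC",
"timestamp.extractor": "RecordField",
"timestamp.field": "meta.eventTime"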