We are using the S3 Sink connector to move data from our AWS MSK cluster into an S3 bucket.
We have deployed the Kafka S3 Sink connector on AWS EKS (Kubernetes).
When we start the connector, it fails as soon as a multipart upload to the S3 bucket is attempted.
The S3 bucket has a policy restriction that enforces server-side encryption (AWS-KMS), i.e. we cannot upload objects without a KMS key.
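For context, the enforcement looks roughly like the following bucket-policy statement (a sketch of the common "deny unencrypted uploads" pattern; the Sid, bucket ARN, and exact wording are illustrative, not our actual policy):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUploadsWithoutKmsEncryption",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::s3-bucket-name/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "aws:kms"
        }
      }
    }
  ]
}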
Below is the configuration we use for the connector, followed by the error details for reference.
Please help.
{
  "name": "kc-s3-nuoAccountIDs-sink",
  "connector.class": "io.confluent.connect.s3.S3SinkConnector",
  "topics": "test-topic",
  "tasks.max": "1",
  "s3.bucket.name": "s3-bucket-name",
  "value.converter.schemas.enable": "false",
  "storage.class": "io.confluent.connect.s3.storage.S3Storage",
  "format.class": "io.confluent.connect.s3.format.avro.AvroFormat",
  "partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner",
  "behavior.on.null.values": "ignore",
  "schema.compatibility": "NONE",
  "partition.duration.ms": 3600000,
  "path.format": "'year'=YYYY/'month'=MM/'day'=dd/'hour'=HH",
  "locale": "en-US",
  "timezone": "UTC",
  "partition.field.name": "accountplatform",
  "s3.region": "eu-west-2",
  "flush.size": 100000
}
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:235)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.kafka.connect.errors.ConnectException: Unable to initiate MultipartUpload
at io.confluent.connect.s3.storage.S3OutputStream.newMultipartUpload(S3OutputStream.java:230)
at io.confluent.connect.s3.storage.S3OutputStream.uploadPart(S3OutputStream.java:139)
at io.confluent.connect.s3.storage.S3OutputStream.commit(S3OutputStream.java:165)
at io.confluent.connect.s3.format.avro.AvroRecordWriterProvider$1.commit(AvroRecordWriterProvider.java:102)
at io.confluent.connect.s3.format.KeyValueHeaderRecordWriterProvider$1.commit(KeyValueHeaderRecordWriterProvider.java:126)
at
In our case, we needed to pass the KMS key settings to the S3 connector.
Following the official documentation and the setup described above, we added the following two settings to the S3 connector configuration:
"s3.ssea.name": "aws:kms",
"s3.sse.kms.key.id": "<kms-key-id-here>"
With these settings in place, data is now arriving in the S3 bucket.