When using the Confluent S3 Sink Connector, the following error occurs:
[2021-08-08 02:25:15,588] ERROR WorkerSinkTask{id=s3-test-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. Error: com/google/common/base/Preconditions (org.apache.kafka.connect.runtime.WorkerSinkTask:607)
java.lang.NoClassDefFoundError: com/google/common/base/Preconditions
at org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:379)
at org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:392)
at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:474)
at org.apache.parquet.hadoop.ParquetWriter$Builder.<init>(ParquetWriter.java:345)
at org.apache.parquet.avro.AvroParquetWriter$Builder.<init>(AvroParquetWriter.java:162)
at org.apache.parquet.avro.AvroParquetWriter$Builder.<init>(AvroParquetWriter.java:153)
at org.apache.parquet.avro.AvroParquetWriter.builder(AvroParquetWriter.java:43)
at io.confluent.connect.s3.format.parquet.ParquetRecordWriterProvider$1.write(ParquetRecordWriterProvider.java:79)
at io.confluent.connect.s3.format.KeyValueHeaderRecordWriterProvider$1.write(KeyValueHeaderRecordWriterProvider.java:105)
at io.confluent.connect.s3.TopicPartitionWriter.writeRecord(TopicPartitionWriter.java:532)
at io.confluent.connect.s3.TopicPartitionWriter.checkRotationOrAppend(TopicPartitionWriter.java:302)
at io.confluent.connect.s3.TopicPartitionWriter.executeState(TopicPartitionWriter.java:245)
at io.confluent.connect.s3.TopicPartitionWriter.write(TopicPartitionWriter.java:196)
at io.confluent.connect.s3.S3SinkTask.put(S3SinkTask.java:234)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:581)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:329)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:232)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:182)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:231)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
This happens with connector versions 5.5, 10.0.0, and 10.0.1.
It only occurs with the Parquet format; Avro works fine.
The logs show that the partitioner and the source data format are working correctly:
[2021-08-08 02:25:15,564] INFO Opening record writer for: xxxxx/xxxxx.xxxxx.users/year=2021/month=08/day=07/xxxxx.xxxxx.tablename+0+0000000000.snappy.parquet (io.confluent.connect.s3.format.parquet.ParquetRecordWriterProvider:74)
The connector was downloaded manually from the Confluent website.
It turns out that hadoop-common needs Google's guava utilities, which were somehow missing from the distribution. The fix is to look up the matching guava version on the hadoop-common Maven repository page, download that guava.jar manually, and place it in the connector's lib/ folder.
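For reference, this is a minimal sketch of the coordinates to search for on Maven Central; the version below is only a placeholder assumption, so use whichever version the hadoop-common POM actually declares:

<!-- Hypothetical lookup: the guava artifact to download manually. -->
<!-- 27.0-jre is a placeholder; match the version required by hadoop-common. -->
<dependency>
  <groupId>com.google.guava</groupId>
  <artifactId>guava</artifactId>
  <version>27.0-jre</version>
</dependency>

The downloaded guava-<version>.jar then sits alongside the other jars in the connector's lib/ directory and is picked up after restarting the Connect worker.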
It appears there is an entry in the connector's POM that explicitly excludes guava from the hadoop-common dependency, which is what causes the problem:
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>${hadoop.version}</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.avro</groupId>
      <artifactId>avro</artifactId>
    </exclusion>
    <exclusion>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
    </exclusion>
    <exclusion>
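If you rebuild the connector from source rather than patching lib/ by hand, one possible workaround (my assumption, not an official fix) is to drop the guava exclusion so that hadoop-common pulls guava back in transitively:

<!-- Hypothetical patch for a source build: keep the avro exclusion but remove -->
<!-- the guava one so it is packaged with the connector again. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>${hadoop.version}</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.avro</groupId>
      <artifactId>avro</artifactId>
    </exclusion>
  </exclusions>
</dependency>

Either way, the missing class com/google/common/base/Preconditions ends up on the classpath and the Parquet writer can initialize.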
This really should have been caught in testing.