Could not find any factory for identifier 'avro-confluent' that implements 'org.apache.flink.table.factories.DeserializationFormatFactory'



I have a Flink job that runs fine locally, but it fails when I try to run it on a cluster. The error occurs while loading data from Kafka via 'connector' = 'kafka'. I am using the Flink Table API with the avro-confluent format to read the data from Kafka.

So basically I create a table that reads data from a Kafka topic:

val inputTableSQL =
  s"""CREATE TABLE input_table (
     |  -- key of the topic
     |  key BYTES NOT NULL,
     |
     |  -- a few columns mapped to the Avro fields of the Kafka value
     |  id STRING
     |
     |) WITH (
     |
     |  'connector' = 'kafka',
     |  'topic' = '${KafkaConfiguration.InputTopicName}',
     |  'scan.startup.mode' = 'latest-offset',
     |
     |  -- UTF-8 string as Kafka keys, using the 'key' table column
     |  'key.format' = 'raw',
     |  'key.fields' = 'key',
     |
     |  'value.format' = 'avro-confluent',
     |  'value.avro-confluent.schema-registry.url' = '${KafkaConfiguration.KafkaConsumerSchemaRegistryUrl}',
     |  'value.fields-include' = 'EXCEPT_KEY'
     |)
     |""".stripMargin
val inputTable = tableEnv.executeSql(inputTableSQL)
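
(For context: tableEnv here is a streaming TableEnvironment. Its exact setup is not shown in the question, so the following is only a minimal sketch of how it could be created with the flink-table-api-scala-bridge dependency from the build.sbt further below:)

import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment

// Sketch only: the real job's environment configuration may differ.
val env = StreamExecutionEnvironment.getExecutionEnvironment
val tableEnv = StreamTableEnvironment.create(env)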

Then I create another table that I will use as the output table:

val outputTableSQL =
  s"""CREATE TABLE custom_avro_output_table (
     |  -- key of the topic
     |  key BYTES NOT NULL,
     |
     |  -- a few columns mapped to the Avro fields of the Kafka value
     |  ID STRING
     |) WITH (
     |
     |  'connector' = 'kafka',
     |  'topic' = '${KafkaConfiguration.OutputTopicName}',
     |  'properties.bootstrap.servers' = '${KafkaConfiguration.KafkaProducerBootstrapServers}',
     |
     |  -- UTF-8 string as Kafka keys, using the 'key' table column
     |  'key.format' = 'raw',
     |  'key.fields' = 'key',
     |
     |  $outputFormatSettings
     |  'value.fields-include' = 'EXCEPT_KEY'
     |)
     |""".stripMargin
val outputTableCreationResult = tableEnv.executeSql(outputTableSQL)

val customInsertSQL =
  """INSERT INTO custom_avro_output_table
    |SELECT key, id
    |  FROM input_table
    | WHERE userAgent LIKE '%ost%'
    |""".stripMargin
val customInsertResult = tableEnv.executeSql(customInsertSQL)

Everything works fine when I run this on my local machine, but when I run it on the cluster it crashes with:

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_282]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_282]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_282]
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355) ~[flink-dist_2.12-1.13.1.jar:1.13.1]
... 13 more
Caused by: org.apache.flink.table.api.ValidationException: Could not find any factory for identifier 'avro-confluent' that implements 'org.apache.flink.table.factories.DeserializationFormatFactory' in the classpath.
Available factory identifiers are:
canal-json
csv
debezium-json
json
maxwell-json
raw
at org.apache.flink.table.factories.FactoryUtil.discoverFactory(FactoryUtil.java:319) ~[flink-table_2.12-1.13.1.jar:1.13.1]
at org.apache.flink.table.factories.FactoryUtil$TableFactoryHelper.discoverOptionalFormatFactory(FactoryUtil.java:751) ~[flink-table_2.12-1.13.1.jar:1.13.1]
at org.apache.flink.table.factories.FactoryUtil$TableFactoryHelper.discoverOptionalDecodingFormat(FactoryUtil.java:649) ~[flink-table_2.12-1.13.1.jar:1.13.1]
at org.apache.flink.table.factories.FactoryUtil$TableFactoryHelper.discoverDecodingFormat(FactoryUtil.java:633) ~[flink-table_2.12-1.13.1.jar:1.13.1]
at org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory.lambda$getValueDecodingFormat$2(KafkaDynamicTableFactory.java:279) ~[?:?]
at java.util.Optional.orElseGet(Optional.java:267) ~[?:1.8.0_282]
at org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory.getValueDecodingFormat(KafkaDynamicTableFactory.java:277) ~[?:?]
at org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory.createDynamicTableSource(KafkaDynamicTableFactory.java:142) ~[?:?]
at org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:134) ~[flink-table_2.12-1.13.1.jar:1.13.1]
at org.apache.flink.table.planner.plan.schema.CatalogSourceTable.createDynamicTableSource(CatalogSourceTable.java:116) ~[flink-table-blink_2.12-1.13.1.jar:1.13.1]
at org.apache.flink.table.planner.plan.schema.CatalogSourceTable.toRel(CatalogSourceTable.java:82) ~[flink-table-blink_2.12-1.13.1.jar:1.13.1]
at org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:3585) ~[flink-table_2.12-1.13.1.jar:1.13.1]

Here is my build.sbt:

val flinkVersion = "1.13.1"
val flinkDependencies = Seq(
  "org.apache.flink" %% "flink-scala" % flinkVersion % Provided,
  "org.apache.flink" %% "flink-streaming-scala" % flinkVersion % Provided,
  "org.apache.flink" %% "flink-connector-kafka" % flinkVersion,
  "org.apache.flink" %% "flink-clients" % flinkVersion % Provided,
  "org.apache.flink" %% "flink-table-api-scala-bridge" % flinkVersion % Provided,
  "org.apache.flink" %% "flink-table-planner-blink" % flinkVersion % Provided,
  "org.apache.flink" % "flink-table-common" % flinkVersion % Provided,
  "org.apache.flink" % "flink-avro-confluent-registry" % flinkVersion,
  "org.apache.flink" % "flink-json" % flinkVersion,
  "com.webtrekk" % "wd.generated" % "2.2.3",
  "com.webtrekk" % "wd.generated.public" % "2.2.0",
  "ch.qos.logback" % "logback-classic" % "1.2.3"
)

A similar question was posted for Flink 1.12 (Could not find any factory for identifier 'kafka' that implements 'org.apache.flink.table.factories.DynamicTableFactory' in the classpath), but the solution provided there does not work in my case.

Flink uses Java's ServiceLoader to find the factory for a given format, see FactoryUtil.discoverFactories. The ServiceLoader works by reading files named META-INF/services/<class-name> (in this case, the fully qualified name of the Factory class), see the ServiceLoader documentation.

So for the Avro format to be found, an entry for it needs to end up in the corresponding services/ file of your job jar. When using the maven-shade-plugin, the ServicesResourceTransformer takes care of this. For SBT, the accepted answer below should work.
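
One way to check which format factories actually ended up on the classpath of the packaged job (and whether 'avro-confluent' is among them) is to let the ServiceLoader list them. This is only a diagnostic sketch, not part of the original job:

import java.util.ServiceLoader
import scala.collection.JavaConverters._
import org.apache.flink.table.factories.Factory

// Print every Factory the ServiceLoader can discover, with its identifier
// ('kafka', 'avro-confluent', 'json', ...). If 'avro-confluent' is missing here,
// its META-INF/services entry did not survive the packaging step.
val discovered = ServiceLoader.load(classOf[Factory]).asScala.toList
discovered.foreach(f => println(s"${f.factoryIdentifier()} -> ${f.getClass.getName}"))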

I was able to solve this issue as follows:

In my build.sbt, the following mergeStrategy was already present:

lazy val mergeStrategy = Seq(
  assembly / assemblyMergeStrategy := {
    case "application.conf" => MergeStrategy.concat
    case "reference.conf" => MergeStrategy.concat
    case m if m.toLowerCase.endsWith("manifest.mf") => MergeStrategy.discard
    case m if m.toLowerCase.matches("meta-inf.*\\.sf$") => MergeStrategy.discard
    case _ => MergeStrategy.first
  }
)

I added the following cases to it, which resolved my exception:

case "META-INF/services/org.apache.flink.table.factories.Factory"  => MergeStrategy.concat
case "META-INF/services/org.apache.flink.table.factories.TableFactory"  => MergeStrategy.concat
