Apache Hudi on Dataproc

Is there a guide for deploying Apache Hudi on a Dataproc cluster? I tried to follow the Hudi quick start guide, but I couldn't get it working.

Spark 3.1.1

Python 3.8.13

Debian 5.10.127 x86_64

Launch command:

pyspark --jars gs://bucket/artifacts/hudi-spark3.1.x_2.12-0.11.1.jar,gs://bucket/artifacts/spark-avro_2.12-3.1.3.jar \
  --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
  --conf 'spark.sql.catalog.spark_catalog=org.apache.spark.sql.hudi.catalog.HoodieCatalog'

Attempt:

dataGen = sc._jvm.org.apache.hudi.QuickstartUtils.DataGenerator()
Error:

Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'JavaPackage' object is not callable
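
This error usually means py4j could not resolve the path to a real class, i.e. the Hudi jar is not actually on the driver classpath. A quick sanity check (just a diagnostic sketch, not part of the quickstart):

pkg = sc._jvm.org.apache.hudi.QuickstartUtils
# py4j resolves a loaded class to JavaClass; an unresolved path falls back to
# JavaPackage, which is why calling it raises "'JavaPackage' object is not callable"
print(type(pkg))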

Edit 1:

pyspark --jars gs://bucket/artifacts/hudi-spark3.1.x_2.12-0.11.1.jar,gs://bucket/artifacts/spark-avro_2.12-3.1.3.jar --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' --conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension'

throws a conf error:

WARN org.apache.spark.sql.SparkSession: Cannot use org.apache.spark.sql.hudi.HoodieSparkSessionExtension to configure session extensions. java.lang.ClassNotFoundException: org.apache.spark.sql.hudi.HoodieSparkSessionExtension

I also got the same error when trying sc._jvm.org.apache.hudi.QuickstartUtils.DataGenerator().
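
A quick way to confirm whether the extension class is visible to the JVM at all (again only a diagnostic sketch):

# Raises java.lang.ClassNotFoundException if the class is not on the classpath
sc._jvm.java.lang.Class.forName("org.apache.spark.sql.hudi.HoodieSparkSessionExtension")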

Edit 2:

I was using the wrong jar… that was the first problem.

Correct pyspark invocation:

pyspark --jars gs://dev-dama-stg-spark/artifacts/hudi-spark3.1-bundle_2.12-0.12.1.jar,gs://dev-dama-stg-spark/artifacts/spark-avro_2.12-3.1.3.jar --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' --conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension'

However, new errors… after creating the table and the hudi options (the quickstart code I ran is sketched below the log):

22/12/01 22:26:04 WARN org.apache.hudi.common.config.DFSPropertiesConfiguration: Cannot find HUDI_CONF_DIR, please set it as the dir of hudi-defaults.conf
22/12/01 22:26:04 WARN org.apache.hudi.common.config.DFSPropertiesConfiguration: Properties file file:/etc/hudi/conf/hudi-defaults.conf not found. Ignoring to load props file
22/12/01 22:26:05 WARN org.apache.hudi.metadata.HoodieBackedTableMetadata: Metadata table was not found at path file:/tmp/hudi_trips_cow/.hoodie/metadata
22/12/01 22:26:07 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 1.0 (TID 2) (... 2): java.io.FileNotFoundException: File file:/tmp/hudi_trips_cow does not exist
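
For context, this is the write that triggers the error, reproduced as a sketch from the quickstart guide (table name and options exactly as in the guide):

tableName = "hudi_trips_cow"
basePath = "file:///tmp/hudi_trips_cow"
dataGen = sc._jvm.org.apache.hudi.QuickstartUtils.DataGenerator()

hudi_options = {
    'hoodie.table.name': tableName,
    'hoodie.datasource.write.recordkey.field': 'uuid',
    'hoodie.datasource.write.partitionpath.field': 'partitionpath',
    'hoodie.datasource.write.table.name': tableName,
    'hoodie.datasource.write.operation': 'upsert',
    'hoodie.datasource.write.precombine.field': 'ts',
    'hoodie.upsert.shuffle.parallelism': 2,
    'hoodie.insert.shuffle.parallelism': 2
}

# Generate sample trips and write them as a Hudi copy-on-write table
inserts = sc._jvm.org.apache.hudi.QuickstartUtils.convertToStringList(dataGen.generateInserts(10))
df = spark.read.json(spark.sparkContext.parallelize(inserts, 2))
df.write.format("hudi").options(**hudi_options).mode("overwrite").save(basePath)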

Any clues…?

I found the solution myself.

First, launch pyspark correctly, including the hudi-spark-bundle and spark-avro jars. In my case I also wanted to include some JDBC jars to connect to my on-premises services:
pyspark --jars gs://bucket/artifacts/hudi-spark3.1-bundle_2.12-0.12.1.jar,\
gs://bucket/artifacts/spark-avro_2.12-3.1.3.jar,\
gs://bucket/artifacts/mssql-jdbc-11.2.1.jre8.jar,\
gs://bucket/artifacts/ngdbc-2.12.9.jar \
  --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
  --conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension'
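
Once the shell is up, a quick check that both the Hudi classes and the session extension are in place (sketch):

# Both should now succeed instead of returning JavaPackage / throwing ClassNotFoundException
print(sc._jvm.org.apache.hudi.QuickstartUtils.DataGenerator())
print(spark.conf.get("spark.sql.extensions"))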

Following the hudi quick start guide, the only change needed is the base path, from:

basePath = "file:///tmp/hudi_trips_cow"

to:

basePath = "gs://bucket/tmp/hudi_trips_cow"

With this configuration I can run hudi correctly in Dataproc. The quickstart's file:///tmp path is local to whichever node writes it, so on a multi-node cluster the executors cannot see the driver's local files (hence the FileNotFoundException above); a GCS path is visible to every node.
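
To double-check the write, the quickstart read works unchanged against the GCS path (sketch; the bucket name is a placeholder):

# Read the table back from GCS and show a few records
tripsDF = spark.read.format("hudi").load("gs://bucket/tmp/hudi_trips_cow")
tripsDF.select("uuid", "partitionpath", "rider", "fare").show(5)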

If I find new information I will post it here, as a short guide.
