Writing a DataFrame to a Parquet file fails with an empty or nested empty schema



I'm new to both Scala and Spark, so this may be a silly question. I have a DataFrame created from Elasticsearch, and I'm trying to write it to S3 in Parquet format. Below are the code block and the error I'm seeing. Could a good Samaritan please help me fix this?

val dfSchema = dataFrame.schema.json
//      log.info(dfSchema)
dataFrame
.withColumn("lastFound", functions.date_add(dataFrame.col("last_found"), -457))
.write
.partitionBy("lastFound")
.mode("append")
.format("parquet")
.option("schema", dfSchema)
.save("/tmp/elasticsearch/")
org.apache.spark.sql.AnalysisException: 
Datasource does not support writing empty or nested empty schemas.
Please make sure the data schema has at least one or more column(s).
;
at org.apache.spark.sql.execution.datasources.DataSource$.org$apache$spark$sql$execution$datasources$DataSource$$validateSchema(DataSource.scala:733)
at org.apache.spark.sql.execution.datasources.DataSource.planForWriting(DataSource.scala:523)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:290)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:229)

You don't need to provide a schema when writing data in Parquet format; Parquet files are self-describing and store the schema in the file footer.

Append mode assumes there is already data stored at that exact path and you want to add new data to it. If you want to replace the existing data, use "overwrite" instead of "append"; if the path is new, you don't need to specify any mode at all, as shown in the sketch below.
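
A minimal sketch of the mode options (the DataFrame `df` and the output path here are hypothetical):

import org.apache.spark.sql.SaveMode

// Append: adds new files next to whatever already exists at the path.
df.write.mode(SaveMode.Append).parquet("/tmp/elasticsearch/")

// Overwrite: replaces the existing data at the path before writing.
df.write.mode(SaveMode.Overwrite).parquet("/tmp/elasticsearch/")

// New path: the default mode (ErrorIfExists) is fine, no mode(...) call needed.
df.write.parquet("/tmp/elasticsearch/")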

When you write to S3, the path should usually look like "s3://bucket/the_folder".
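
For example (the bucket and folder names here are made up; substitute your own):

dataFrame
  .write
  .mode("append")
  .parquet("s3://my-bucket/elasticsearch-export/")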

Can you try this:

dataFrame
.withColumn("lastFound", functions.date_add(dataFrame.col("last_found"), -457))
.write
.partitionBy("lastFound")
.mode("append")
.parquet("/tmp/elasticsearch/")
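
Since the schema travels inside the Parquet files themselves, you can check that it survived the write by reading the output back (this assumes a SparkSession named `spark` is in scope):

val readBack = spark.read.parquet("/tmp/elasticsearch/")
readBack.printSchema() // schema is recovered from the Parquet footers; no .option("schema", ...) needed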
