Spark AnalysisException when "flattening" a DataFrame in Spark SQL

I'm flattening a DataFrame in Spark SQL using the approach given here. This is my code:

package com.acme.etl.xml
import org.apache.spark.sql.types._ 
import org.apache.spark.sql.{Column, SparkSession}
object RuntimeError {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("FlattenSchema").getOrCreate()
    val rowTag = "idocData"
    val dataFrameReader =
        spark.read
          .option("rowTag", rowTag)
    val xmlUri = "bad_011_1.xml"
    val df =
        dataFrameReader
          .format("xml")
          .load(xmlUri)
    val schema: StructType = df.schema
    val columns: Array[Column] = flattenSchema(schema)
    val df2 = df.select(columns: _*)
  }
  def flattenSchema(schema: StructType, prefix: String = null) : Array[Column] = {
    schema.fields.flatMap(f => {
      val colName: String = if (prefix == null) f.name else prefix + "." + f.name
      val dataType = f.dataType
      dataType match {
        case st: StructType => flattenSchema(st, colName)
        case _: StringType => Array(new org.apache.spark.sql.Column(colName))
        case _: LongType => Array(new org.apache.spark.sql.Column(colName))
        case _: DoubleType => Array(new org.apache.spark.sql.Column(colName))
        case arrayType: ArrayType => arrayType.elementType match {
          case structType: StructType => flattenSchema(structType, colName)
        }
        case _ => Array(new org.apache.spark.sql.Column(colName))
      }
    })
  }
}

Most of the time this works fine. But for the XML given below:

<Receive xmlns="http://Microsoft.LobServices.Sap/2007/03/Idoc/3/ORDERS05/ZORDERS5/702/Receive">
    <idocData>
        <E2EDP01008GRP xmlns="http://Microsoft.LobServices.Sap/2007/03/Types/Idoc/3/ORDERS05/ZORDERS5/702">
            <E2EDPT1001GRP>
                <E2EDPT2001>
                    <DATAHEADERCOLUMN_DOCNUM>0000000141036013</DATAHEADERCOLUMN_DOCNUM>
                </E2EDPT2001>
                <E2EDPT2001>
                    <DATAHEADERCOLUMN_DOCNUM>0000000141036013</DATAHEADERCOLUMN_DOCNUM>
                </E2EDPT2001>
            </E2EDPT1001GRP>
        </E2EDP01008GRP>
        <E2EDP01008GRP xmlns="http://Microsoft.LobServices.Sap/2007/03/Types/Idoc/3/ORDERS05/ZORDERS5/702">
        </E2EDP01008GRP>
    </idocData>
</Receive>

the following exception occurs:

Exception in thread "main" org.apache.spark.sql.AnalysisException: cannot resolve '`E2EDP01008GRP`.`E2EDPT1001GRP`.`E2EDPT2001`['DATAHEADERCOLUMN_DOCNUM']' due to data type mismatch: argument 2 requires integral type, however, ''DATAHEADERCOLUMN_DOCNUM'' is of string type.;;
'Project [E2EDP01008GRP#0.E2EDPT1001GRP.E2EDPT2001[DATAHEADERCOLUMN_DOCNUM] AS DATAHEADERCOLUMN_DOCNUM#3, E2EDP01008GRP#0._VALUE AS _VALUE#4, E2EDP01008GRP#0._xmlns AS _xmlns#5]
+- Relation[E2EDP01008GRP#0] XmlRelation(<function0>,Some(/Users/paulreiners/s3/cdi-events-partition-staging/content_acme_purchase_order_json_v1/bad_011_1.xml),Map(rowtag -> idocData, path -> /Users/paulreiners/s3/cdi-events-partition-staging/content_acme_purchase_order_json_v1/bad_011_1.xml),null)

What is causing this?

Your document contains a multi-valued array, so you can't fully flatten it in one pass, since you can't give the same column name to two elements of the array. Also, using a dot inside a column name is generally a bad idea, since it easily confuses the Spark parser and has to be escaped everywhere.
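
As a quick illustration of that escaping problem (a minimal sketch on a made-up DataFrame, not part of the original code):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().appName("DotEscaping").master("local[*]").getOrCreate()

// A one-row DataFrame whose single column is literally named "a.b".
val df = spark.range(1).withColumnRenamed("id", "a.b")

// df.select(col("a.b"))         // AnalysisException: parsed as field "b" of a struct "a"
df.select(col("`a.b`")).show()   // backticks escape the dot, so the column resolves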

The usual way to flatten such a dataset is to create a new row for each element of the array. You can do that with the explode function, but the flattening then has to be applied recursively, because explode calls cannot be nested, as the sketch below shows.
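
For example (a minimal sketch on a made-up two-level array, not part of the original code):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, explode}

val spark = SparkSession.builder().appName("ExplodeDemo").master("local[*]").getOrCreate()

// A one-row DataFrame: "groups" is an array of structs, each holding an array "items".
val df = spark.sql(
  "SELECT array(named_struct('items', array(1, 2)), named_struct('items', array(3))) AS groups")

// One explode per select, peeling off one nesting level at a time:
val step1 = df.select(explode(col("groups")).as("group"))
val step2 = step1.select(explode(col("group.items")).as("item"))
step2.show()  // three rows: 1, 2, 3

// Nesting the generators directly is rejected by the analyzer:
// df.select(explode(explode(col("groups")).getField("items")))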

The following code works as expected, using "_" instead of "." as the column-name separator:

import org.apache.spark.sql.types._
import org.apache.spark.sql.{Column, SparkSession}
import org.apache.spark.sql.{Dataset, Row}

object RuntimeError {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("FlattenSchema").getOrCreate()
    val rowTag = "idocData"
    val dataFrameReader = spark.read.option("rowTag", rowTag)
    val xmlUri = "bad_011_1.xml"
    val df = dataFrameReader.format("xml").load(xmlUri)
    val df2 = flatten(df)
  }

  def flatten(df: Dataset[Row], prefixSeparator: String = "_"): Dataset[Row] = {
    import org.apache.spark.sql.functions.{col, explode}

    // True while the schema still contains a nested struct or array.
    def mustFlatten(sc: StructType): Boolean =
      sc.fields.exists(f => f.dataType.isInstanceOf[ArrayType] || f.dataType.isInstanceOf[StructType])

    // Builds the column list for one flattening pass: structs are recursed into,
    // and at most one array is exploded per pass (explode calls cannot be nested);
    // any further arrays are passed through unchanged for a later pass.
    def flattenAndExplodeOne(sc: StructType, parent: Column = null, prefix: String = null,
                             cols: Array[(DataType, Column)] = Array[(DataType, Column)]()): Array[(DataType, Column)] = {
      sc.fields.foldLeft(cols) { (columns, f) =>
        val myCol = if (parent == null) col(f.name) else parent.getItem(f.name)
        val flatName = if (prefix == null) f.name else s"${prefix}${prefixSeparator}${f.name}"
        f.dataType match {
          case st: StructType => flattenAndExplodeOne(st, myCol, flatName, columns)
          case dt: ArrayType =>
            if (columns.exists(_._1.isInstanceOf[ArrayType])) {
              // This pass already explodes an array; keep this one for the next pass.
              columns :+ ((dt, myCol.as(flatName)))
            } else {
              columns :+ ((dt, explode(myCol).as(flatName)))
            }
          case dt => columns :+ ((dt, myCol.as(flatName)))
        }
      }
    }

    // Repeat single-explode passes until nothing nested remains.
    var flatDf = df
    while (mustFlatten(flatDf.schema)) {
      val newColumns = flattenAndExplodeOne(flatDf.schema, null, null).map(_._2)
      flatDf = flatDf.select(newColumns: _*)
    }
    flatDf
  }
}

The resulting df2 has the following schema and data:

df2.printSchema
root
 |-- E2EDP01008GRP_E2EDPT1001GRP_E2EDPT2001_DATAHEADERCOLUMN_DOCNUM: long (nullable = true)
 |-- E2EDP01008GRP__xmlns: string (nullable = true)

df2.show(true)
+--------------------------------------------------------------+--------------------+
|E2EDP01008GRP_E2EDPT1001GRP_E2EDPT2001_DATAHEADERCOLUMN_DOCNUM|E2EDP01008GRP__xmlns|
+--------------------------------------------------------------+--------------------+
|                                                     141036013|http://Microsoft....|
|                                                     141036013|http://Microsoft....|
+--------------------------------------------------------------+--------------------+
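
Note that the second, empty E2EDP01008GRP element of the input XML produces no output row: explode drops rows whose array is null or empty. If such rows should be kept (with nulls in the exploded columns), explode_outer could be substituted in flattenAndExplodeOne above, along these lines:

import org.apache.spark.sql.functions.explode_outer

// Drop-in replacement for the explode call, keeping a row (with nulls)
// even when the array being exploded is null or empty:
// columns :+ ((dt, explode_outer(myCol).as(flatName)))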
