Select columns containing array types together with plain types, all flattened


I have a DataFrame with these columns. Column1 and COLUMN2 are arrays and the rest of the columns are strings. I need to get the values of both kinds using select.
    +-----------+-------------+--------------------+-----------+--------+
    |Column1    |    COLUMN2  |       NAME         |STATUS     |Sequence|
    +-----------+-------------+--------------------+-----------+--------+
    |      [ABC]|        [ABC]|         BILAL AHMAD|       ID-N|       1|
    |      [ABC]|        [ABC]|        JUNAID Ali  |       ID-N|       1|
    |      [ABC]|        [ABC]|         BILAL ZAFAR|       ID-N|       1|
    |      [ABC]|        [ABC]|              KHALID|       ID-N|       1|
    |      [ABC]|        [ABC]|              KASHIF|       ID-N|       1|
    |      [ABC]|        [ABC]|              SALMAN|       ID-N|       2|
    +-----------+-------------+--------------------+-----------+--------+

I tried df.select($"*") and then the code mentioned below.

import org.apache.spark.sql.functions.col

val seqCols = Seq("NAME", "STATUS", "Sequence")
val allColumnsArr = "LEAD_CO_MNE" +: seqCols
// getItem(0) is applied to every selected column, but it only works for array columns
df.select(allColumnsArr.map(c => col(c).getItem(0)): _*)

Column1 and COLUMN2 are arrays, so by using getItem(0) I get the first element of each array. But this does not work for the string columns. The string columns are also dynamic: sometimes they are as shown above, and other times the string column names are something like "Date", "Amount", "Mode". The expected output is:

+-----------+-------------+--------------------+-----------+--------+
|Column1    |    COLUMN2  |       NAME         |STATUS     |Sequence|
+-----------+-------------+--------------------+-----------+--------+
|      ABC  |        ABC  |         BILAL AHMAD|       ID-N|       1|
|      ABC  |        ABC  |        JUNAID Ali  |       ID-N|       1|
|      ABC  |        ABC  |         BILAL ZAFAR|       ID-N|       1|
|      ABC  |        ABC  |              KHALID|       ID-N|       1|
|      ABC  |        ABC  |              KASHIF|       ID-N|       1|
|      ABC  |        ABC  |              SALMAN|       ID-N|       2|
+-----------+-------------+--------------------+-----------+--------+

The "[]" brackets are removed from Column1 and COLUMN2, so that I end up with Column1 and COLUMN2 as plain strings.

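For reference, a minimal sketch (reusing df and the column names from the example above) of why the blanket getItem(0) fails: it is fine on the array columns, but Spark rejects it on plain string columns at analysis time.

import org.apache.spark.sql.functions.col

// works: Column1 and COLUMN2 are arrays, so getItem(0) yields their first element
df.select(col("Column1").getItem(0), col("COLUMN2").getItem(0)).show(false)

// fails during analysis: NAME is a plain string, so there is no element to extract
// (Spark raises an AnalysisException), which is why getItem(0) cannot be applied blindly
// df.select(col("NAME").getItem(0)).show(false)
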
You can check each column's type and, if it is an array, take the first item:

import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.ArrayType
import spark.implicits._

val df = Seq(
  (Array("ABC"), Array("ABC"), "BILAL AHMAD", "ID-N", 1),
  (Array("ABC"), Array("ABC"), "JUNAID Ali", "ID-N", 1)
).toDF("Column1", "COLUMN2", "NAME", "STATUS", "Sequence")
// take the first element of every array column, keep the other columns as-is
val columnsToSelect = df.schema.map(c =>
  if (c.dataType.isInstanceOf[ArrayType]) col(c.name).getItem(0).alias(c.name) else col(c.name))
df.select(columnsToSelect: _*).show(false)

Output:

+-------+-------+-----------+------+--------+
|Column1|COLUMN2|NAME       |STATUS|Sequence|
+-------+-------+-----------+------+--------+
|ABC    |ABC    |BILAL AHMAD|ID-N  |1       |
|ABC    |ABC    |JUNAID Ali |ID-N  |1       |
+-------+-------+-----------+------+--------+
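
Because the check is on each column's data type rather than its name, the same select works unchanged when the plain columns are dynamic, e.g. named "Date", "Amount", "Mode" as mentioned in the question. A minimal sketch with hypothetical data (df2, flattened and the sample values are invented for illustration):

import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.ArrayType
import spark.implicits._

// hypothetical DataFrame: same array columns, different string column names
val df2 = Seq(
  (Array("ABC"), Array("ABC"), "2020-01-01", "100", "CASH"),
  (Array("ABC"), Array("ABC"), "2020-01-02", "250", "CARD")
).toDF("Column1", "COLUMN2", "Date", "Amount", "Mode")

// same type-based rule: flatten array columns, pass everything else through
val flattened = df2.schema.map(c =>
  if (c.dataType.isInstanceOf[ArrayType]) col(c.name).getItem(0).alias(c.name) else col(c.name))
df2.select(flattened: _*).show(false)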

You can explode the array columns and then select as usual.

import spark.implicits._

val df = spark.sparkContext.parallelize(Seq(
  (Array("ABC"), Array("ABC"), "BILAL AHMAD", "ID-N", "1"),
  (Array("ABC"), Array("ABC"), "JUNAID Ali", "ID-N", "1"),
  (Array("ABC"), Array("ABC"), "BILAL ZAFAR", "ID-N", "1")
)).toDF("Column1", "COLUMN2", "NAME", "STATUS", "Sequence")

Explode the array-type columns; alternatively, you could also just take the first element from each array here (see the sketch after the output below):

import org.apache.spark.sql.functions.{col, explode}

// explode every array-typed column, leave the other columns untouched
val dfNew = df.schema.foldLeft(df) { (acc, field) =>
  field.dataType.typeName match {
    case "array" => acc.withColumn(field.name, explode(col(field.name)))
    case _       => acc
  }
}
dfNew.select("*").show(false)

Output:

+-------+-------+-----------+------+--------+
|Column1|COLUMN2|NAME       |STATUS|Sequence|
+-------+-------+-----------+------+--------+
|ABC    |ABC    |BILAL AHMAD|ID-N  |1       |
|ABC    |ABC    |JUNAID Ali |ID-N  |1       |
|ABC    |ABC    |BILAL ZAFAR|ID-N  |1       |
+-------+-------+-----------+------+--------+
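
Note that explode emits one output row per array element, and exploding two array columns one after another yields their cross product, so it only matches getItem(0) when every array holds a single element, as in this example. If only the first element is wanted, the same foldLeft can use getItem(0) instead of explode; a minimal sketch of that variant (dfFirst is a hypothetical name, df is the DataFrame defined above):

import org.apache.spark.sql.functions.col

// variant of the fold above: take the first array element instead of exploding
val dfFirst = df.schema.foldLeft(df) { (acc, field) =>
  field.dataType.typeName match {
    case "array" => acc.withColumn(field.name, col(field.name).getItem(0))
    case _       => acc
  }
}
dfFirst.select("*").show(false)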
