Pivot a multi-level Spark Dataset



I have a Dataset in Spark with the following schema:

root
 |-- from: struct (nullable = false)
 |    |-- id: string (nullable = true)
 |    |-- name: string (nullable = true)
 |    |-- tags: string (nullable = true)
 |-- v1: struct (nullable = false)
 |    |-- id: string (nullable = true)
 |    |-- name: string (nullable = true)
 |    |-- tags: string (nullable = true)
 |-- v2: struct (nullable = false)
 |    |-- id: string (nullable = true)
 |    |-- name: string (nullable = true)
 |    |-- tags: string (nullable = true)
 |-- v3: struct (nullable = false)
 |    |-- id: string (nullable = true)
 |    |-- name: string (nullable = true)
 |    |-- tags: string (nullable = true)
 |-- to: struct (nullable = false)
 |    |-- id: string (nullable = true)
 |    |-- name: string (nullable = true)
 |    |-- tags: string (nullable = true)

How can I build a table with only three columns (id, name, tags) from this Dataset in Scala?

Just combine all the columns into a single array, explode it, and then select all the nested fields:

import org.apache.spark.sql.functions.{array, col, explode}
import spark.implicits._   // required for toDF on a local Seq

case class Vertex(id: String, name: String, tags: String)

val df = Seq((
  Vertex("1", "from", "a"), Vertex("2", "V1", "b"), Vertex("3", "V2", "c"),
  Vertex("4", "v3", "d"), Vertex("5", "to", "e")
)).toDF("from", "v1", "v2", "v3", "to")

// Wrap every struct column in an array, explode it into one row per struct,
// then flatten each struct into its fields.
df.select(explode(array(df.columns map col: _*)).alias("col")).select("col.*")

The result looks like this:

+---+----+----+
| id|name|tags|
+---+----+----+
|  1|from|   a|
|  2|  V1|   b|
|  3|  V2|   c|
|  4|  v3|   d|
|  5|  to|   e|
+---+----+----+
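
If you prefer to stay in the typed Dataset API, here is a minimal sketch under the same assumptions (the data is loaded as a Dataset of five-Vertex tuples via toDS, and spark.implicits._ is in scope): flatMap each row into its vertices to get a Dataset[Vertex] with the same three columns.

import org.apache.spark.sql.Dataset

// Typed variant: each row is a tuple of five Vertex values.
val ds = Seq((
  Vertex("1", "from", "a"), Vertex("2", "V1", "b"), Vertex("3", "V2", "c"),
  Vertex("4", "v3", "d"), Vertex("5", "to", "e")
)).toDS()

// Flatten every tuple into one Vertex per row.
val vertices: Dataset[Vertex] = ds.flatMap {
  case (from, v1, v2, v3, to) => Seq(from, v1, v2, v3, to)
}

vertices.show()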
