>I'm trying to find the best solution for converting an entire Spark DataFrame into a Scala Map collection. It is best illustrated as follows:
Going from this (in the Spark examples):
val df = sqlContext.read.json("examples/src/main/resources/people.json")
df.show
+----+-------+
| age| name|
+----+-------+
|null|Michael|
| 30| Andy|
| 19| Justin|
+----+-------+
To a Scala collection (a Map of Maps), represented like this:
val people = Map(
Map("age" -> null, "name" -> "Michael"),
Map("age" -> 30, "name" -> "Andy"),
Map("age" -> 19, "name" -> "Justin")
)
I don't think your question quite makes sense as written: the outermost Map only has values being stuffed into it, but a Map needs key/value pairs at its outermost level too. That being said:
val peopleArray = df.collect.map(r => Map(df.columns.zip(r.toSeq):_*))
will give you:
Array(
Map("age" -> null, "name" -> "Michael"),
Map("age" -> 30, "name" -> "Andy"),
Map("age" -> 19, "name" -> "Justin")
)
At that point you can do:
val people = Map(peopleArray.map(p => (p.getOrElse("name", null), p)):_*)
which will give you:
Map(
("Michael" -> Map("age" -> null, "name" -> "Michael")),
("Andy" -> Map("age" -> 30, "name" -> "Andy")),
("Justin" -> Map("age" -> 19, "name" -> "Justin"))
)
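One caveat, since Map keys must be unique: if two people share a name, the later row silently overwrites the earlier one. If duplicates are possible, grouping keeps every row; a hedged sketch (peopleByName is an illustrative name):
val peopleByName = peopleArray.groupBy(p => p.getOrElse("name", null))
This yields a Map from each name to an Array of all matching row-Maps instead of just the last one.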
I'm guessing this is really what you want. If you want to key them on an arbitrary Long index instead, you can do:
val indexedPeople = Map(peopleArray.zipWithIndex.map(r => (r._2, r._1)):_*)
which gives you:
Map(
(0 -> Map("age" -> null, "name" -> "Michael")),
(1 -> Map("age" -> 30, "name" -> "Andy")),
(2 -> Map("age" -> 19, "name" -> "Justin"))
)
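The same index-keyed Map can also be spelled with a tuple swap, if you find that clearer; this sketch is equivalent to the zipWithIndex line above:
val indexedPeople2 = peopleArray.zipWithIndex.map(_.swap).toMap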
First, get the schema from the dataframe:
val schemaList = dataframe.schema.map(_.name).zipWithIndex // (columnName, index) pairs from the schema
Then get the RDD from the dataframe and map over it:
dataframe.rdd.map(row =>
  // here rec._1 is the column name and rec._2 its index
  schemaList.map(rec => (rec._1, row(rec._2))).toMap
).collect.foreach(println)
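If you want these per-row Maps assembled into the asker's index-keyed outer Map rather than just printed, you can collect and index on the driver. A minimal sketch under that assumption (rowMaps and indexedMap are illustrative names):
val rowMaps = dataframe.rdd.map(row =>
  schemaList.map(rec => (rec._1, row(rec._2))).toMap
).collect
val indexedMap = rowMaps.zipWithIndex.map(_.swap).toMap // Map[Int, Map[String, Any]]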
If you need the result in a Map rather than an Array of Maps:
val map = df.collect.map(a => a(0) -> a(1)).toMap.asInstanceOf[Map[String, String]]
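Be careful with that asInstanceOf: due to type erasure the cast itself always succeeds, and with people.json the first column actually holds Longs (plus a null), so the mismatch only surfaces later, when an entry is read with the wrong type. If String keys and values are really what you want, converting explicitly is safer; a sketch (stringMap is an illustrative name, and rendering null as the string "null" is my assumption):
val stringMap: Map[String, String] =
  df.collect.map(r => s"${r(0)}" -> s"${r(1)}").toMap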