Extracting key-value pairs from input using Scala and Spark



The input in the given file is:

Maths,K1,A1,K2,A2,K3,A4
Physics,L6,M1,L5,M2,L9,M2

Using Spark and Scala, how can I extract the key-value pairs as an RDD, like this:

Maths, K1
Maths, K2
Maths, K3
Physics, L6
Physics, L5
Physics, L9

To create a Spark DataFrame from the data, you can proceed as follows:

// toDF() needs the implicit encoders from an active SparkSession
// (named `spark` here, as in spark-shell)
import spark.implicits._

// If the examples were lists of items
val l1 = List("Maths", "K1", "A1", "K2", "A2", "K3", "A4")
// If they were strings, you can split them into a sequence first
val l2 = "Physics,L6,M1,L5,M2,L9,M2".split(",").toSeq
// toDF() takes a sequence of tuples, which we can now create from our
// list(s) by pairing the head (the subject) with each remaining element
val res = l1.tail.map(l1.head -> _).toDF("Subject", "Code")
  .union(l2.tail.map(l2.head -> _).toDF("Subject", "Code"))
// If the filtering in your example was intentional
res.filter("Code not like 'A%' and Code not like 'M%'").show
+-------+----+
|Subject|Code|
+-------+----+
|  Maths|  K1|
|  Maths|  K2|
|  Maths|  K3|
|Physics|  L6|
|Physics|  L5|
|Physics|  L9|
+-------+----+
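Note that the filter only works here because the unwanted values in the sample happen to start with A and M; if the actual rule is "take the elements at odd positions after the subject", the positional approach in the next answer is more general.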

Assuming we can safely infer the expected result from the two samples in the question, and assuming the input is a sequence of strings, here is one way to do it:

// again assumes `import spark.implicits._` is in scope for toDF()
val s = List("Maths,K1,A1,K2,A2,K3,A4", "Physics,L6,M1,L5,M2,L9,M2")
val df = s.flatMap { x =>
  val t = x.split(",")
  // pair the head (the subject) with the elements at odd positions 1, 3, 5, ...
  (1 until t.size by 2).map(t.head -> t(_))
}.toDF("C1", "C2")

Resulting DataFrame:

+-------+---+
|     C1| C2|
+-------+---+
|  Maths| K1|
|  Maths| K2|
|  Maths| K3|
|Physics| L6|
|Physics| L5|
|Physics| L9|
+-------+---+
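The question asks for an RDD of pairs rather than a DataFrame. Here is a minimal sketch of the same extraction at the RDD level, assuming a SparkContext named sc is in scope (as in spark-shell); the sample data is inlined, but with a real file you would start from sc.textFile(path) instead of sc.parallelize:

val lines = sc.parallelize(Seq(
  "Maths,K1,A1,K2,A2,K3,A4",
  "Physics,L6,M1,L5,M2,L9,M2"
))
val pairs = lines.flatMap { line =>
  val t = line.split(",")
  // pair the subject (the head) with the elements at odd positions
  (1 until t.length by 2).map(i => (t.head, t(i)))
}
pairs.collect().foreach(println)
// (Maths,K1)
// (Maths,K2)
// (Maths,K3)
// (Physics,L6)
// (Physics,L5)
// (Physics,L9)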
