I have an array of type Array[RDD[(String, Set[String])]], where each RDD holds tuples of a key and a value. The key is a String and the value is a Set[String], and I want to merge the Sets that share the same key. I tried to do this in Scala but had no joy. Can you help me?
e.g.
RDD["A",Set("1","2")]
RDD["A",Set("3","4")]
RDD["B",Set("1","2")]
RDD["B",Set("3","4")]
RDD["C",Set("1","2")]
RDD["C",Set("3","4")]
After transformation:
RDD["A",Set("1","2","3","4")]
RDD["B",Set("1","2","3","4")]
RDD["C",Set("1","2","3","4")]
If a single RDD as the output is acceptable (I really can't see any reason to produce many RDDs with only one record each), you can reduce your Array of RDDs into a single RDD and then do a groupByKey:
arr.reduce( _ ++ _ )
.groupByKey
.mapValues(_.flatMap(identity))
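As a side note, since the desired values are Sets rather than Iterables, a reduceByKey with set union gets there directly, and it combines partial results map-side, so it typically shuffles less data than groupByKey. A minimal sketch, assuming the same arr as above:

// Union all RDDs into one, then merge the Sets per key via set union.
val merged = arr.reduce(_ ++ _).reduceByKey(_ ++ _)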
Test:
scala> val x = sc.parallelize( List( ("A", Set(1,2)) ) )
scala> val x2 = sc.parallelize( List( ("A", Set(3,4)) ) )
scala> val arr = Array(x,x2)
arr: Array[org.apache.spark.rdd.RDD[(String, scala.collection.immutable.Set[Int])]] = Array(ParallelCollectionRDD[0] at parallelize at <console>:27, ParallelCollectionRDD[1] at parallelize at <console>:27)
scala> arr.reduce( _ ++ _ ).groupByKey.mapValues(_.flatMap(identity)).foreach(println)
(A,List(1, 2, 3, 4))
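Note that groupByKey yields Iterable values, which is why the output above prints as a List rather than a Set. If you need Set[String] back, one option (my addition, not part of the snippet above) is to append .toSet inside mapValues:

arr.reduce( _ ++ _ )
  .groupByKey
  .mapValues(_.flatMap(identity).toSet)  // flatten the grouped Sets, then rebuild a Set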
@Edit: I find this to be a really bad idea and suggest you reconsider it, but you can get the output you asked for by collecting all the keys from the RDD above and filtering it once per key:
val sub = arr.reduce( _ ++ _ ).groupByKey.mapValues(_.flatMap(identity))
val keys = sub.map(_._1).collect()
val result = for(k <- keys) yield sub.filter(_._1 == k)
result: Array[org.apache.spark.rdd.RDD[(String, Iterable[Int])]]
Each RDD holds a single tuple; I don't really think that is very useful or performant.
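If you do go this route despite that caveat, note that each filter pass recomputes sub from scratch; caching it first (a small tweak on the code above) avoids redoing the union and the shuffle once per key:

sub.cache()  // keep sub materialized so the per-key filters below don't recompute it
val result = for (k <- keys) yield sub.filter(_._1 == k)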