Converting an RDD from type org.apache.spark.rdd.RDD[((String, String), Double)] to RDD[(String, List[Double])]



I have an RDD:

  val rdd: org.apache.spark.rdd.RDD[((String, String), Double)] =
    sc.parallelize(List(
      (("a", "b"), 1.0),
      (("a", "c"), 3.0),
      (("a", "d"), 2.0)
      )) 

I am trying to convert this RDD from type org.apache.spark.rdd.RDD[((String, String), Double)] to org.apache.spark.rdd.RDD[(String, List[Double])].

Each key in the resulting RDD should be unique, and its values should be sorted.

So the rdd above would be transformed into:

val newRdd : RDD[(String, List[Double])] = RDD(("a", List(1.0, 2.0, 3.0)))

To get a list of the unique keys I use:

val r2 : org.apache.spark.rdd.RDD[(String, Double)] =  rdd.map(m => (m._1._1 , m._2))

How do I transform this so that each key maps to a sorted list of Doubles?

The whole code:

import org.apache.spark.SparkContext;
object group {
  println("Welcome to the Scala worksheet")       //> Welcome to the Scala worksheet
  val conf = new org.apache.spark.SparkConf()
    .setMaster("local")
    .setAppName("distances")
    .setSparkHome("C:/spark-1.1.0-bin-hadoop2.4/spark-1.1.0-bin-hadoop2.4")
    .set("spark.executor.memory", "1g")           //> conf  : org.apache.spark.SparkConf = org.apache.spark.SparkConf@1bd0dd4
  val sc = new SparkContext(conf)                 //> (Spark startup log output omitted)
  val rdd: org.apache.spark.rdd.RDD[((String, String), Double)] =
    sc.parallelize(List(
      (("a", "b"), 1.0),
      (("a", "c"), 3.0),
      (("a", "d"), 2.0)
      ))                                          //> rdd  : org.apache.spark.rdd.RDD[((String, String), Double)] = ParallelCollec
                                                  //| tionRDD[0] at parallelize at group.scala:15
     val r2 : org.apache.spark.rdd.RDD[(String, Double)] =  rdd.map(m => (m._1._1 , m._2))
                                                  //> r2  : org.apache.spark.rdd.RDD[(String, Double)] = MappedRDD[1] at map at gr
                                                  //| oup.scala:21
     val m1 = r2.collect                          //> (Spark job log output omitted)
     m1.foreach { case (e, i) => println(e + "," + i) }
                                                  //> a,1.0
                                                  //| a,3.0
                                                  //| a,2.0

}
Hi,

With @Imm's solution your values will not be sorted; if they happen to come out sorted, it will only be by chance. To get a sorted list you just need to add:

val r4 = r3.mapValues(_.toList.sorted)

so that r4 is an RDD in which the list of values for each key is sorted.

I hope this is useful.
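For illustration, a minimal sketch of what the sort does (the Iterable(1.0, 3.0, 2.0) value is a hypothetical group as it might come out of groupByKey, which makes no ordering guarantee):

  // A group's values may arrive in any order, e.g.:
  val unsorted: Iterable[Double] = Iterable(1.0, 3.0, 2.0)

  // .toList.sorted materializes them into a List and sorts ascending:
  val sorted: List[Double] = unsorted.toList.sorted   // List(1.0, 2.0, 3.0)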

Use groupByKey:

val r3: RDD[(String, Iterable[Double])] = r2.groupByKey

If you really want the second element to be a List rather than a general Iterable, then you can use mapValues:

val r4 = r3.mapValues(_.toList)

Make sure you have import org.apache.spark.SparkContext._ at the top so that these functions are available.
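Putting the two answers together with the data from the question, here is a minimal end-to-end sketch (it assumes a SparkContext named sc is already available, as in the worksheet above):

  import org.apache.spark.SparkContext._   // brings groupByKey/mapValues into scope for pair RDDs

  val rdd: org.apache.spark.rdd.RDD[((String, String), Double)] =
    sc.parallelize(List(
      (("a", "b"), 1.0),
      (("a", "c"), 3.0),
      (("a", "d"), 2.0)))

  // Keep only the first part of the key, group the doubles per key,
  // then materialize and sort each group.
  val newRdd: org.apache.spark.rdd.RDD[(String, List[Double])] =
    rdd.map { case ((k, _), v) => (k, v) }
       .groupByKey()
       .mapValues(_.toList.sorted)

  newRdd.collect().foreach(println)        // (a,List(1.0, 2.0, 3.0))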
