Spark: Group RDD by id

I have two RDDs. In Spark (Scala), how can I join event1001RDD and event2009RDD on their common id?

val event1001RDD: SchemaRDD = [eventtype,id,location,date1]

[1001,4929102,LOC01,2015-01-20 10:44:39]
[1001,4929103,LOC02,2015-01-20 10:44:39]
[1001,4929104,LOC03,2015-01-20 10:44:39]

val event2009RDD: SchemaRDD = [eventtype,id,date1,date2]

[2009,4929101,2015-01-20 20:44:39,2015-01-20 20:44:39]
[2009,4929102,2015-01-20 15:44:39,2015-01-20 21:44:39]
[2009,4929103,2015-01-20 14:44:39,2015-01-20 14:44:39]
[2009,4929105,2015-01-20 20:44:39,2015-01-20 20:44:39]

The expected result (unique, sorted by id) is:

[eventtype, id, location from 1001, date1 from 1001, date1 from 2009, date2 from 2009]

2009,4929101,NULL,NULL,2015-01-20 20:44:39,2015-01-20 20:44:39
1001,4929102,LOC01,2015-01-20 10:44:39,2015-01-20 15:44:39,2015-01-20 21:44:39
1001,4929103,LOC02,2015-01-20 10:44:39,2015-01-20 14:44:39,2015-01-20 14:44:39
1001,4929104,LOC03,2015-01-20 10:44:39,NULL,NULL
2009,4929105,NULL,NULL,2015-01-20 20:44:39,2015-01-20 20:44:39

Note that for id 4929102, 1001 is used as the event type; the 2009 event type is used only when there is no matching id in 1001.

The result can be a flat RDD[String], or an RDD of tuples (e.g. via aggregateByKey). I just need to be able to iterate over it.

This is a case for a full outer join. Here you go…

from pyspark.sql import Row  # ssc below is an existing SQLContext

d1 = [[1001, 4929102, "LOC01", "2015-01-20 10:44:39"],
      [1001, 4929103, "LOC02", "2015-01-20 10:44:39"],
      [1001, 4929104, "LOC03", "2015-01-20 10:44:39"]]
d2 = [[2009, 4929101, "2015-01-20 20:44:39", "2015-01-20 20:44:39"],
      [2009, 4929102, "2015-01-20 15:44:39", "2015-01-20 21:44:39"],
      [2009, 4929103, "2015-01-20 14:44:39", "2015-01-20 14:44:39"],
      [2009, 4929105, "2015-01-20 20:44:39", "2015-01-20 20:44:39"]]

# Wrap each row in a Row object so createDataFrame can infer the schema.
d1RDD = sc.parallelize(d1).map(lambda t: Row(d1_eventtype=t[0], d1_id=t[1], d1_location=t[2], d1_date1=t[3]))
d2RDD = sc.parallelize(d2).map(lambda t: Row(d2_eventtype=t[0], d2_id=t[1], d2_date1=t[2], d2_date2=t[3]))
d1DF = ssc.createDataFrame(d1RDD)
d2DF = ssc.createDataFrame(d2RDD)
d1DF.printSchema()  # printSchema() prints directly and returns None, so no extra print is needed
d2DF.printSchema()
d1DF.show()
d2DF.show()

# Register both DataFrames as temp tables so they can be joined with SQL.
d1DF.registerTempTable("d1")
d2DF.registerTempTable("d2")

# Full outer join on id; the CASE expressions prefer the 1001 side and fall
# back to the 2009 columns only when d1 has no matching id.
res = ssc.sql("""select case when d1.d1_eventtype is not null then d1.d1_eventtype else d2.d2_eventtype end et,
                        case when d1.d1_id is not null then d1.d1_id else d2.d2_id end id,
                        d1.d1_location loc, d1.d1_date1, d2.d2_date1, d2.d2_date2
                   from d1 full outer join d2 on d1.d1_id = d2.d2_id order by d1.d1_id""")
res.show()
Result:

root
 |-- d1_date1: string (nullable = true)
 |-- d1_eventtype: long (nullable = true)
 |-- d1_id: long (nullable = true)
 |-- d1_location: string (nullable = true)
root
 |-- d2_date1: string (nullable = true)
 |-- d2_date2: string (nullable = true)
 |-- d2_eventtype: long (nullable = true)
 |-- d2_id: long (nullable = true)
d1_date1            d1_eventtype d1_id   d1_location
2015-01-20 10:44:39 1001         4929102 LOC01      
2015-01-20 10:44:39 1001         4929103 LOC02      
2015-01-20 10:44:39 1001         4929104 LOC03      
d2_date1            d2_date2            d2_eventtype d2_id  
2015-01-20 20:44:39 2015-01-20 20:44:39 2009         4929101
2015-01-20 15:44:39 2015-01-20 21:44:39 2009         4929102
2015-01-20 14:44:39 2015-01-20 14:44:39 2009         4929103
2015-01-20 20:44:39 2015-01-20 20:44:39 2009         4929105
et   id      loc   d1_date1            d2_date1            d2_date2           
2009 4929101 null  null                2015-01-20 20:44:39 2015-01-20 20:44:39
2009 4929105 null  null                2015-01-20 20:44:39 2015-01-20 20:44:39
1001 4929102 LOC01 2015-01-20 10:44:39 2015-01-20 15:44:39 2015-01-20 21:44:39
1001 4929103 LOC02 2015-01-20 10:44:39 2015-01-20 14:44:39 2015-01-20 14:44:39
1001 4929104 LOC03 2015-01-20 10:44:39 null                null      
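
Since the question asks for Spark Scala and is happy with a flat RDD[String], the same merge can also be done directly on pair RDDs with keyBy and fullOuterJoin (available since Spark 1.2). This is a minimal sketch, assuming the rows are already parsed into tuples; the variable names and "NULL" placeholders are illustrative:

// (eventtype, id, location, date1)
val event1001 = sc.parallelize(Seq(
  (1001, 4929102L, "LOC01", "2015-01-20 10:44:39"),
  (1001, 4929103L, "LOC02", "2015-01-20 10:44:39"),
  (1001, 4929104L, "LOC03", "2015-01-20 10:44:39")))
// (eventtype, id, date1, date2)
val event2009 = sc.parallelize(Seq(
  (2009, 4929101L, "2015-01-20 20:44:39", "2015-01-20 20:44:39"),
  (2009, 4929102L, "2015-01-20 15:44:39", "2015-01-20 21:44:39"),
  (2009, 4929103L, "2015-01-20 14:44:39", "2015-01-20 14:44:39"),
  (2009, 4929105L, "2015-01-20 20:44:39", "2015-01-20 20:44:39")))

// Key both RDDs by id; fullOuterJoin yields (id, (Option[left], Option[right])).
val joined = event1001.keyBy(_._2).fullOuterJoin(event2009.keyBy(_._2))

// Prefer the 1001 side; use the 2009 eventtype only when 1001 has no match.
val merged = joined.map { case (id, (e1, e2)) =>
  val et   = e1.map(_._1).getOrElse(e2.get._1)  // e2 is defined whenever e1 is not
  val loc  = e1.map(_._3).getOrElse("NULL")
  val d1d1 = e1.map(_._4).getOrElse("NULL")
  val d2d1 = e2.map(_._3).getOrElse("NULL")
  val d2d2 = e2.map(_._4).getOrElse("NULL")
  (id, s"$et,$id,$loc,$d1d1,$d2d1,$d2d2")
}.sortByKey().values  // flat RDD[String], sorted by id

merged.collect().foreach(println)

Because sortByKey sorts on the merged id rather than on d1's id alone, the 2009-only rows (4929101, 4929105) land in their expected positions instead of at the top as in the SQL output above.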

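For completeness: on newer Spark versions (roughly 1.4+), each pair of CASE WHEN expressions collapses into coalesce, and ordering by the coalesced id gives exactly the "sorted by id" ordering the question asked for. A sketch, assuming d1DF and d2DF are Scala DataFrames built with the same schemas as above:

import org.apache.spark.sql.functions.coalesce

val res = d1DF.join(d2DF, d1DF("d1_id") === d2DF("d2_id"), "outer")
  .select(
    coalesce(d1DF("d1_eventtype"), d2DF("d2_eventtype")).as("et"),
    coalesce(d1DF("d1_id"), d2DF("d2_id")).as("id"),
    d1DF("d1_location").as("loc"),
    d1DF("d1_date1"), d2DF("d2_date1"), d2DF("d2_date2"))
  .orderBy("id")  // order by the merged id so 2009-only rows interleave correctly
res.show()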