We are trying to use an aggregate function on an RDD, where the RDD has the form RDD[(String, Int, String)].
Given the list of elements:
List(("Amit1",45,"M"),("Amit2",43,"M"),("Amit3",28,"F"),("Amit4",33,"F"),("Amit5",18,"F"))
Steps:
val data = sc.parallelize(List(("Amit1",45,"M"),("Amit2",43,"M"),("Amit3",28,"F"),("Amit4",33,"F"),("Amit5",18,"F")))
val dataMap = data.map(line => (line._1 , (line._2,line._3)))
dataMap.collect().foreach(println)
The output is:
(Amit1,(45,M))
(Amit2,(43,M))
(Amit3,(28,F))
(Amit4,(33,F))
(Amit5,(18,F))
Now, what we tried is aggregateByKey, since we need to group the students and then compute the sum of their ages along with the count of students per key. Our first attempts at aggregateByKey did not work.
We found the following approach:
dataMap.aggregateByKey((0, 0))(
  // seqOp: within a partition, add each age to the running sum and increment the count
  (acc, value) => (acc._1 + value._1, acc._2 + 1),
  // combOp: merge the per-partition (sum, count) accumulators
  (acc1, acc2) => (acc1._1 + acc2._1, acc1._2 + acc2._2)
).collect()
The output is (ordering may vary across runs; since every name in the sample list is unique, each count is 1):
res27: Array[(String, (Int, Int))] = Array((Amit1,(45,1)), (Amit2,(43,1)), (Amit3,(28,1)), (Amit4,(33,1)), (Amit5,(18,1)))
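If the end goal is the average age per key, the (sum, count) pairs can be converted with mapValues before collecting. A minimal sketch under that assumption (avgByKey is an illustrative name, not from the original):

// Hypothetical follow-up: derive the average age from each (sum, count) pair.
val avgByKey = dataMap
  .aggregateByKey((0, 0))(
    (acc, value) => (acc._1 + value._1, acc._2 + 1),
    (acc1, acc2) => (acc1._1 + acc2._1, acc1._2 + acc2._2)
  )
  .mapValues { case (sum, count) => sum.toDouble / count }
avgByKey.collect().foreach(println)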
Thanks
This can be done without creating the dataMap at all:
val data = sc.parallelize(List(("Amit1",45,"M"),("Amit2",43,"M"),("Amit3",28,"F"),("Amit4",33,"F"),("Amit5",18,"F")))
println(data.count())
println(data.map(_._2).sum())
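As a side note, count() and sum() above each trigger a separate job over the RDD. If both figures are needed, a single pass with stats(), which returns a StatCounter exposing count, sum, mean and more, avoids the second scan. A small sketch (ageStats is an illustrative name):

// One pass over the ages; StatCounter also exposes mean, max, min, etc.
val ageStats = data.map(_._2).stats()
println(ageStats.count)  // 5
println(ageStats.sum)    // 167.0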
If a later step does need the dataMap, use the following code:
val dataMap = data.map(line => (line._1 , (line._2,line._3)))
dataMap.collect().foreach(println)
println(dataMap.count())
println(dataMap.map(_._2._1).sum())
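Finally, if "count of similar students" is meant per gender rather than per name, the same sum-and-count pattern works with the gender column as the key. A sketch of that variant using reduceByKey (byGender is an illustrative name, and grouping by gender is an assumption about the intent):

// Key by gender, pair each age with a count of 1, then merge the pairs per key.
val byGender = data
  .map { case (_, age, gender) => (gender, (age, 1)) }
  .reduceByKey { case ((sum1, cnt1), (sum2, cnt2)) => (sum1 + sum2, cnt1 + cnt2) }
byGender.collect().foreach(println)
// (M,(88,2))
// (F,(79,3))  -- order may vary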