Data:
1,Coke
1,Beans
1,paper
2,Beans
2,Pen
2,Sheets
2,Banana
Expected output:
+---+------------------------+
|  1|Coke,Beans,Paper        |
|  2|Beans,Pen,Sheets,Banana |
+---+------------------------+
I am able to achieve this by writing a SQL query:
val df = sparkSession.read.csv("file_location")
df.registerTempTable("data")
val result = sparkSession
  .sql("select _c0, concat_ws(',', collect_list(_c1)) as product from data group by _c0")
result.show()
Please help me achieve the same result using DataFrame/Dataset functions (select, groupBy, agg, etc.).
This is quite simple; the answer is below, although I hope I am not doing someone's graduate assignment. A DataFrame is analogous to a SQL table, so you can query it with its own methods.
import org.apache.spark.sql.functions._

// Build a sample DataFrame (in spark-shell; in a standalone app, import spark.implicits._ for toDF)
val df = sc.parallelize(List(
  (1, "Coke"),
  (1, "Beans"),
  (1, "paper"),
  (2, "Beans"),
  (2, "Pen"),
  (2, "Sheets"),
  (2, "Banana")
)).toDF("id", "product_name")

// Group by id and concatenate the collected product names into a single string;
// show(false) keeps the long strings from being truncated
df.groupBy("id")
  .agg(concat_ws(",", collect_list("product_name")).as("product_list"))
  .show(false)
The output is:
+---+-----------------------+
|id |product_list           |
+---+-----------------------+
|1  |Coke,Beans,paper       |
|2  |Beans,Pen,Sheets,Banana|
+---+-----------------------+
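
If you read the data straight from CSV as in the question, the same aggregation works on the auto-named _c0/_c1 columns. Here is a minimal sketch, assuming a headerless file at the placeholder path "file_location" and a fresh SparkSession named spark:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{collect_list, concat_ws}

// Minimal sketch: same DataFrame-API aggregation applied to the raw CSV columns.
// "file_location" is a placeholder path, as in the question.
val spark = SparkSession.builder().appName("group-concat").getOrCreate()

val raw = spark.read.csv("file_location")   // headerless CSV -> columns _c0, _c1

val grouped = raw
  .groupBy("_c0")
  .agg(concat_ws(",", collect_list("_c1")).as("product_list"))
  .withColumnRenamed("_c0", "id")

grouped.show(false)                          // false = do not truncate long strings

One thing to keep in mind: collect_list gives no ordering guarantee within a group, so if the order of the products matters you could wrap the collected list in sort_array before passing it to concat_ws.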