I have a DataFrame that looks like this:
+---+------+------+---+--------+
| Id|fomrid|values|occ|comments|
+---+------+------+---+--------+
|  1|    x1|  22.0|  1|   text1|
|  1|    x1|  test|  2|   text2|
|  1|    x1|    11|  3|   text3|
|  1|    x2|    21|  0|   text4|
|  2|    p1|     1|  1|   text5|
+---+------+------+---+--------+
How can I transform it into the DataFrame below? Essentially, I want to build a list of values and occ per Id and fomrid.
+---+------+----------------+---------+--------+
| Id|fomrid|     List_values| List_occ|comments|
+---+------+----------------+---------+--------+
|  1|    x1|[22.0, test, 11]|[1, 2, 3]|   text1|
|  1|    x2|            [21]|      [0]|   text4|
|  2|    p1|             [1]|      [1]|   text5|
+---+------+----------------+---------+--------+
You can use collect_list to achieve this.
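To see what the aggregation does before reaching for Spark, here is a plain-Python sketch of the same grouping: collect values and occ into lists per (Id, fomrid) key, and keep the minimum comment per group (the names rows and group_collect are just illustrative, not Spark API):

```python
def group_collect(rows):
    """Group rows by (Id, fomrid), collecting values and occ into
    lists and keeping the lexicographically smallest comment."""
    groups = {}
    for id_, fomrid, value, occ, comment in rows:
        key = (id_, fomrid)
        if key not in groups:
            groups[key] = {"List_values": [], "List_occ": [], "comments": comment}
        g = groups[key]
        g["List_values"].append(value)
        g["List_occ"].append(occ)
        g["comments"] = min(g["comments"], comment)
    return groups

# The sample data from the question, one tuple per row
rows = [
    (1, "x1", "22.0", 1, "text1"),
    (1, "x1", "test", 2, "text2"),
    (1, "x1", "11", 3, "text3"),
    (1, "x2", "21", 0, "text4"),
    (2, "p1", "1", 1, "text5"),
]
```

Note that in Spark, collect_list does not guarantee any particular ordering of the collected elements after a shuffle, whereas this sequential sketch preserves input order.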
Using Spark SQL: create a temporary view and run the query below against your Spark session.
input_df.createOrReplaceTempView("my_temp_table_or_view")
output_df = sparkSession.sql("<insert sql below here>")
SELECT
    Id,
    fomrid,
    collect_list(values) AS List_values,
    collect_list(occ) AS List_occ,
    MIN(comments) AS comments
FROM
    my_temp_table_or_view
GROUP BY
    Id, fomrid
Using the PySpark API
from pyspark.sql import functions as F

output_df = (
    input_df.groupBy(["Id", "fomrid"])
    .agg(
        F.collect_list("values").alias("List_values"),
        F.collect_list("occ").alias("List_occ"),
        F.min("comments").alias("comments"),
    )
)
Using Scala

import org.apache.spark.sql.functions._

val output_df = input_df.groupBy("Id", "fomrid")
  .agg(
    collect_list("values").alias("List_values"),
    collect_list("occ").alias("List_occ"),
    min("comments").alias("comments")
  )
Let me know if this works for you.