Pyspark pivot with multiple aggregations



I have:

d = [(100,1,23,10),(100,2,45,11),(100,3,67,12),(100,4,78,13),(101,1,23,10),(101,2,45,13),(101,3,67,14),(101,4,78,15),(102,1,23,10),(102,2,45,11),(102,3,67,16),(102,4,78,18)]
mydf = spark.createDataFrame(d,['id','day','price','units'])
mydf.show()
+---+---+-----+-----+
| id|day|price|units|
+---+---+-----+-----+
|100|  1|   23|   10|
|100|  2|   45|   11|
|100|  3|   67|   12|
|100|  4|   78|   13|
|101|  1|   23|   10|
|101|  2|   45|   13|
|101|  3|   67|   14|
|101|  4|   78|   15|
|102|  1|   23|   10|
|102|  2|   45|   11|
|102|  3|   67|   16|
|102|  4|   78|   18|
+---+---+-----+-----+

I want:

+---+-----+----+----+----+----+
| id|  ?  |   1|   2|   3|   4|
+---+-----+----+----+----+----+
|100|units|10.0|11.0|12.0|13.0|
|101|units|10.0|13.0|14.0|15.0|
|102|units|10.0|11.0|16.0|18.0|
|100|price|23.0|45.0|67.0|78.0|
|101|price|23.0|45.0|67.0|78.0|
|102|price|23.0|45.0|67.0|78.0|
+---+-----+----+----+----+----+

where each value is the mean of price or units for that id and day.

I can do two pivots, one aggregating price and one aggregating units, like:

from pyspark.sql import functions as F

mydf.groupby('id').pivot('day').agg(F.mean('units')).show()
mydf.groupby('id').pivot('day').agg(F.mean('price')).show()

and then combine the results.
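For completeness, that route might look something like the sketch below. The literal name column (filled with 'price'/'units') is one possible choice for the ? column in the target, and since the target layout stacks the two results, the labelled pivots are combined with unionByName rather than a wide join:

# Sketch of the two-pivot route: label each pivoted frame, then stack them.
# The 'name' column is an assumed stand-in for the '?' column in the target layout.
units_piv = mydf.groupby('id').pivot('day').agg(F.mean('units')).withColumn('name', F.lit('units'))
price_piv = mydf.groupby('id').pivot('day').agg(F.mean('price')).withColumn('name', F.lit('price'))
units_piv.unionByName(price_piv).select('id', 'name', '1', '2', '3', '4').show()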

But I feel like there must be a smarter way to do this. Is there?

Here's one way to do it (basically melt the DF, then pivot):

# First combine price and units into a map column
mydf = mydf.withColumn("price_units", F.create_map(F.lit("price"), "price", F.lit("units"), "units"))
# Now explode to get a melted dataframe
mydf = mydf.select("id", "day", F.explode("price_units").alias("name", "value"))
mydf.show()
+---+---+-----+-----+
| id|day| name|value|
+---+---+-----+-----+
|100|  1|price|   23|
|100|  1|units|   10|
|100|  2|price|   45|
|100|  2|units|   11|
|100|  3|price|   67|
etc
# Then pivot
mydf.groupby("id", "name").pivot("day").agg(F.mean("value")).show()
+---+-----+----+----+----+----+
| id| name|   1|   2|   3|   4|
+---+-----+----+----+----+----+
|100|price|23.0|45.0|67.0|78.0|
|101|price|23.0|45.0|67.0|78.0|
|102|units|10.0|11.0|16.0|18.0|
|100|units|10.0|11.0|12.0|13.0|
|101|units|10.0|13.0|14.0|15.0|
|102|price|23.0|45.0|67.0|78.0|
+---+-----+----+----+----+----+
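If you prefer a single expression, the melt and pivot steps above can also be chained in one go. This is just the same approach rewritten, assuming F is pyspark.sql.functions and mydf still holds the original four-column data (the snippets above reassign it):

# One chained version of the melt-then-pivot approach above.
result = (mydf
          .withColumn("price_units", F.create_map(F.lit("price"), "price", F.lit("units"), "units"))
          .select("id", "day", F.explode("price_units").alias("name", "value"))
          .groupby("id", "name")
          .pivot("day")
          .agg(F.mean("value")))
result.show()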
