PySpark: get the max and min non-zero values of a column



I have a dataframe like the following:

+-------+--------------------+--------------------+--------------+---------+----------+
|  label|              app_id|               title|download_count|entity_id|risk_score|
+-------+--------------------+--------------------+--------------+---------+----------+
|ANDROID|com.aaron.test.ze...|          Aaron Test|             0|      124|       100|
|ANDROID|com.boulderdailyc...|Boulder Daily Cam...|           100|      122|       100|
|ANDROID|com.communitybank...|     Budgeting Tools|             0|      123|       100|
|ANDROID|com.communitybank...|   PB Mobile Banking|           600|      123|       100|
|ANDROID|com.mendocinobeac...|Mendocino Beacon ...|            10|      122|       100|
|ANDROID|com.profitstars.t...|Johnson City Mobi...|           500|      123|       100|
|ANDROID|com.spreedinc.pro...|Oneida Dispatch f...|          1000|      122|       100|
+-------+--------------------+--------------------+--------------+---------+----------+

I want to get the non-zero max and min download_count values, grouped by entity_id. I'm not quite sure how to do this in the aggregation; of course a plain max and min won't work.

from pyspark.sql import functions as F

apps_by_entity = (
    group_by_entity_id(df)
    .agg(F.min(df.download_count), F.max(df.download_count), F.count("entity_id").alias("app_count"))
    .withColumnRenamed("max(download_count)", "download_max")
    .withColumnRenamed("min(download_count)", "download_min")
)

since that gives 0 for the min of entities 123 and 124:

+---------+------------+------------+---------+
|entity_id|download_min|download_max|app_count|
+---------+------------+------------+---------+
|      124|           0|           0|        1|
|      123|           0|         600|        3|
|      122|          10|        1000|        3|
+---------+------------+------------+---------+

The desired output would look like:

+---------+------------+------------+---------+
|entity_id|download_min|download_max|app_count|
+---------+------------+------------+---------+
|      124|           0|           0|        1|
|      123|         500|         600|        3|
|      122|          10|        1000|        3|
+---------+------------+------------+---------+

Is there a way to do this with an aggregation? If not, what would be the best way to get these non-zero values? In the case where max = min = 0, just returning 0 or null would be fine.

I'm not sure if it's possible to exclude zeros while doing the min and max aggregations without losing the count.
One way to achieve your output is to do the (min, max) and count aggregations separately, and then join them back together.

from pyspark.sql import functions as F
from pyspark.sql.functions import col

# Min/max over the non-zero download counts only
min_max_df = (
    df.filter(col("download_count") != 0)
    .groupBy("entity_id")
    .agg(F.min("download_count").alias("download_min"),
         F.max("download_count").alias("download_max"))
    .withColumnRenamed("entity_id", "entity_id_1")
)

# App count over all rows, zeros included
count_df = df.groupBy("entity_id").agg(F.count("download_count").alias("app_count"))

# Left join, then fill entities that had only zero downloads with 0
(count_df
 .join(min_max_df, count_df.entity_id == min_max_df.entity_id_1, "left")
 .drop("entity_id_1")
 .fillna(0, subset=["download_min", "download_max"])
 .show())
+---------+---------+------------+------------+
|entity_id|app_count|download_min|download_max|
+---------+---------+------------+------------+
|      124|        1|           0|           0|
|      123|        3|         500|         600|
|      122|        3|          10|        1000|
+---------+---------+------------+------------+
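
Alternatively, here is a minimal sketch (not part of the answer above, assuming the same df) that does it in a single aggregation: min and max ignore nulls, so mapping zeros to null with F.when excludes them from the min/max without affecting the count.

from pyspark.sql import functions as F

# Zeros become null; min/max skip nulls, so they are excluded from the
# aggregation while every row still contributes to the count.
nonzero = F.when(F.col("download_count") != 0, F.col("download_count"))

apps_by_entity = (
    df.groupBy("entity_id")
    .agg(
        F.min(nonzero).alias("download_min"),
        F.max(nonzero).alias("download_max"),
        F.count("entity_id").alias("app_count"),
    )
    .fillna(0, subset=["download_min", "download_max"])  # entities with only zero downloads
)
apps_by_entity.show()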