How to filter and sum values in a PySpark DataFrame using a condition from a column



I have a PySpark DataFrame:

+---+----------+-------+
|Id |Friend_Ids|Points |
+---+----------+-------+
|1  |[2, 5]    | 5     |
|2  |[1, 3, 4] | 6     |
|3  |[2, 4]    | 2     |
|4  |[3, 5, 2] | 12    |
|5  |[1, 4]    | 1     |
+---+----------+-------+
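For reference, a minimal sketch (my assumption about the setup, not part of the original post) that builds this DataFrame:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Sample data copied from the table above.
df = spark.createDataFrame(
    [(1, [2, 5], 5),
     (2, [1, 3, 4], 6),
     (3, [2, 4], 2),
     (4, [3, 5, 2], 12),
     (5, [1, 4], 1)],
    ['Id', 'Friend_Ids', 'Points'])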

I want to get a column with the sum of each Id's friends' points:


+---+----------+-------+-------------------+
|Id |Friend_Ids|Points |Friends_points_sum |
+---+----------+-------+-------------------+
|1  |[2, 5]    | 5     | 7                 |
|2  |[1, 3, 4] | 6     | 19                |
|3  |[2, 4]    | 2     | 18                |
|4  |[2, 3, 5] | 12    | 9                 |
|5  |[1, 4]    | 1     | 17                |
+---+----------+-------+-------------------+

I tried this:

df.withColumn("friends_points_sum",
              df.filter(F.col('Id').isin(F.col('Friend_Ids')))
              .agg(F.sum('Points')).collect()[0][0])

and got TypeError: 'Column' object is not callable.

I also tried something like a UDF:

def sum_agg(array, df):
    id = array[0]
    friend_ids = array[1]
    points = array[2]
    s = df.filter(F.col('Id').isin(friend_ids)).agg(F.sum('Points')).collect()[0][0]
    return s

points_sum = F.udf(sum_agg, IntegerType())
df.withColumn("friends_points_sum", points_sum(F.array('Id', 'Friend_Ids', 'Points'), df))

but it does not accept df as an argument.
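A UDF runs on the executors and cannot take a DataFrame as an argument. One common workaround (a sketch of my own, not from the original post) is to collect the Id → Points pairs into a plain Python dict on the driver and close over it in the UDF, which Spark then ships to the executors:

from pyspark.sql import functions as F
from pyspark.sql.types import IntegerType

# The lookup table is small here, so collecting it to the driver is cheap.
points_map = {row['Id']: row['Points'] for row in df.select('Id', 'Points').collect()}

@F.udf(IntegerType())
def friends_points(friend_ids):
    # Sum the points of every friend found in the lookup table.
    return sum(points_map.get(f, 0) for f in friend_ids)

df.withColumn('Friends_points_sum', friends_points('Friend_Ids')).show()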

You probably want to explode the friend_ids column first, then self-join the DataFrame to look up each friend's points, and finally aggregate:

from pyspark.sql import functions as F

df = (df
      # one row per (id, friend id) pair
      .selectExpr('id', 'points', 'explode(friend_ids) fi')
      # self-join to look up each friend's points
      .join(df.selectExpr('id i', 'points pts'), F.col('fi') == F.col('i'), 'inner')
      .groupby('id', 'points')
      .agg(F.collect_list('fi').alias('friend_ids'),
           F.sum('pts').alias('friend_points_sum')))
df.show()
+---+------+----------+-----------------+
| id|points|friend_ids|friend_points_sum|
+---+------+----------+-----------------+
|  3|     2|    [2, 4]|               18|
|  1|     5|    [5, 2]|                7|
|  5|     1|    [1, 4]|               17|
|  4|    12| [5, 2, 3]|                9|
|  2|     6| [1, 3, 4]|               19|
+---+------+----------+-----------------+
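Note that collect_list rebuilds friend_ids, so the order of ids (and of the rows) may differ from the input, as the output above shows. If you want to keep the original Friend_Ids column untouched, a variant (my sketch, not from the original answer) joins directly on array membership instead of exploding:

result = (df.alias('a')
          .join(df.alias('b'), F.expr('array_contains(a.Friend_Ids, b.Id)'))
          .groupBy('a.Id', 'a.Friend_Ids', 'a.Points')
          .agg(F.sum('b.Points').alias('Friends_points_sum')))
result.show()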
