PySpark - "compressing" multiple rows per customer into a single row, removing the blanks



So I currently have a DataFrame that looks like the following:

+-------------+----------------+---------------+------------------+-----------------+
| customer_id | init_base_date | init_end_date | reinit_base_date | reinit_end_date |
+-------------+----------------+---------------+------------------+-----------------+
| ...         |                |               |                  |                 |
| A           | 2015-07-30     |               |                  |                 |
| A           |                | 2016-07-24    |                  |                 |
| B           | 2015-07-10     |               |                  |                 |
| B           |                | 2015-10-05    |                  |                 |
| B           |                |               | 2016-01-09       |                 |
| B           |                |               |                  | 2016-07-04      |
| C           | 2015-05-13     |               |                  |                 |
| C           |                | 2015-08-09    |                  |                 |
| ...         |                |               |                  |                 |
+-------------+----------------+---------------+------------------+-----------------+

I really need to convert it into:

+-------------+----------------+---------------+------------------+-----------------+
| customer_id | init_base_date | init_end_date | reinit_base_date | reinit_end_date |
+-------------+----------------+---------------+------------------+-----------------+
| ...         |                |               |                  |                 |
| A           | 2015-07-30     | 2016-07-24    |                  |                 |
| B           | 2015-07-10     | 2015-10-05    | 2016-01-09       | 2016-07-04      |
| C           | 2015-05-13     | 2015-08-09    |                  |                 |
| ...         |                |               |                  |                 |
+-------------+----------------+---------------+------------------+-----------------+

I can think of several very tedious ways to do this, but I'm wondering whether there is a quick and efficient way (maybe using windows?). I've only been using PySpark for about a month, so I'm definitely still a novice.
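For reference, here is a minimal sketch that recreates the sample data above (assuming the blank cells are nulls and all columns are strings), so the snippets below can be run as-is:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# sample data from the question, with the blank cells represented as nulls
df = spark.createDataFrame(
    [
        ('A', '2015-07-30', None, None, None),
        ('A', None, '2016-07-24', None, None),
        ('B', '2015-07-10', None, None, None),
        ('B', None, '2015-10-05', None, None),
        ('B', None, None, '2016-01-09', None),
        ('B', None, None, None, '2016-07-04'),
        ('C', '2015-05-13', None, None, None),
        ('C', None, '2015-08-09', None, None),
    ],
    ['customer_id', 'init_base_date', 'init_end_date',
     'reinit_base_date', 'reinit_end_date'],
)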

If the blank cells shown are actually nulls (rather than empty strings), you can use pyspark.sql.functions.first() as the aggregate function in a groupBy. The key is to set first()'s ignorenulls argument to True (it defaults to False).

import pyspark.sql.functions as f

# take the first non-null value of every column (other than customer_id) per customer
cols = [c for c in df.columns if c != 'customer_id']
df.groupBy('customer_id').agg(*[f.first(c, ignorenulls=True).alias(c) for c in cols]).show()
#+-----------+--------------+-------------+----------------+---------------+
#|customer_id|init_base_date|init_end_date|reinit_base_date|reinit_end_date|
#+-----------+--------------+-------------+----------------+---------------+
#|          A|    2015-07-30|   2016-07-24|            null|           null|
#|          B|    2015-07-10|   2015-10-05|      2016-01-09|     2016-07-04|
#|          C|    2015-05-13|   2015-08-09|            null|           null|
#+-----------+--------------+-------------+----------------+---------------+

If those blanks are actually empty strings, you can first replace all empty strings with null and then proceed as above. Afterwards you can (optionally) fill the nulls back in with blanks.

from functools import reduce  # for python3
import pyspark.sql.functions as f

# replace empty strings with null (when() without otherwise() yields null)
cols = [c for c in df.columns if c != 'customer_id']
df = reduce(lambda df, c: df.withColumn(c, f.when(f.col(c) != '', f.col(c))), cols, df)
# now aggregate as above, keeping the first non-null value per customer
df = df.groupBy('customer_id').agg(*[f.first(c, ignorenulls=True).alias(c) for c in cols])
df.na.fill('').show()  # optionally fill the nulls back in with blanks
#+-----------+--------------+-------------+----------------+---------------+
#|customer_id|init_base_date|init_end_date|reinit_base_date|reinit_end_date|
#+-----------+--------------+-------------+----------------+---------------+
#|          A|    2015-07-30|   2016-07-24|                |               |
#|          B|    2015-07-10|   2015-10-05|      2016-01-09|     2016-07-04|
#|          C|    2015-05-13|   2015-08-09|                |               |
#+-----------+--------------+-------------+----------------+---------------+
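Since the question asks about window functions: the same collapse can also be written with a window partitioned by customer_id, although the groupBy above is simpler. A sketch, applied to the original DataFrame and again assuming the blanks are nulls and (as in the sample data) at most one non-null value per column per customer:

from pyspark.sql import Window
import pyspark.sql.functions as f

# with no orderBy, the window frame is the whole partition, so
# first(..., ignorenulls=True) picks the first non-null value for each customer
w = Window.partitionBy('customer_id')
cols = [c for c in df.columns if c != 'customer_id']
df.select(
    'customer_id',
    *[f.first(c, ignorenulls=True).over(w).alias(c) for c in cols]
).dropDuplicates(['customer_id']).show()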
