pyspark - reading a df row by row to search in another df
I am new to pyspark and need help searching one df against another.
I have df1 with student data, as shown below:

+---------+----------+--------------------+
|studentid|   course |  registration_date |
+---------+----------+--------------------+
|      348|         2|     15-11-2021     |
|      567|         1|     05-11-2021     |
|      595|         3|     15-10-2021     |
|      580|         2|     06-11-2021     |
|      448|         4|     15-09-2021     |
+---------+----------+--------------------+

df2 has information about the registration periods, as shown below:

+--------+------------+------------+
| period | start_date |  end_date  |
+--------+------------+------------+
|       1| 01-09-2021 | 15-09-2021 |
|       2| 16-09-2021 | 30-09-2021 |
|       3| 01-10-2021 | 15-10-2021 |
|       4| 16-10-2021 | 31-10-2021 |
|       5| 01-11-2021 | 15-11-2021 |
|       6| 16-11-2021 | 30-11-2021 |
+--------+------------+------------+

I need to iterate over df1 row by row, get each student's registration date, and use that date to look up the period information in df2 that satisfies df2.start_date <= df1.registration_date <= df2.end_date.
The result would be a new df, as shown below:

+---------+----------+--------------------+--------+------------+------------+
|studentid|   course |  registration_date | period | start_date |  end_date  |
+---------+----------+--------------------+--------+------------+------------+
|      348|         2|     15-11-2021     |       5| 01-11-2021 | 15-11-2021 |
|      567|         1|     05-11-2021     |       5| 01-11-2021 | 15-11-2021 |
|      595|         3|     15-10-2021     |       3| 01-10-2021 | 15-10-2021 |
|      580|         2|     06-11-2021     |       5| 01-11-2021 | 15-11-2021 |
|      448|         4|     15-09-2021     |       1| 01-09-2021 | 15-09-2021 |
+---------+----------+--------------------+--------+------------+------------+

You don't need to iterate row by row: you can specify the join condition as a complex (range) condition and let Spark match each registration date to its period.

Working example:

from datetime import datetime

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Student data (df1 in the question); the parsed timestamps are
# truncated to dates with to_date.
df = spark.createDataFrame([
    (348, 2, datetime.strptime("15-11-2021", "%d-%m-%Y")),
    (567, 1, datetime.strptime("05-11-2021", "%d-%m-%Y")),
    (595, 3, datetime.strptime("15-10-2021", "%d-%m-%Y")),
    (580, 2, datetime.strptime("06-11-2021", "%d-%m-%Y")),
    (448, 4, datetime.strptime("15-09-2021", "%d-%m-%Y")),
], ("studentid", "course", "registration_date"))
df = df.withColumn("registration_date", F.to_date(F.col("registration_date")))

# Registration periods (df2).
df2 = spark.createDataFrame([
    (1, datetime.strptime("01-09-2021", "%d-%m-%Y"), datetime.strptime("15-09-2021", "%d-%m-%Y")),
    (2, datetime.strptime("16-09-2021", "%d-%m-%Y"), datetime.strptime("30-09-2021", "%d-%m-%Y")),
    (3, datetime.strptime("01-10-2021", "%d-%m-%Y"), datetime.strptime("15-10-2021", "%d-%m-%Y")),
    (4, datetime.strptime("16-10-2021", "%d-%m-%Y"), datetime.strptime("31-10-2021", "%d-%m-%Y")),
    (5, datetime.strptime("01-11-2021", "%d-%m-%Y"), datetime.strptime("15-11-2021", "%d-%m-%Y")),
    (6, datetime.strptime("16-11-2021", "%d-%m-%Y"), datetime.strptime("30-11-2021", "%d-%m-%Y")),
], ("period", "start_date", "end_date"))
df2 = df2.withColumn("start_date", F.to_date(F.col("start_date"))) \
         .withColumn("end_date", F.to_date(F.col("end_date")))

# Range join: each registration_date falls inside exactly one period.
df.join(
    df2,
    (df2["start_date"] <= df["registration_date"]) & (df["registration_date"] <= df2["end_date"]),
).show()

Output:

+---------+------+-----------------+------+----------+----------+
|studentid|course|registration_date|period|start_date|  end_date|
+---------+------+-----------------+------+----------+----------+
|      348|     2|       2021-11-15|     5|2021-11-01|2021-11-15|
|      567|     1|       2021-11-05|     5|2021-11-01|2021-11-15|
|      595|     3|       2021-10-15|     3|2021-10-01|2021-10-15|
|      448|     4|       2021-09-15|     1|2021-09-01|2021-09-15|
|      580|     2|       2021-11-06|     5|2021-11-01|2021-11-15|
+---------+------+-----------------+------+----------+----------+
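
If a registration date could fall outside every period in df2, the inner join above silently drops that student. Here is a minimal sketch of a variant, reusing the df and df2 built above: broadcast the small periods table and switch to a left join with Column.between, so unmatched students are kept with null period columns. The broadcast hint and the "left" join type are my additions, not part of the original answer.

from pyspark.sql import functions as F

# df2 is tiny, so broadcasting it avoids shuffling the student data;
# the left join keeps students whose registration_date matches no period
# (their period/start_date/end_date columns come back as null).
df.join(
    F.broadcast(df2),
    df["registration_date"].between(df2["start_date"], df2["end_date"]),
    "left",
).show()

Note that between is inclusive on both ends, so it expresses the same start_date <= registration_date <= end_date condition as the explicit comparison above.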
