PySpark subquery / subquery join using DataFrames



I want to join each value to the closest match at or below that value. In SQL I can do this easily. Consider the following data:

tblActuals:

|Date       |Temperature
|09/02/2020 |14.1
|10/02/2020 |15.3
|11/02/2020 |12.2
|12/02/2020 |12.4
|13/02/2020 |12.5
|14/02/2020 |11
|15/02/2020 |14.6

tblCoefficients:

|Metric |Coefficient
|10.5   |0.997825593
|11     |0.997825593
|11.5   |0.997663198
|12     |0.997307614
|12.5   |0.996848773
|13     |0.996468537
|13.5   |0.99638519
|14     |0.996726301
|14.5   |0.997435894
|15     |0.998311153
|15.5   |0.999135509

In SQL, I can achieve the join with:

select
    a.date,
    a.temperature,
    (select top 1 b.Coefficient
     from tblCoefficients b
     where b.Metric <= a.Temperature
     order by b.Metric desc) as coefficient
from tblActuals a

Is there any way to achieve the same thing with two PySpark DataFrames? I can get a similar result in Spark SQL, but I need the flexibility of DataFrames for a process I'm building in Databricks.

You can do a join and then take the coefficient of the largest (i.e. closest) metric below each temperature.
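For reference, here is a minimal setup sketch that builds the two DataFrames from the tables in the question (the SparkSession boilerplate is an assumption, not part of the original answer):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Example data copied from the tables in the question
tblActuals = spark.createDataFrame(
    [('09/02/2020', 14.1), ('10/02/2020', 15.3), ('11/02/2020', 12.2),
     ('12/02/2020', 12.4), ('13/02/2020', 12.5), ('14/02/2020', 11.0),
     ('15/02/2020', 14.6)],
    ['Date', 'Temperature'],
)

tblCoefficients = spark.createDataFrame(
    [(10.5, 0.997825593), (11.0, 0.997825593), (11.5, 0.997663198),
     (12.0, 0.997307614), (12.5, 0.996848773), (13.0, 0.996468537),
     (13.5, 0.99638519), (14.0, 0.996726301), (14.5, 0.997435894),
     (15.0, 0.998311153), (15.5, 0.999135509)],
    ['Metric', 'Coefficient'],
)

The join itself: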

import pyspark.sql.functions as F

result = tblActuals.join(
    tblCoefficients,
    tblActuals['Temperature'] >= tblCoefficients['Metric']
).groupBy(tblActuals.columns).agg(
    # max over a struct orders by the first field (Metric), so this picks
    # the Coefficient belonging to the largest Metric <= Temperature
    F.max(F.struct('Metric', 'Coefficient'))['Coefficient'].alias('coefficient')
)

result.show()
+----------+-----------+-----------+
|      Date|Temperature|coefficient|
+----------+-----------+-----------+
|15/02/2020|       14.6|0.997435894|
|12/02/2020|       12.4|0.997307614|
|14/02/2020|       11.0|0.997825593|
|13/02/2020|       12.5|0.996848773|
|11/02/2020|       12.2|0.997307614|
|10/02/2020|       15.3|0.998311153|
|09/02/2020|       14.1|0.996726301|
+----------+-----------+-----------+
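If you'd rather not rely on the struct-max trick, a window function over the same join is an equivalent alternative. This is just a sketch under the same DataFrame and column names as above; note that row_number requires a sort within each partition, whereas the struct-max is a plain aggregation:

from pyspark.sql import Window
import pyspark.sql.functions as F

# For each actuals row, rank the candidate metrics from closest (largest) down
w = Window.partitionBy('Date', 'Temperature').orderBy(F.col('Metric').desc())

result = (
    tblActuals
    .join(tblCoefficients, tblActuals['Temperature'] >= tblCoefficients['Metric'])
    .withColumn('rn', F.row_number().over(w))
    .filter(F.col('rn') == 1)
    .select('Date', 'Temperature', F.col('Coefficient').alias('coefficient'))
)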
