PySpark string matching - select the first match

I have two tables.

comment_df

| Date | Comment | 
|:---- |:------:| 
| 20/01/2020 | Transfer from Euro Account to HSBC account done on Monday but AMEX payment was on Tue. |
| 20/01/2020 | Brian initiated a Transfer from Euro Account to Natwest last Tuesday. |
| 21/01/2020 | AMEX payment to Natwest was delayed for second time in a row. |
| 21/01/2020 | AMEX receipts from Euro Account delayed. |

code_df

| Tag | Comment | 
|:---- |:------:| 
| EURO | Euro Account to HSBC |
| Natwest | Euro Account to Natwest |
| AMEX | AMEX payment |

Desired output

| Date | Comment | Tag |
|:---- |:------:| ----:|
| 20/01/2020 | Transfer from Euro Account to HSBC account done on Monday but AMEX payment was on Tue.| EURO |
| 20/01/2020 | Brian initiated a Transfer from Euro Account to Natwest last Tuesday. | Natwest |
| 21/01/2020 | AMEX payment to Natwest was delayed for second time in a row. | AMEX | 
| 21/01/2020 | AMEX receipts from Euro Account delayed. | |

So the first comment matches two tags (Euro Account to HSBC and AMEX payment), but I want the result to keep only the first tag encountered in the comment rather than duplicate the row. Below is an earlier suggestion.

code_df = code_df.withColumnRenamed('Comment', 'Commentcode')
result = comment_df.join(code_df, comment_df.Comment.contains(code_df.Commentcode), 'left').drop('Commentcode')
result.show(truncate=False)
+----------+--------------------------------------------------------------------------------------+-------+
|Date      |Comment                                                                               |Tag    |
+----------+--------------------------------------------------------------------------------------+-------+
|20/01/2020|Transfer from Euro Account to HSBC account done on Monday but AMEX payment was on Tue.|EURO   |
|20/01/2020|Brian initiated a Transfer from Euro Account to Natwest last Tuesday.                 |Natwest|
|21/01/2020|AMEX payment to Natwest was delayed for second time in a row.                         |AMEX   |
|21/01/2020|AMEX receipts from Euro Account delayed.                                              |null   |
+----------+--------------------------------------------------------------------------------------+-------+
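
For anyone who wants to run the snippets in this question end to end, the two sample DataFrames can be built from the tables above roughly as follows. This is only a sketch; it assumes an existing SparkSession named spark.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Sample data copied from the comment_df table above
comment_df = spark.createDataFrame(
    [
        ('20/01/2020', 'Transfer from Euro Account to HSBC account done on Monday but AMEX payment was on Tue.'),
        ('20/01/2020', 'Brian initiated a Transfer from Euro Account to Natwest last Tuesday.'),
        ('21/01/2020', 'AMEX payment to Natwest was delayed for second time in a row.'),
        ('21/01/2020', 'AMEX receipts from Euro Account delayed.'),
    ],
    ['Date', 'Comment'],
)

# Sample data copied from the code_df table above
code_df = spark.createDataFrame(
    [
        ('EURO', 'Euro Account to HSBC'),
        ('Natwest', 'Euro Account to Natwest'),
        ('AMEX', 'AMEX payment'),
    ],
    ['Tag', 'Comment'],
)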

You can order the joined rows by the position where the matched string occurs in the comment, and keep only the first match by filtering on the resulting row number:

from pyspark.sql import functions as F, Window

result = comment_df.join(
    code_df,
    comment_df.Comment.contains(code_df.Commentcode),
    'left'
).withColumn(
    'rn',
    # number the candidate tags per comment, earliest match in the text first
    F.row_number().over(
        Window.partitionBy('Date', 'Comment')
        .orderBy(F.expr('instr(Comment, Commentcode)'))
    )
).filter('rn = 1').drop('Commentcode', 'rn')  # drop helper columns to match the desired output
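
The same "first match by position" idea can also be written without a window function, by taking the minimum of a (position, Tag) struct per comment. This is only an alternative sketch, not part of the suggestion above; it reuses comment_df and the renamed code_df, and relies on Spark ordering struct values field by field.

from pyspark.sql import functions as F

joined = comment_df.join(
    code_df,
    comment_df.Comment.contains(code_df.Commentcode),
    'left'
)

# For each (Date, Comment), keep the Tag whose matched substring occurs earliest.
# min over a struct compares the first field (the match position) before the Tag;
# comments with no match keep a single row with a null Tag, as in the output above.
result_alt = (
    joined
    .groupBy('Date', 'Comment')
    .agg(
        F.min(
            F.struct(
                F.expr('instr(Comment, Commentcode)').alias('pos'),
                F.col('Tag'),
            )
        ).alias('best')
    )
    .select('Date', 'Comment', F.col('best.Tag').alias('Tag'))
)

result_alt.show(truncate=False)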
