I have a large dataset of news articles loaded into a PySpark DataFrame. I'm interested in filtering that DataFrame down to the set of articles whose body text contains certain words of interest. For now the keyword list is small, but I would like to store it in a DataFrame, since the list may grow in the future. Consider this small example:
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
article_data = [{'source': 'a', 'body': 'Seattle is in Washington.'},
                {'source': 'b', 'body': 'Los Angeles is in California'},
                {'source': 'a', 'body': 'Banana is a fruit'}]
article_df = spark.createDataFrame(article_data)
keyword_data = [{'city': 'Seattle', 'state': 'Washington'},
                {'city': 'Los Angeles', 'state': 'California'}]
keyword_df = spark.createDataFrame(keyword_data)
This gives us the following DataFrames:
+--------------------+------+
| body|source|
+--------------------+------+
|Seattle is in Was...| a|
|Los Angeles is in...| b|
| Banana is a fruit| a|
+--------------------+------+
+-----------+----------+
| city| state|
+-----------+----------+
| Seattle|Washington|
|Los Angeles|California|
+-----------+----------+
As a first pass, I would like to filter article_df so that it contains only the articles whose body string contains any of the strings in keyword_df['city']. I would also like to filter it down to the articles whose body contains both a string from keyword_df['city'] and the corresponding entry (same row) of keyword_df['state']. How can I accomplish this?
I have managed to do this with a manually defined list of keywords:
from pyspark.sql.functions import udf
from pyspark.sql.types import BooleanType
def city_filter(x):
    # Case-insensitive check for any city name in the body text
    cities = ['Seattle', 'Los Angeles']
    x = x.lower()
    return any(s.lower() in x for s in cities)
filterUDF = udf(city_filter, BooleanType())
Then article_df.filter(filterUDF(article_df.body)).show() gives the desired result:
+--------------------+------+
| body|source|
+--------------------+------+
|Seattle is in Was...| a|
|Los Angeles is in...| b|
+--------------------+------+
How can I implement this filter without having to manually define the list of keywords (or tuples of keyword pairs)? Should I be using a UDF for this?
You can implement it with a leftsemi join and a custom join expression, for example:
from pyspark.sql.functions import expr

# Keep each article whose body contains any city from keyword_df
body_contains_city = expr('body like concat("%", city, "%")')
article_df.join(keyword_df, body_contains_city, 'leftsemi').show()
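A leftsemi join keeps every row of article_df that matches at least one row of keyword_df, without duplicating articles that match several keywords and without adding any columns from keyword_df. The second part of the question, requiring both the city and the state from the same keyword_df row, follows from the same approach by extending the join expression; a minimal sketch, assuming the DataFrames defined above:

from pyspark.sql.functions import expr

# Keep articles whose body contains both a city and the state from the same row
body_contains_pair = expr(
    'body like concat("%", city, "%") and body like concat("%", state, "%")'
)
article_df.join(keyword_df, body_contains_pair, 'leftsemi').show()

Note that like is case-sensitive, whereas the UDF above lowercases both sides; to match that behavior you can lower both sides inside the expression, e.g. expr('lower(body) like concat("%", lower(city), "%")').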