PySpark: finding the closest text


I am new to PySpark. I want to compare the text from two different DataFrames (containing news information) to make recommendations.

I can do this easily in plain Python:

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

def get_recommendations(title, cosine_sim, indices):
    idx = indices[title]
    # Get the pairwise similarity scores
    sim_scores = list(enumerate(cosine_sim[idx]))
    print(sim_scores)
    # Sort the items based on the similarity scores
    sim_scores = sorted(sim_scores, key=lambda x: x[1], reverse=True)
    # Get the scores of the 10 most similar items
    sim_scores = sim_scores[1:11]
    talk_indices = [i[0] for i in sim_scores]
    # Return the top 10 most similar news items
    return ted['News Data'].iloc[talk_indices]

indices = pd.Series(det.index, index=det['Unnamed: 0']).drop_duplicates()
transcripts = det['News Data']
transcripts2 = ted['News Data']
tfidf = TfidfVectorizer(stop_words='english')
tfidf_matrix = tfidf.fit_transform(transcripts)
tfidf_matrixx = tfidf.transform(transcripts2)
cosine_sim = linear_kernel(tfidf_matrix, tfidf_matrixx)
print(get_recommendations(0, cosine_sim, indices))
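(Note that linear_kernel gives cosine similarity directly here, because TfidfVectorizer L2-normalizes its rows by default, so the plain dot product of two rows already equals their cosine.)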

When I switch to PySpark, I get very different results when computing the TF-IDF. I understand that I need to compute cosine similarity over these vectors to make the "news" recommendations.

I use the following in PySpark to compute the TF-IDF:

from pyspark.ml.feature import Tokenizer, CountVectorizer, IDF

df1 = sqlContext.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load('bbcclear.csv')
df2 = sqlContext.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load('yenisafakcategorypredict.csv')

# tokenize
tokenizer = Tokenizer().setInputCol("News Data").setOutputCol("word")
wordsData = tokenizer.transform(df2)
wordsData2 = tokenizer.transform(df1)

# vectorize (the vocabulary is fitted on df2's tokens and reused for df1)
vectorizer = CountVectorizer(inputCol='word', outputCol='vectorizer').fit(wordsData)
wordsData = vectorizer.transform(wordsData)
wordsData2 = vectorizer.transform(wordsData2)

# calculate IDF scores (a separate IDF model is fitted for each dataset)
idf = IDF(inputCol="vectorizer", outputCol="tfidf_features")
idf_model = idf.fit(wordsData)
wordsData = idf_model.transform(wordsData)
idf_model = idf.fit(wordsData2)
wordsData2 = idf_model.transform(wordsData2)
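Note that the last block fits a separate IDF model on each dataset, so the two sets of TF-IDF vectors are weighted inconsistently; this alone can explain results that differ from the sklearn version above, where the vectorizer is fitted once and reused. A minimal sketch of the consistent variant, reusing the column names from the snippet above:

# fit the IDF model once (here on df2's tokens) and reuse the same fitted
# model for both DataFrames, mirroring sklearn's fit_transform/transform
idf = IDF(inputCol="vectorizer", outputCol="tfidf_features")
idf_model = idf.fit(wordsData)
wordsData = idf_model.transform(wordsData)
wordsData2 = idf_model.transform(wordsData2)  # same model, no refit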

How can I compute cosine similarity for the recommendations using the TF-IDF features obtained above?

Below is an example of using TF-IDF in Spark from one of my PoC tasks. I strongly recommend using an advanced NLP framework such as BERT rather than TF-IDF if you want meaningful similarities.

Sample dataset:

df = spark.createDataFrame(
    [
        ["cricket sport team player"],
        ["global politics"],
        ["football sport player team"],
    ],
    ["news"]
)
+--------------------------+
|news                      |
+--------------------------+
|cricket sport team player |
|global politics           |
|football sport player team|
+--------------------------+

TF-IDF vectorization and cosine similarity computation:

from pyspark.ml.feature import RegexTokenizer, CountVectorizer, IDF
from pyspark.ml import Pipeline
from pyspark.sql import functions as F
from pyspark.sql.types import FloatType

regex_tokenizer = RegexTokenizer(gaps=False, pattern=r"\w+", inputCol="news", outputCol="tokens")
count_vectorizer = CountVectorizer(inputCol="tokens", outputCol="tf")
idf = IDF(inputCol="tf", outputCol="idf")
tf_idf_pipeline = Pipeline(stages=[regex_tokenizer, count_vectorizer, idf])
df = tf_idf_pipeline.fit(df).transform(df).drop("news", "tokens", "tf")

# pair every document with every other document (including itself)
df = df.crossJoin(df.withColumnRenamed("idf", "idf2"))

@F.udf(returnType=FloatType())
def cos_sim(u, v):
    # cosine similarity between two sparse TF-IDF vectors
    return float(u.dot(v) / (u.norm(2) * v.norm(2)))

df.withColumn("cos_sim", cos_sim(F.col("idf"), F.col("idf2"))).show()
+--------------------+--------------------+----------+
|                 idf|                idf2|   cos_sim|
+--------------------+--------------------+----------+
|(7,[0,1,2,4],[0.2...|(7,[0,1,2,4],[0.2...|       1.0|
|(7,[0,1,2,4],[0.2...|(7,[5,6],[0.69314...|       0.0|
|(7,[0,1,2,4],[0.2...|(7,[0,1,2,3],[0.2...|0.34070355|
|(7,[5,6],[0.69314...|(7,[0,1,2,4],[0.2...|       0.0|
|(7,[5,6],[0.69314...|(7,[5,6],[0.69314...|       1.0|
|(7,[5,6],[0.69314...|(7,[0,1,2,3],[0.2...|       0.0|
|(7,[0,1,2,3],[0.2...|(7,[0,1,2,4],[0.2...|0.34070355|
|(7,[0,1,2,3],[0.2...|(7,[5,6],[0.69314...|       0.0|
|(7,[0,1,2,3],[0.2...|(7,[0,1,2,3],[0.2...|       1.0|
+--------------------+--------------------+----------+
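To adapt this to the question's two-DataFrame setup, fit the pipeline once on one corpus, transform both DataFrames with the same fitted model, cross join them, and keep the highest-scoring matches. A minimal sketch under those assumptions, reusing tf_idf_pipeline and cos_sim from above and assuming df1 and df2 each have a "news" text column:

from pyspark.sql import Window

# fit TF-IDF once, then apply the same fitted model to both corpora
model = tf_idf_pipeline.fit(df1)
feat1 = model.transform(df1).select("news", "idf")
feat2 = model.transform(df2).select(F.col("news").alias("news2"),
                                    F.col("idf").alias("idf2"))

# score all pairs, then keep the 10 closest df2 items per df1 item
pairs = feat1.crossJoin(feat2).withColumn("cos_sim", cos_sim("idf", "idf2"))
w = Window.partitionBy("news").orderBy(F.desc("cos_sim"))
pairs.withColumn("rank", F.row_number().over(w)).filter("rank <= 10") \
     .select("news", "news2", "cos_sim").show(truncate=False)

Keep in mind that a cross join is quadratic in the number of documents; for large corpora, consider approximate nearest-neighbour approaches such as MinHashLSH or BucketedRandomProjectionLSH from pyspark.ml.feature instead.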
