TF-IDF for text clustering analysis



I want to group the short texts contained in the column df['Texts'] of a DataFrame. Example sentences to analyse are:

Texts
1 Donald Trump, Donald Trump news, Trump bleach, Trump injected bleach, bleach coronavirus.
2 Thank you Janey.......laughing so much at this........you have saved my sanity in these mad times. Only bleach Trump is using is on his heed 🤣
3 His more uncharitable critics said Trump had suggested that Americans drink bleach. Trump responded that he was being sarcastic.
4 Outcry after Trump suggests injecting disinfectant as treatment.
5 Trump Suggested 'Injecting' Disinfectant to Cure Coronavirus?
6 The study also showed that bleach and isopropyl alcohol killed the virus in saliva or respiratory fluids in a matter of minutes.

Since I know that TF-IDF is useful for clustering, I have been using the following lines of code (adapted from some previous questions on this community):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
import re
import string

def preprocessing(line):
    line = line.lower()
    line = re.sub(r"[{}]".format(string.punctuation), " ", line)
    return line

tfidf_vectorizer = TfidfVectorizer(preprocessor=preprocessing)
tfidf = tfidf_vectorizer.fit_transform(all_text)
kmeans = KMeans(n_clusters=2).fit(tfidf)  # the number of clusters can be changed manually

However, since I am working with a DataFrame column, I do not know how to apply the function above. Could you help me?

def preprocessing(line):
    line = line.lower()
    line = re.sub(r"[{}]".format(string.punctuation), " ", line)
    return line

tfidf_vectorizer = TfidfVectorizer(preprocessor=preprocessing)
tfidf = tfidf_vectorizer.fit_transform(df['Texts'])
kmeans = KMeans(n_clusters=2).fit(tfidf)
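
If you want to see which cluster each text ends up in, you can attach the fitted labels back to the DataFrame. A minimal sketch (the column name 'cluster' is just an illustrative choice):

# kmeans.labels_ holds one cluster id per row of df['Texts'], in the same order
df['cluster'] = kmeans.labels_
print(df[['Texts', 'cluster']])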

You just need to replace all_text with df['Texts']. It would be better to build a pipeline first, so that the vectorizer and KMeans are applied together.
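
As a sketch of that pipeline idea (the name text_clusterer is only illustrative), something along these lines should work:

from sklearn.pipeline import Pipeline

text_clusterer = Pipeline([
    ('tfidf', TfidfVectorizer(preprocessor=preprocessing)),
    ('kmeans', KMeans(n_clusters=2)),
])
text_clusterer.fit(df['Texts'])
# cluster assignments for the fitted texts
labels = text_clusterer.named_steps['kmeans'].labels_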

For more precise results, doing more preprocessing of the text is never a bad idea. Also, I don't think lowercasing the text is necessarily a good idea, because you naturally remove a useful writing-style feature (if, say, you want to identify authors or assign them to a group); but if the goal is to capture the sentiment of the sentences, then yes, it is better to lowercase.
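
For example, if you decide to keep the original casing, you could simply drop the .lower() call from the preprocessor (note that when a custom preprocessor is passed, TfidfVectorizer's own lowercase option is not applied). A sketch, where preprocessing_keep_case is just an illustrative name:

def preprocessing_keep_case(line):
    # strip punctuation only; keep the original casing as a style feature
    return re.sub(r"[{}]".format(string.punctuation), " ", line)

tfidf_vectorizer = TfidfVectorizer(preprocessor=preprocessing_keep_case)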
