I think the TfidfVectorizer function is not computing the IDF factor correctly. For example, reproducing the code for tf-idf feature weighting with sklearn.feature_extraction.text.TfidfVectorizer:
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["This is very strange",
          "This is very nice"]

vectorizer = TfidfVectorizer(
    use_idf=True,        # use idf as a weight, i.e. compute tf*idf
    norm=None,           # no normalization of the vectors
    smooth_idf=False,    # when True, adds 1 to N and to ni => idf = ln((N+1)/(ni+1))
    sublinear_tf=False,  # when True, tf = 1 + ln(tf)
    binary=False,
    min_df=1, max_df=1.0, max_features=None,
    strip_accents='unicode',  # strip accents
    ngram_range=(1, 1), preprocessor=None, stop_words=None, tokenizer=None, vocabulary=None
)
X = vectorizer.fit_transform(corpus)
idf = vectorizer.idf_
print(dict(zip(vectorizer.get_feature_names(), idf)))
The output is:
{u'is': 1.0,
u'nice': 1.6931471805599454,
u'strange': 1.6931471805599454,
u'this': 1.0,
u'very': 1.0}
But it should be:
{u'is': 0.0,
u'nice': 0.6931471805599454,
u'strange': 0.6931471805599454,
u'this': 0.0,
u'very': 0.0}
Shouldn't it? What am I doing wrong?
According to http://www.tfidf.com/, the IDF is computed as:
IDF(t) = log_e(Total number of documents / Number of documents with term t in it)
So, since the terms "this", "is" and "very" appear in both sentences, IDF = log_e(2/2) = 0.
The terms "strange" and "nice" appear in only one of the two documents, so log_e(2/1) = 0.693147.
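To double-check that arithmetic, here is a minimal sketch (plain Python, no sklearn; the variable names are mine) that computes the textbook idf for this two-document corpus:

import math

corpus = ["This is very strange", "This is very nice"]
docs = [set(doc.lower().split()) for doc in corpus]
n_docs = len(docs)

# textbook idf: log_e(total documents / documents containing the term)
for term in sorted(set().union(*docs)):
    df = sum(term in doc for doc in docs)
    print(term, math.log(n_docs / float(df)))

This prints 0.0 for "is", "this" and "very", and roughly 0.693147 for "nice" and "strange", matching the values expected above.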
There are two things in the sklearn implementation that you probably didn't account for: TfidfTransformer has smooth_idf=True as its default parameter, and 1 is always added to the idf weight.
So, since your code passes smooth_idf=False, what it is actually computing is:

idf = ln(samples / documents) + 1
This is in the source code:
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/feature_extraction/text.py#L987-L992
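As a quick sanity check (a sketch; the document frequencies are read off the two-sentence corpus above), that formula reproduces exactly the values the vectorizer printed:

import math

n_samples = 2  # total documents in the corpus
for term, df in [("this", 2), ("is", 2), ("very", 2), ("strange", 1), ("nice", 1)]:
    # ln(samples / documents) + 1, with smooth_idf=False
    print(term, math.log(n_samples / float(df)) + 1.0)

"this", "is" and "very" come out as ln(2/2) + 1 = 1.0, while "strange" and "nice" come out as ln(2/1) + 1 ≈ 1.693147 — exactly the dictionary in the question.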
Edit: you can subclass the standard TfidfTransformer class (which is where the idf vector is actually computed; the fit method below is copied from it) like this:
import scipy.sparse as sp
import numpy as np
from sklearn.feature_extraction.text import (TfidfTransformer,
                                             _document_frequency)

class PriscillasTfidfTransformer(TfidfTransformer):

    def fit(self, X, y=None):
        """Learn the idf vector (global term weights)

        Parameters
        ----------
        X : sparse matrix, [n_samples, n_features]
            a matrix of term/token counts
        """
        if not sp.issparse(X):
            X = sp.csc_matrix(X)
        if self.use_idf:
            n_samples, n_features = X.shape
            df = _document_frequency(X)

            # perform idf smoothing if required
            df += int(self.smooth_idf)
            n_samples += int(self.smooth_idf)

            # log+1 instead of log makes sure terms with zero idf don't get
            # suppressed entirely.
            ####### the "+ 1.0" is commented out ##################
            idf = np.log(float(n_samples) / df)  # + 1.0
            #######################################################

            self._idf_diag = sp.spdiags(idf, diags=0,
                                        m=n_features, n=n_features)

        return self
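For completeness, a usage sketch under the assumptions above (an older sklearn where TfidfTransformer stores its idf diagonal in _idf_diag; the corpus and variable names are mine). The transformer still needs a CountVectorizer in front of it:

from sklearn.feature_extraction.text import CountVectorizer

corpus = ["This is very strange", "This is very nice"]
counts = CountVectorizer(strip_accents='unicode').fit_transform(corpus)

transformer = PriscillasTfidfTransformer(smooth_idf=False, norm=None)
transformer.fit(counts)
# with the "+ 1.0" removed this is the plain ln(N / df):
# 0.0 for "is"/"this"/"very", about 0.693147 for "nice"/"strange"
print(transformer._idf_diag.diagonal())

Note that newer sklearn releases have refactored these internals, so this mirrors the source the question links to rather than the current implementation.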
The actual formula they use to compute the idf (when smooth_idf is True) is:

idf = ln((1 + samples) / (1 + documents)) + 1

It's in the source code, but I think the web documentation is a bit ambiguous about it:
https://github.com/scikit-learn/scikit-learn/blob/14031f6/sklearn/feature_extraction/text.py#L966-L969
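Plugging the question's corpus into that smoothed formula confirms what the defaults would have produced (a quick check with plain Python):

import math

n_samples = 2  # documents in the corpus
for term, df in [("this", 2), ("strange", 1)]:
    # ln((1 + samples) / (1 + documents)) + 1
    print(term, math.log((1 + n_samples) / (1.0 + df)) + 1.0)

With smoothing, "this" gets ln(3/3) + 1 = 1.0 and "strange" gets ln(3/2) + 1 ≈ 1.405465.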