I have a CountVectorizer:
word_vectorizer = CountVectorizer(stop_words=None, ngram_range=(2,2), analyzer='word')
Running the vectorizer with:
X = word_vectorizer.fit_transform(group['cleanComments'])
raises this error:
Traceback (most recent call last):
File "<ipython-input-63-d261e44b8cce>", line 1, in <module>
runfile('C:/Users/taca/Documents/Work/Python/Text Analytics/owccomments.py', wdir='C:/Users/taca/Documents/Work/Python/Text Analytics')
File "C:\Users\taca\AppData\Local\Continuum\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 866, in runfile
execfile(filename, namespace)
File "C:\Users\taca\AppData\Local\Continuum\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/taca/Documents/Work/Python/Text Analytics/owccomments.py", line 38, in <module>
X = word_vectorizer.fit_transform(group['cleanComments'])
File "C:\Users\taca\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\feature_extraction\text.py", line 839, in fit_transform
self.fixed_vocabulary_)
File "C:\Users\taca\AppData\Local\Continuum\Anaconda3\lib\site-packages\sklearn\feature_extraction\text.py", line 781, in _count_vocab
raise ValueError("empty vocabulary; perhaps the documents only"
ValueError: empty vocabulary; perhaps the documents only contain stop words
This error occurs when the document the n-grams are being pulled from is the string "duplicate q".
Why isn't CountVectorizer picking up q (or any single letter) as a valid word? Is there a comprehensive list anywhere of the conditions that cause CountVectorizer to raise this error?
EDIT: I dug into the error itself a bit more, and it looks like it has to do with the vocabulary. I assume the standard vocabulary doesn't accept single letters as words, but I'm not sure how to work around that.
This error is raised by the _count_vocab() function, which is a method of the CountVectorizer class. The class takes a token_pattern argument that defines what counts as a word. The documentation for the token_pattern parameter states:

The default regexp selects tokens of 2 or more alphanumeric characters

We can see this explicitly in the default arguments to __init__:

token_pattern=r"(?u)\b\w\w+\b"
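A quick sketch of what the default pattern does to the document from the question, using Python's re module directly (the variable names here are illustrative, not from CountVectorizer):

```python
import re

doc = "duplicate q"

# Default pattern: requires at least two word characters per token.
default_tokens = re.findall(r"(?u)\b\w\w+\b", doc)
# Relaxed pattern: one or more word characters, so "q" survives.
relaxed_tokens = re.findall(r"(?u)\b\w+\b", doc)

print(default_tokens)  # ['duplicate']
print(relaxed_tokens)  # ['duplicate', 'q']
```

With only one token under the default pattern, there is nothing to pair into a bigram, which is why the (2,2) n-gram range produces an empty vocabulary.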
If you want to allow single-letter words, just remove the first \w from this pattern and set token_pattern explicitly when instantiating CountVectorizer:

CountVectorizer(token_pattern=r"(?u)\b\w+\b",
                stop_words=None, ngram_range=(2,2), analyzer='word')
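Putting it together, the relaxed pattern makes the original document vectorize cleanly (a sketch; the import line and the use of vocabulary_ to inspect the result are assumptions, since the question only shows fit_transform):

```python
from sklearn.feature_extraction.text import CountVectorizer

# Relaxed token_pattern keeps single-letter tokens like "q".
vectorizer = CountVectorizer(token_pattern=r"(?u)\b\w+\b",
                             stop_words=None, ngram_range=(2, 2), analyzer='word')
X = vectorizer.fit_transform(["duplicate q"])

print(sorted(vectorizer.vocabulary_))  # ['duplicate q']
print(X.shape)  # (1, 1): one document, one bigram feature
```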