CountVectorizer - ignore words that appear fewer than x times and have fewer than y characters



I was wondering whether there is any way to make CountVectorizer() ignore words that appear fewer than x times across all documents and that have fewer than y characters. Something like the wordLengths and bounds control parameters of DocumentTermMatrix in R's tm package.

Example

This corpus:

corpus = [
    'This is the first document.',
    'This document is the second document.',
    'And this is the third one.',
    'Is this the first document?',
]

currently turns into this:

>>> vectorizer = CountVectorizer()
>>> X = vectorizer.fit_transform(corpus)
>>> print(vectorizer.get_feature_names())
['and', 'document', 'first', 'is', 'one', 'second', 'the', 'third', 'this']
>>> print(X.toarray())
[[0 1 1 1 0 0 1 0 1]
[0 2 0 1 0 1 1 0 1]
[1 0 0 1 1 0 1 1 1]
[0 1 1 1 0 0 1 0 1]]

With x and y both set to 2, I would like this instead:

>>> vectorizer = CountVectorizer()
>>> X = vectorizer.fit_transform(corpus)
>>> print(vectorizer.get_feature_names())
['document', 'first', 'the', 'this']
>>> print(X.toarray())
[[1 1 1 1]
[2 0 1 1]
[0 0 1 1]
[1 1 1 1]]

You may want to:

  • Set min_df=2; this handles x (note that an integer min_df is a document-frequency threshold: a term must appear in at least 2 documents to be kept).
  • Define token_pattern=r"(?u)\b[a-zA-Z]{3,}\b"; this handles y (you could try token_pattern=r"(?u)\b[a-zA-Z0-9_]{3,}\b" to also allow digits and underscores in the token definition).

Demo:

from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "This is the first document.",
    "This document is the second document.",
    "And this is the third one.",
    "Is this the first document?",
]

# min_df=2 keeps only terms that appear in at least 2 documents (handles x)
# token_pattern keeps only tokens made of 3 or more letters (handles y)
vectorizer = CountVectorizer(min_df=2, token_pattern=r"(?u)\b[a-zA-Z]{3,}\b")
X = vectorizer.fit_transform(corpus)
print(X.toarray())

[[1 1 1 1]
[2 0 1 1]
[0 0 1 1]
[1 1 1 1]]
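
To verify which terms survive the filtering, you can also print the learned vocabulary (a small addition to the demo above; on scikit-learn 1.0 and later the method is get_feature_names_out(), while older versions use get_feature_names() as in the question):

print(vectorizer.get_feature_names_out())
# ['document' 'first' 'the' 'this']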
