Sklearn CountVectorizer: get the most frequently occurring features per class



I'm using CountVectorizer to convert an array of observations into a document × feature matrix, where each document belongs to one of ~50 distinct classes. For each class, I want to see the most frequently occurring features.

cv = CountVectorizer(binary=True, tokenizer=lambda x: x.split())
# document X feature sparse matrix
vectored_sites = cv.fit_transform([' '.join(generator_features(site)) for site in sites])
# list of classes (`cls` rather than `class`, which is a reserved word)
document_classes = [site.cls for site in sites]
# how to select rows from vectored_sites for each class?
class_i_document_features = ??
# compute frequency of each column in class_i_document_features
feature_counts = class_i_document_features.sum(axis=0)
feature_frequencies = feature_counts / class_i_document_features.shape[0]
# print something like {feature1: frequency1, feature2: frequency2, ...}

I'm having trouble both filtering down to a single class and formatting the frequencies into a clear result.

I would group them like this:

from collections import defaultdict
# collect the rows of vectored_sites belonging to each class
class_i_document_features = defaultdict(list)
for site, vector in zip(sites, vectored_sites):
    class_i_document_features[site.cls].append(vector)

Is that what you're trying to do?
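To turn each per-class list of sparse row vectors into per-feature counts, the rows can be stacked back into a single matrix and summed column-wise. A minimal sketch of that idea; the `docs` and `labels` data here are made up for illustration:

```python
from collections import defaultdict

import scipy.sparse as sp
from sklearn.feature_extraction.text import CountVectorizer

docs = ["a b a", "b c", "a c c"]   # illustrative documents
labels = ["x", "y", "x"]           # illustrative class labels

cv = CountVectorizer(binary=True, tokenizer=lambda s: s.split())
X = cv.fit_transform(docs)

# group the sparse rows by class, as in the answer above
by_class = defaultdict(list)
for label, row in zip(labels, X):
    by_class[label].append(row)

# stack each class's rows and sum columns to get per-feature counts
for label, rows in sorted(by_class.items()):
    counts = sp.vstack(rows).sum(axis=0)   # 1 x n_features matrix
    print(label, counts)
```

The `vstack` step is what the grouping answer leaves implicit: a Python list of 1 × n sparse rows has no `.sum(axis=0)` of its own.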

It turns out I was hitting the fact that scipy sparse matrices did not support boolean mask indexing as of version 0.11.0 (see: slicing a scipy sparse matrix with a boolean mask). Given that, I switched to integer-based indexing, and posted the solution below.

import numpy as np

# this would be in a for loop over class names
class_i = 'someclass'
class_i_indexes = [i for i in range(len(sites)) if sites[i].cls == class_i]
for word, total in sorted(zip(cv.get_feature_names(),
                              np.asarray(vectored_sites[class_i_indexes].sum(axis=0)).ravel()),
                          key=lambda x: -x[1])[:10]:
    print('%s: %d %f' % (word, total, float(total) / len(class_i_indexes)))
