IndexError: index out of bounds using NMF in sklearn



I am trying to create a topic model from a corpus of data. The code correctly uses NMF to generate the requested number of topics from the parsed data, but it breaks when the corpus length = 20, as shown below:

20
[u'bell', u'closed', u'day', u'drinks', u'enjoy', u'food', u'good', u'great', u'll', u'new', u'nice', u'original', u'people', u'phoenix', u'place', u'rd', u'reopened', u'terrific', u'try', u'weekly']
Traceback (most recent call last):
  File "sklearnTfidf.py", line 238, in <module>
    trainTest()
  File "sklearnTfidf.py", line 185, in trainTest
    posDic += buildDictionary(pos_reviews)
  File "sklearnTfidf.py", line 143, in buildDictionary
    sortedDict = buildTFIDF(review)
  File "sklearnTfidf.py", line 110, in buildTFIDF
    nmf = NMF(n_components=no_topics, random_state=1, init='nndsvd').fit(tfidf)
  File "/opt/anaconda/lib/python2.7/site-packages/sklearn/decomposition/nmf.py", line 551, in fit
    self.fit_transform(X, **params)
  File "/opt/anaconda/lib/python2.7/site-packages/sklearn/decomposition/nmf.py", line 485, in fit_transform
    W, H = self._init(X)
  File "/opt/anaconda/lib/python2.7/site-packages/sklearn/decomposition/nmf.py", line 395, in _init
    W, H = _initialize_nmf(X, self.n_components_)
  File "/opt/anaconda/lib/python2.7/site-packages/sklearn/decomposition/nmf.py", line 116, in _initialize_nmf
    x, y = U[:, j], V[j, :]
IndexError: index 1 is out of bounds for axis 1 with size 1

I am still getting familiar with the sklearn toolset, so I accept that this may well be a simple oversight on my part, since most of this code was glued together from various examples.

# Create a dictionary of words from review
def buildDictionary(review) :
    buildTFIDF(review)
    #[unrelated code]

# Extract topic models from corpus
def buildTFIDF(corpus) :
    no_topics = 5
    no_features = 100
    no_top_words = 10
    tfidf_vectorizer = TfidfVectorizer(min_df=1, max_df=1.0, max_features=no_features, stop_words='english')
    tfidf = tfidf_vectorizer.fit_transform(corpus)
    tfidf_feature_names = tfidf_vectorizer.get_feature_names()
    print tfidf.getnnz()        # sanity checking
    print tfidf_feature_names   # sanity checking
    nmf = NMF(n_components=no_topics, random_state=1, init='nndsvd').fit(tfidf)
    display_topics(nmf, tfidf_feature_names, no_top_words)
    print ''

# Prints no_top_words for each feature
def display_topics(model, feature_names, no_top_words):
    for topic_idx, topic in enumerate(model.components_):
        print "Topic %d:" %(topic_idx)
        print " ".join([feature_names[i]
                for i in topic.argsort()[:-no_top_words - 1:-1]])

What exactly is causing this index error, and how can I correct it?

I suggest you take a look at this answer: https://stackoverflow.com/a/43336816/8187340. The problem is the value of the n_components parameter of decomposition.NMF. This parameter must be less than or equal to the number of documents in the corpus.
Example: if dtm.shape returns (6, 6030), then no_topics <= 6.
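
As a minimal sketch of that guard applied to your buildTFIDF (same parameter names as in the question; the clamping line is my addition, not part of your original code), you can cap n_components at the number of documents before fitting NMF:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

def buildTFIDF(corpus) :
    no_topics = 5
    no_features = 100
    tfidf_vectorizer = TfidfVectorizer(min_df=1, max_df=1.0, max_features=no_features, stop_words='english')
    tfidf = tfidf_vectorizer.fit_transform(corpus)
    # tfidf.shape[0] is the number of documents in the corpus;
    # NMF's n_components cannot exceed it, so take the smaller value.
    no_topics = min(no_topics, tfidf.shape[0])
    nmf = NMF(n_components=no_topics, random_state=1, init='nndsvd').fit(tfidf)
    return nmf

If, as the "size 1" in your traceback suggests, buildTFIDF is being called with a single review at a time, tfidf.shape[0] will be 1 and only one topic can be extracted from it; in that case you may want to pass the whole list of reviews as the corpus instead.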
