Python LSI with gensim not working



I am trying to classify emails based on their subject lines, and I need the LSI representation in order to train the classifier. I compute TF-IDF and then try to build the LSI model on top of it. However, it does no processing at all and writes nothing to file. My code is as follows:

from itertools import islice
from gensim import corpora, models

#reading the list of subjects for features
f500 = open('subject500.csv','wb')
with open('subject1000.csv') as myfile:
    head = list(islice(myfile, 500))  #only the first 500 subjects for training
for h in head:
    f500.write(h)
f500.close()

texts = (line.lower().split() for line in head)  #tokenized subject lines
dictionary = corpora.Dictionary(texts)  #all the words used to create the dictionary
dictionary.compactify()
print dictionary  #checkpoint - 1418 unique tokens for the 500 subjects (2215 for all 1000)

#corpus streaming
class MyCorpus(object):
    def __iter__(self):
        for line in open('subject500.csv','rb'):  #one document (subject line) per line; use 'subject1000.csv' for the full set
            #every line converted to bag-of-words format = list of (token_id, token_count) 2-tuples
            yield dictionary.doc2bow(line.lower().split())

corpus = MyCorpus()  #object created
print 'corpus created'
for vector in corpus:
    print vector

tfidf = models.TfidfModel(corpus)
corpus_tfidf = tfidf[corpus]  #wrap the corpus so documents stream out as normalized tf-idf vectors
corpora.MmCorpus.serialize('subject500-tfidf', corpus_tfidf)  #store to disk for later use
print 'TFIDF complete!'  #check - up to here it works

lsi300 = models.LsiModel(corpus_tfidf, num_topics=300, id2word=dictionary)  #train LSI on the tf-idf corpus
corpus_lsi300 = lsi300[corpus_tfidf]
print corpus_lsi300  #checkpoint
lsi300.print_topics(10, 5)  #checks
corpora.BleiCorpus.serialize('subjects500-lsi-300', corpus_lsi300)

I get output up to "TFIDF complete!", but the program returns nothing for the LSI part. I am running it over the 500 subject lines described above. Any ideas about what might be going wrong would be much appreciated. Thanks.

The logged output is as follows:

INFO:gensim.corpora.dictionary:adding document #0 to Dictionary(0 unique tokens)
INFO:gensim.corpora.dictionary:built Dictionary(1418 unique tokens) from 500 documents (total 3109 corpus positions)
DEBUG:gensim.corpora.dictionary:rebuilding dictionary, shrinking gaps
INFO:gensim.models.tfidfmodel:collecting document frequencies
INFO:gensim.models.tfidfmodel:PROGRESS: processing document #0
INFO:gensim.models.tfidfmodel:calculating IDF weights for 500 documents and 1418 features (3081 matrix non-zeros)
INFO:gensim.corpora.mmcorpus:storing corpus in Matrix Market format to subject500-tfidf
INFO:gensim.matutils:saving sparse matrix to subject500-tfidf
INFO:gensim.matutils:PROGRESS: saving document #0
INFO:gensim.matutils:saved 500x1418 matrix, density=0.435% (3081/709000)
DEBUG:gensim.matutils:closing subject500-tfidf
DEBUG:gensim.matutils:closing subject500-tfidf
INFO:gensim.corpora.indexedcorpus:saving MmCorpus index to subject500-tfidf.index
INFO:gensim.models.lsimodel:using serial LSI version on this node
INFO:gensim.models.lsimodel:updating model with new documents
INFO:gensim.models.lsimodel:preparing a new chunk of documents
DEBUG:gensim.models.lsimodel:converting corpus to csc format
INFO:gensim.models.lsimodel:using 100 extra samples and 2 power iterations
INFO:gensim.models.lsimodel:1st phase: constructing (1418, 400) action matrix
INFO:gensim.models.lsimodel:orthonormalizing (1418, 400) action matrix
DEBUG:gensim.matutils:computing QR of (1418, 400) dense matrix
DEBUG:gensim.models.lsimodel:running 2 power iterations
DEBUG:gensim.matutils:computing QR of (1418, 400) dense matrix
DEBUG:gensim.matutils:computing QR of (1418, 400) dense matrix
INFO:gensim.models.lsimodel:2nd phase: running dense svd on (400, 500) matrix

Add logging with
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

and paste the log here, or link to a gist.

I hit the same problem while working through the gensim tutorial. I tried converting a sample corpus of 2000 documents to LSI. Python crashed with the Windows error message "python has stopped working" at the "running dense svd" step, although it did work with a smaller corpus. The problem seems to be the SciPy installation from the current win32 binaries. It went away after installing Anaconda (a Python distribution that includes NumPy and SciPy).
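As a quick sanity check (my own addition, not part of that answer), you can try to reproduce just the dense SVD step from the log on a random matrix of the same shape; if the snippet below also crashes the interpreter, the culprit is the SciPy/BLAS build rather than gensim:

#minimal sketch, assuming NumPy and SciPy are installed;
#the (400, 500) shape matches the "2nd phase: running dense svd" line in the log above
import numpy as np
import scipy.linalg

a = np.random.rand(400, 500)
u, s, vt = scipy.linalg.svd(a, full_matrices=False)
print u.shape, s.shape, vt.shape  #expect (400, 400) (400,) (400, 500)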

I ran into a similar problem earlier this week: my model was loading correctly, but printing the topics did nothing. I found it may be down to how print_topics() behaves - if you run it from the command line it silences its output, whereas if you run it in IPython, or explicitly loop over the topics and print them, you should see your results.
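A minimal sketch of that workaround, using the variable names from the question (the exact signature and return type of show_topics() vary between gensim versions - older releases return plain strings, newer ones return (topic_id, string) pairs - but printing each element works either way):

#fetch the topics explicitly instead of relying on print_topics(), which only
#writes to the logging stream and is silent when logging is not configured
for topic in lsi300.show_topics(num_topics=10, num_words=5):
    print topic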
