Similarity between documents: cosine distance



Updated question:

Following the solution from "perimosocordiae", I found the cosine similarity between 2 documents. I tried to use that solution to find the similarity between 2 files, but I am again getting an error in test(), which is

Traceback (most recent call last):
  File "3.py", line 103, in <module>
    main()
  File "3.py", line 99, in main
    test(tf_idf_matrix,count,nltkutil.cosine_distance)
  File "3.py", line 46, in test
    doc2 = np.asarray(tdMatrix[j-1].todense()).reshape(-1)
  File "/usr/lib/python2.7/dist-packages/scipy/sparse/csr.py", line 281, in __getitem__
    return self[key,:]                                #[i] or [1:2]
  File "/usr/lib/python2.7/dist-packages/scipy/sparse/csr.py", line 233, in __getitem__
    return self._get_row_slice(row, col)      #[i,1:2]
  File "/usr/lib/python2.7/dist-packages/scipy/sparse/csr.py", line 320, in _get_row_slice
    raise IndexError('index (%d) out of range' % i )
IndexError: index (4) out of range

I am using one file as the training set and the other file as the test set, and my goal is to have the test() function output the cosine similarity between the 2 files using TF-IDF.
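For reference, nltkutil.cosine_distance(u, v) returns 1 minus the cosine of the angle between two dense vectors, so the kind of check I expect test() to perform looks roughly like this (the vectors below are made up for illustration, not taken from my corpora):

import numpy as np
import nltk.cluster.util as nltkutil

# two made-up TF-IDF row vectors over a 4-term vocabulary
doc1 = np.array([0.0, 0.7, 0.7, 0.0])
doc2 = np.array([0.5, 0.5, 0.5, 0.5])

print nltkutil.cosine_distance(doc1, doc1)  # ~0.0, identical direction
print nltkutil.cosine_distance(doc1, doc2)  # ~0.29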

My code is as follows:

#! /usr/bin/python -tt
from __future__ import division
from operator import itemgetter
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
import nltk.cluster.util as nltkutil
import numpy as np
import re
def preprocess(fnin, fnout):
    fin = open(fnin, 'rb')
    print fin
    fout = open(fnout, 'wb')
    buf = []
    for line in fin:
        line = line.strip()
        if line.find("-- Document Separator --") > -1:
            if len(buf) > 0:
                # collapse whitespace and write out the buffered document
                body = re.sub("\s+", " ", " ".join(buf))
                fout.write("%s\n" % (body))
            rest = map(lambda x: x.strip(), line.split(": "))
            buf = []
        else:
            buf.append(line)
    fin.close()
    fout.close()
def test(tdMatrix, count, fsim):
    sims = []
    sims = np.zeros((len(tdMatrix.todense()), count))
    l = len(tdMatrix.todense())
    for i in range(0, l):
        for j in range(0, count):
            doc1 = np.asarray(tdMatrix[i].todense()).reshape(-1)
            doc2 = np.asarray(tdMatrix[j].todense()).reshape(-1)
            sims[i, j] = fsim(doc1, doc2)
        print sims

def main():
    file_set=["corpusA.txt","corpusB.txt"]
    train=[]
    test1=[]
    for file1 in file_set:
        s="x"+file1
        preprocess(file1,s)
    count_vectorizer = CountVectorizer()
    m=open("xcorpusA.txt",'r')
    for i in m:
        train.append(i.strip())
    #print doc
    #print train
    count_vectorizer.fit_transform(train)
    #print "Vocabulary:", count_vectorizer.vocabulary
    # Vocabulary: {'blue': 0, 'sun': 1, 'bright': 2, 'sky': 3}
    m1=open("xcorpusB.txt",'r')
    for i in m1:
        test1.append(i.strip())
    freq_term_matrix = count_vectorizer.transform(test1)
    #print freq_term_matrix.todense()
    tfidf = TfidfTransformer(norm="l2")
    tfidf.fit(freq_term_matrix)
    #print "IDF:", tfidf.idf_
    tf_idf_matrix = tfidf.transform(freq_term_matrix)
    print (tf_idf_matrix.toarray())
    count=0
    s=""
    for i in tf_idf_matrix.toarray():
        for j in i:
            count+=1    
        break
    #print count
    #print type(tf_idf_matrix)
    print "Results with Cosine Distance Similarity Measure"
    test(tf_idf_matrix,count,nltkutil.cosine_distance)

if __name__ == "__main__":
    main()

I am looking for suggestions from the respective gurus.

Your error is in this expression:

tdMatrix[tdMatrix[i], :]

Your tdMatrix is a 2x2 array of floats, and indexing it with itself will fail. Perhaps you meant:

doc1 = np.asarray(tdMatrix[i].todense()).reshape(-1)
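Applied inside test(), that row indexing could look like the sketch below. This is only a sketch of what I assume you want (a documents-by-documents similarity matrix): both loop bounds come from the number of rows of tdMatrix, whereas your current code lets j run up to count, the number of terms, which is what triggers the "IndexError: index (4) out of range".

import numpy as np

def test(tdMatrix, fsim):
    # number of documents = number of rows of the sparse TF-IDF matrix
    num_docs = tdMatrix.shape[0]
    sims = np.zeros((num_docs, num_docs))
    for i in range(num_docs):
        for j in range(num_docs):
            # densify one row at a time and flatten it to a 1-D vector
            doc1 = np.asarray(tdMatrix[i].todense()).reshape(-1)
            doc2 = np.asarray(tdMatrix[j].todense()).reshape(-1)
            sims[i, j] = fsim(doc1, doc2)
    print sims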
