Computing cosine and Jaccard similarity over a document collection in R



I need to compute pairwise similarities between nearly 14,000 documents, but the code takes far too long to run. Is there another way to do the same job faster?

Here is my code:

library(openxlsx) #createWorkbook/addWorksheet/writeData/saveWorkbook
library(tm)       #Corpus/tm_map/DocumentTermMatrix
library(proxy)    #dist() with "cosine" and "jaccard" methods

wb=createWorkbook() #create workbook
addWorksheet(wb,"absSim") #create worksheet
listoffiles=list.files() #get list of documents from current working directory
fileslength=length(listoffiles) #no of documents in directory
rownum=1 #next worksheet row to write results to
for(i in 1:(fileslength-1)) #parentheses needed: 1:fileslength-1 parses as (1:fileslength)-1
{
  d1=readLines(listoffiles[i])# read first document 
  k=i+1
  for(j in k:fileslength)
  {
   d2=readLines(listoffiles[j]) #read second document
   #make a vector of two documents
   myvector=c(d1,d2)
   #making corpus of two documents
   mycorpus=Corpus(VectorSource(myvector))
   #preprocessing of corpus
   mycorpus=tm_map(mycorpus,removePunctuation)
   mycorpus=tm_map(mycorpus,removeNumbers)
   mycorpus=tm_map(mycorpus,stripWhitespace)
   mycorpus=tm_map(mycorpus,content_transformer(tolower)) #wrap base tolower so tm_map keeps the corpus structure
   mycorpus=tm_map(mycorpus,removeWords,stopwords("english"))
   mycorpus=tm_map(mycorpus,removeWords,"x")
   #make a document term matrix now
   dtm=as.matrix(DocumentTermMatrix(mycorpus))
   #compute distance of both documents using proxy package
   cdist=as.matrix(dist(dtm,method = "cosine"))
   jdist=as.matrix(dist(dtm,method = "jaccard"))
   #compute similarity
   csim=1-cdist
   jsim=1-jdist
   #get similarity of both documents
   cos=csim[1,2]
   jac=jsim[1,2]
   if(cos>0 | jac>0)
   {
     writeData(wb,"absSim",cos,startCol = 1,startRow = rownum)
     writeData(wb,"absSim",jac,startCol = 2,startRow = rownum)
     saveWorkbook(wb,"abstractSimilarity.xlsx",overwrite = TRUE) #note: saving the whole workbook on every match is expensive
     rownum=rownum+1
   }
  }
}

When I run this code, processing just the first document against the rest takes about 2 hours. Any ideas on how to compute cosine and Jaccard similarity faster?

You can try the following code. It is a very simplified version without any cleaning or pruning, just to demonstrate how to use text2vec. I also used the tokenizers package for tokenization, since it is faster than the tokenizer in text2vec. To generate test data I used the sampling function provided by Zach for this question/answer. On my machine it completes in less than a minute. Of course, other similarity measures or the integration of preprocessing could be added. I hope this is what you are looking for.

library(text2vec)
library(tokenizers)
#helper (from Zach's answer): paste n random elements of x into one string
samplefun <- function(n, x, collapse){
  paste(sample(x, n, replace=TRUE), collapse=collapse)
}
#10000 random "words" of Poisson-distributed length
words <- sapply(rpois(10000, 8) + 1, samplefun, letters, '')
#14000 documents, each with 100 lines (pasted together) of several words
docs <- sapply(1:14000, function(x) {
  paste(sapply(rpois(100, 5) + 1, samplefun, words, ' '), collapse = ". ")
})
iterator <- itoken(docs
                   ,tokenizer = function(x) tokenizers::tokenize_words(x, lowercase = FALSE)
                   ,progressbar = FALSE
                   )
vocabulary <- create_vocabulary(iterator)
dtm <- create_dtm(iterator, vocab_vectorizer(vocabulary))
#dtm
#14000 x 10000 sparse Matrix of class "dgCMatrix"
#....
#use, e.g., the first and second half of the dtm as document sets
similarity <- sim2(dtm[1:(nrow(dtm)/2),]
                   , dtm[(nrow(dtm)/2+1):nrow(dtm),]
                   , method = "jaccard"
                   , norm = "none")
dim(similarity)
#[1] 7000 7000
