Comparing words from different files



I'm new to Python and have run into a problem. I wrote code to get the total word count and the unique word count of several files (in this case, the .txt files are chapters of a book). Sample text from file 1: "At what period the causes of variability, whatever they may be, generally act has long been disputed; whether during the early or late period of development of the embryo, or at the instant of conception." Sample text from file 2: "Finally, then, varieties have the same general characters as species, for they cannot be distinguished from species, except, firstly, by the discovery of intermediate linking forms."

I couldn't find any examples online of how to compare words between files. I need to determine the number of words shared between the files and the number of words unique to each file (relative to the other). My final output should consist of 7 numbers: the total word count of file 1 and of file 2, the unique word count of file 1 and of file 2, the # of words shared between file 1 and file 2, the # of words in file 1 but not in file 2, and finally the # of words in file 2 but not in file 1. I know I have to use set() for this, but I don't understand how.

import glob
from collections import Counter
path = "c-darwin-chapter-?.txt"
wordcount = {}
for filename in glob.glob(path):
with open("c-darwin-chapter-1.txt", 'r') as f1, open("c-darwin-chapter-2.txt", 'r') as f2:
f1_word_list = Counter(f1.read().replace(',','').replace('.','').replace("'",'').replace('!','').replace('&','').replace(';','').replace('(','').replace(')','').replace(':','').replace('?','').lower().split())
print("Total word count per file: ", sum(f1_word_list.values()))
print("Total unique word count: ", len(f1_word_list))
f2_word_list = Counter(f2.read().replace(',','').replace('.','').replace("'",'').replace('!','').replace('&','').replace(';','').replace('(','').replace(')','').replace(':','').replace('?','').lower().split())
print("Total word count per file: ", sum(f2_word_list.values()))
print("Total unique word count: ", len(f2_word_list))
#if/main commented out but final code must use if/main and loop
#if __name__ == '__main__':
#   main()

Desired output:

Total word count
Chapter1 = 11615
Chapter2 = 4837
Unique word count
Chapter1 = 1991
Chapter2 = 1025
Words in Chapter1 and Chapter2: 623
Words in Chapter1 not in Chapter2: 1368
Words in Chapter2 not in Chapter1: 402

You read both files and turn the text you read into lists/sets. With sets you can then use the set operators to compute the intersection/difference between them:

s.intersection(t)    s & t    new set with elements common to s and t  
s.difference(t)      s - t    new set with elements in s but not in t

An explanatory table of the set operations can be found in the docs for 2.x; it also applies to 3.7.

Demo:

file1 = "This is some text in some file that you can preprocess as you " +
"like. This is some text in some file that you can preprocess as you like."
file2 = "this is other text about animals and flowers and flowers and " +
"animals but not animal-flowers that has to be processed as well"
# split into list - no .lower().replace(...) - you solved that already
list_f1 = file1.split() 
list_f2 = file2.split()
# create sets from list (case sensitive)
set_f1 = set( list_f1 )
set_f2 = set( list_f2 )
print(f"Words: {len(list_f1)} vs {len(list_f2)} Unique {len(set_f1)} vs {len(set_f2)}.")
# difference
print(f"Only in 1: {set_f1-set_f2} [{len(set_f1-set_f2)}]")
# intersection
print(f"In both {set_f1&set_f2} [{len(set_f1&set_f2)}]")
# difference the other way round
print(f"Only in 2:{set_f2-set_f1} [{len(set_f2-set_f1)}]")

Output:

Words: 28 vs 22 Unique 12 vs 18.
Only in 1: {'like.', 'in', 'you', 'can', 'file', 'This', 'preprocess', 'some'} [8]
In both {'is', 'that', 'text', 'as'} [4]
Only in 2:{'animals', 'not', 'but', 'animal-flowers', 'to', 'processed',
'has', 'be', 'and', 'well', 'this', 'about', 'other', 'flowers'} [14]

You already handle reading the files and "unifying" them to lowercase - I left that out here. The output uses Python 3.6's string interpolation syntax, see PEP 498.
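
To tie this back to your original code, here is a minimal sketch (assuming your chapter files match the glob pattern c-darwin-chapter-?.txt from your snippet) that keeps your Counter-based totals, adds the set comparison, and uses the loop plus if __name__ == '__main__' structure your comment asks for. The helper names clean_words and main are just illustrative.

import glob
import string
from collections import Counter

def clean_words(path):
    """Read one file, lowercase it, strip punctuation and return the word list."""
    with open(path, 'r') as f:
        text = f.read().lower()
    # same idea as your chained .replace() calls, done in one pass;
    # note: string.punctuation also removes hyphens, unlike your replaces
    text = text.translate(str.maketrans('', '', string.punctuation))
    return text.split()

def main():
    # one Counter per chapter matching the pattern
    files = sorted(glob.glob("c-darwin-chapter-?.txt"))
    counts = {name: Counter(clean_words(name)) for name in files}

    for name, counter in counts.items():
        print(f"{name}: total words = {sum(counter.values())}, "
              f"unique words = {len(counter)}")

    # set comparison between the first two chapters
    if len(files) >= 2:
        set1, set2 = set(counts[files[0]]), set(counts[files[1]])
        print(f"Words in both: {len(set1 & set2)}")
        print(f"Words only in {files[0]}: {len(set1 - set2)}")
        print(f"Words only in {files[1]}: {len(set2 - set1)}")

if __name__ == '__main__':
    main()

Because the punctuation handling differs slightly from your chained .replace() calls, the exact numbers may not match your expected output character for character; adjust the cleaning step to taste.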
