Python multiprocessing file reading

I have a list of keywords, and I want to check whether any of them appears in a file containing more than 100,000 domain names. To speed up processing, I want to use multiprocessing so that each keyword can be checked in parallel.

My code doesn't seem to be working well, because running it single-process is much faster. What's wrong? :(

import time
from multiprocessing import Pool

def multiprocessing_func(keyword):
    # File containing more than 100k domain names
    # URL: https://raw.githubusercontent.com/CERT-MZ/projects/master/Domain-squatting/domain-names.txt
    file_domains = open("domain-names.txt", "r")
    for domain in file_domains:
        if keyword in domain:
            print("similar domain identified:", domain)

    # Rewind the file, start from the beginning
    file_domains.seek(0)

if __name__ == '__main__':
    starttime = time.time()
    # Keywords to check
    keywords = ["google", "facebook", "amazon", "microsoft", "netflix"]
    # Create a multiprocessing Pool
    pool = Pool()
    for keyword in keywords:
        print("Checking keyword:", keyword)

        # Without multiprocessing pool
        #multiprocessing_func(keyword)

        # With multiprocessing pool
        pool.map(multiprocessing_func, keyword)
    # Total run time
    print('That took {} seconds'.format(time.time() - starttime))

Think about why this program:

import multiprocessing as mp

def work(keyword):
    print("working on", repr(keyword))

if __name__ == "__main__":
    with mp.Pool(4) as pool:
        pool.map(work, "google")

prints

working on 'g'
working on 'o'
working on 'o'
working on 'g'
working on 'l'
working on 'e'

map() works on a sequence, and a string is a sequence. Instead of calling map() inside a loop, you probably want to call it just once, with keywords (the whole list) as the second argument.
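For concreteness, here is a minimal sketch of the corrected driver. The scanning function is condensed from the question; the with-statements around open() and Pool() are my tidy-up, not part of the original. The key change is the single pool.map(multiprocessing_func, keywords) call:

import time
from multiprocessing import Pool

def multiprocessing_func(keyword):
    # Scan the domain file for one keyword, as in the question.
    with open("domain-names.txt", "r") as file_domains:
        for domain in file_domains:
            if keyword in domain:
                print("similar domain identified:", domain)

if __name__ == '__main__':
    starttime = time.time()
    keywords = ["google", "facebook", "amazon", "microsoft", "netflix"]
    # One map() call over the whole list: each worker now receives a whole
    # keyword such as "google", not a single character of it.
    with Pool() as pool:
        pool.map(multiprocessing_func, keywords)
    print('That took {} seconds'.format(time.time() - starttime))

With map() called once per run, the explicit seek(0) rewind also becomes unnecessary: every invocation of the function opens its own file handle and reads the file from the start.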
