Reading and processing the file without multiprocessing takes 0.16 seconds, but with multiprocessing it takes 0.36 seconds. Why does the multiprocessing version take longer than the single-process one?
The code below reads a file, splits the lines into 10 parts, and compares the lines.
Code without multiprocessing:
import time

result = []

def get_match(lines, num):
    outer_lines = lines[:num]
    inner_lines = lines[1:]
    for f1 in outer_lines:
        # print('f1', f1)
        for f2 in inner_lines:
            result.append(f1)
            result.append(f2)
            # print('f2', f2)
            # print('compare file line by line')
            # print('store int into global result variable')

if __name__ == '__main__':
    atime = time.time()
    split_n = 10
    with open('10000.txt', 'r') as file:
        line1 = file.readlines()
    line2 = line1[split_n:]
    line3 = line2[split_n:]
    line4 = line3[split_n:]
    line5 = line4[split_n:]
    line6 = line5[split_n:]
    line7 = line6[split_n:]
    line8 = line7[split_n:]
    line9 = line8[split_n:]
    line10 = line9[split_n:]
    t1 = get_match(line1, split_n)
    t2 = get_match(line2, split_n)
    t3 = get_match(line3, split_n)
    t4 = get_match(line4, split_n)
    t5 = get_match(line5, split_n)
    t6 = get_match(line6, split_n)
    t7 = get_match(line7, split_n)
    t8 = get_match(line8, split_n)
    t9 = get_match(line9, split_n)
    t10 = get_match(line10, split_n)
    btime = time.time()
    print(btime - atime)
Code with multiprocessing:
from multiprocessing import Process
import time

result = []

def get_match(lines, num):
    outer_lines = lines[:num]
    inner_lines = lines[1:]
    for f1 in outer_lines:
        for f2 in inner_lines:
            result.append(f1)
            result.append(f2)

if __name__ == '__main__':
    atime = time.time()
    split_n = 10
    with open('test.txt', 'r') as file:
        line1 = file.readlines()
    line2 = line1[split_n:]
    line3 = line2[split_n:]
    line4 = line3[split_n:]
    line5 = line4[split_n:]
    line6 = line5[split_n:]
    line7 = line6[split_n:]
    line8 = line7[split_n:]
    line9 = line8[split_n:]
    line10 = line9[split_n:]
    p1 = Process(target=get_match, args=(line1, split_n))
    p1.start()
    p2 = Process(target=get_match, args=(line2, split_n))
    p2.start()
    p3 = Process(target=get_match, args=(line3, split_n))
    p3.start()
    p4 = Process(target=get_match, args=(line4, split_n))
    p4.start()
    p5 = Process(target=get_match, args=(line5, split_n))
    p5.start()
    p6 = Process(target=get_match, args=(line6, split_n))
    p6.start()
    p7 = Process(target=get_match, args=(line7, split_n))
    p7.start()
    p8 = Process(target=get_match, args=(line8, split_n))
    p8.start()
    p9 = Process(target=get_match, args=(line9, split_n))
    p9.start()
    p10 = Process(target=get_match, args=(line10, split_n))
    p10.start()
    procs = [p1, p2, p3, p4, p5, p6, p7, p8, p9, p10]
    # wait for all processes to complete
    for proc in procs:
        proc.join()
    btime = time.time()
    print(btime - atime)
Using Process objects does not automatically buy you a speedup. In the second example each child does the same tiny amount of work as the plain function calls in the first, but every call now also pays the overhead of creating a new process, pickling the argument lists to send to it, and tearing the process down again. For a workload that only takes 0.16 seconds in total, that overhead dominates the runtime.
If you want actual multiprocessing (i.e., the chunks processed concurrently), I would suggest using map. See the documentation here: https://docs.python.org/2/library/multiprocessing.html
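With a Pool, you submit one task per chunk and the pool distributes them across worker processes, collecting the return values for you. A minimal sketch (the sample data and this pairwise get_match are illustrative stand-ins for the original file logic; note that a module-level result list is not shared between processes, so each worker must return its results):

```python
from multiprocessing import Pool
import time

def get_match(chunk):
    # Pairwise comparison within one chunk; results are returned,
    # not appended to a global (globals are not shared across processes).
    pairs = []
    for i, f1 in enumerate(chunk):
        for f2 in chunk[i + 1:]:
            pairs.append((f1, f2))
    return pairs

if __name__ == '__main__':
    lines = ['line %d\n' % i for i in range(100)]  # stand-in for file.readlines()
    split_n = 10
    chunks = [lines[i:i + split_n] for i in range(0, len(lines), split_n)]

    start = time.time()
    with Pool() as pool:
        results = pool.map(get_match, chunks)  # one task per chunk, run concurrently
    print(time.time() - start)
```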
Creating and destroying processes consumes a lot of resources, such as CPU, memory, and time. If the data is not that large, using multiprocessing is not worthwhile. I would suggest using scientific computing packages such as numpy and scipy instead.
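For example, if the goal is to find identical lines, numpy can build the whole pairwise comparison matrix in one vectorized operation instead of nested Python loops (the sample data below is made up for illustration):

```python
import numpy as np

lines = np.array(['apple', 'banana', 'apple', 'cherry'])  # stand-in for file lines

# Broadcasting compares every line against every other line at once
matches = lines[:, None] == lines[None, :]

# Keep only the upper triangle (k=1) to drop self-matches and duplicate pairs
i, j = np.where(np.triu(matches, k=1))

print(list(zip(i.tolist(), j.tolist())))  # → [(0, 2)]: indices of matching lines
```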