I want to read a large binary file and split it into 6-byte chunks. For example, I can currently read a 1 GB binary file in 82 seconds, but that is very slow. What is the best way to reach maximum speed?

Note that I cannot use struct, because the chunk size I chose is not a power of 2 (it is 6 bytes).
with open(file, "rb") as infile:
    data_arr = []
    start = time()
    while True:
        data = infile.read(6)
        if not data: break
        data_arr.append(data)
You have a few different options. Your main problem is that, because your chunks are so small (6 bytes), there is a lot of overhead in collecting the chunks and in garbage collection.

There are two main ways around this:

- Load the whole file into memory, then split it into chunks. This is the fastest approach, but the larger your file, the more likely you are to run out of memory.
- Load one chunk into memory at a time, process it, then move on to the next chunk. This is no faster overall, but saves time up front, since you don't have to wait for the whole file to be read before you start processing.
- A combination of 1 and 2. This is left as an exercise for the reader, as it takes a fair amount of experimentation to arrive at code that works both quickly and correctly.
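The combination in option 3 can be sketched as a generator that reads the file in large blocks (the bulk I/O of option 1) but yields fixed-size 6-byte pieces one at a time (the streaming of option 2). The name `read_hybrid` and the block size are illustrative assumptions, not measured optima:

```python
from functools import partial

CHUNK = 6
BLOCK_SIZE = CHUNK * 4096  # ~24 KB per read call; must be a multiple of CHUNK

def read_hybrid(filename):
    # Sketch of option 3: few large reads, small pieces yielded lazily.
    with open(filename, "rb") as infile:
        # iter() with a sentinel keeps calling infile.read(BLOCK_SIZE)
        # until it returns b"" at end of file
        for block in iter(partial(infile.read, BLOCK_SIZE), b""):
            for i in range(0, len(block), CHUNK):
                yield block[i:i + CHUNK]
```

Because BLOCK_SIZE is a multiple of 6, chunk boundaries never straddle two blocks; only the final piece of the file may be shorter than 6 bytes.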
Some code with timing comparisons:
import timeit

def read_original(filename):
    with open(filename, "rb") as infile:
        data_arr = []
        while True:
            data = infile.read(6)
            if not data:
                break
            data_arr.append(data)
        return data_arr

# the bigger the file, the more likely this is to cause python to crash
def read_better(filename):
    with open(filename, "rb") as infile:
        # read everything into memory at once
        data = infile.read()
        # separate string into 6-byte chunks
        data_arr = [data[i:i+6] for i in range(0, len(data), 6)]
        return data_arr

# no faster than the original, but allows you to work on each piece without loading the whole file into memory
def read_iter(filename):
    with open(filename, "rb") as infile:
        data = infile.read(6)
        while data:
            yield data
            data = infile.read(6)

def main():
    # 93.8688215 s
    tm = timeit.timeit(stmt="read_original('test/oraociei12.dll')", setup="from __main__ import read_original", number=10)
    print(tm)
    # 85.69337399999999 s
    tm = timeit.timeit(stmt="read_better('test/oraociei12.dll')", setup="from __main__ import read_better", number=10)
    print(tm)
    # 103.0508528 s
    tm = timeit.timeit(stmt="[x for x in read_iter('test/oraociei12.dll')]", setup="from __main__ import read_iter", number=10)
    print(tm)

if __name__ == '__main__':
    main()
This way is much faster.
import sys
from functools import partial

SIX = 6
MULTIPLIER = 30000
SIX_COUNT = SIX * MULTIPLIER

def do(data):
    # read SIX_COUNT bytes at a time, then split each block into 6-byte pieces
    for chunk in iter(partial(data.read, SIX_COUNT), b""):
        six_list = [chunk[i:i+SIX] for i in range(0, len(chunk), SIX)]

if __name__ == "__main__":
    args = sys.argv[1:]
    for arg in args:
        with open(arg, 'rb') as data:
            do(data)
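Note that `do()` above discards `six_list` after each block, which is fine for timing. If you actually need the chunks, a variant that keeps them (the name `collect` and the decision to return one flat list are assumptions about your use case) might look like:

```python
from functools import partial

SIX = 6
SIX_COUNT = SIX * 30000  # bytes per read call; a multiple of SIX

def collect(stream):
    # Hypothetical variant of do() that keeps every 6-byte piece.
    out = []
    for chunk in iter(partial(stream.read, SIX_COUNT), b""):
        out.extend(chunk[i:i + SIX] for i in range(0, len(chunk), SIX))
    return out
```

It works on any binary stream with a `read` method, so it can be exercised with `io.BytesIO` instead of a file on disk.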