Why is reading one byte 20x slower than reading 2, 3, 4, ... bytes from a file?



I have been trying to understand the tradeoff between read and seek. For small "jumps", reading unneeded data is faster than skipping it with seek.

While timing different read/seek chunk sizes to find the tipping point, I came across an odd phenomenon: read(1) is about 20x slower than read(2), read(3), etc. The effect is the same for different read methods, e.g. read() and readinto().

Why is this the case?
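For reference, a minimal sketch of the two access patterns being compared; the file name 'test.bin' and the chunk_size value are placeholders, and the file is assumed to exist:

chunk_size = 7

with open('test.bin', 'rb') as f:
    f.read(chunk_size)            # strategy 1: read the unwanted bytes and discard them
    wanted = f.read(chunk_size)

with open('test.bin', 'rb') as f:
    f.seek(chunk_size, 1)         # strategy 2: skip the unwanted bytes with a relative seek
    wanted = f.read(chunk_size)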

In the timing results, look for the following line (about 2/3 of the way down):

2 x buffered 1 byte readinto bytearray

Environment:

Python 3.5.2 |Continuum Analytics, Inc.| (default, Jul  5 2016, 11:45:57) [MSC v.1900 32 bit (Intel)]

Timing results:

Non-cachable binary data ingestion (file object blk_size = 8192):
- 2 x buffered 0 byte readinto bytearray:
robust mean: 6.01 µs +/- 377 ns
min: 3.59 µs
- Buffered 0 byte seek followed by 0 byte readinto:
robust mean: 9.31 µs +/- 506 ns
min: 6.16 µs
- 2 x buffered 4 byte readinto bytearray:
robust mean: 14.4 µs +/- 6.82 µs
min: 2.57 µs
- 2 x buffered 7 byte readinto bytearray:
robust mean: 14.5 µs +/- 6.76 µs
min: 3.08 µs
- 2 x buffered 2 byte readinto bytearray:
robust mean: 14.5 µs +/- 6.77 µs
min: 3.08 µs
- 2 x buffered 5 byte readinto bytearray:
robust mean: 14.5 µs +/- 6.76 µs
min: 3.08 µs
- 2 x buffered 3 byte readinto bytearray:
robust mean: 14.5 µs +/- 6.73 µs
min: 2.57 µs
- 2 x buffered 49 byte readinto bytearray:
robust mean: 14.5 µs +/- 6.72 µs
min: 2.57 µs
- 2 x buffered 6 byte readinto bytearray:
robust mean: 14.6 µs +/- 6.76 µs
min: 3.08 µs
- 2 x buffered 343 byte readinto bytearray:
robust mean: 15.3 µs +/- 6.43 µs
min: 3.08 µs
- 2 x buffered 2401 byte readinto bytearray:
robust mean: 138 µs +/- 247 µs
min: 4.11 µs
- Buffered 7 byte seek followed by 7 byte readinto:
robust mean: 278 µs +/- 333 µs
min: 15.4 µs
- Buffered 3 byte seek followed by 3 byte readinto:
robust mean: 279 µs +/- 333 µs
min: 14.9 µs
- Buffered 1 byte seek followed by 1 byte readinto:
robust mean: 279 µs +/- 334 µs
min: 15.4 µs
- Buffered 2 byte seek followed by 2 byte readinto:
robust mean: 279 µs +/- 334 µs
min: 15.4 µs
- Buffered 4 byte seek followed by 4 byte readinto:
robust mean: 279 µs +/- 334 µs
min: 15.4 µs
- Buffered 49 byte seek followed by 49 byte readinto:
robust mean: 281 µs +/- 336 µs
min: 14.9 µs
- Buffered 6 byte seek followed by 6 byte readinto:
robust mean: 281 µs +/- 337 µs
min: 15.4 µs
- 2 x buffered 1 byte readinto bytearray:
robust mean: 282 µs +/- 334 µs
min: 17.5 µs
- Buffered 5 byte seek followed by 5 byte readinto:
robust mean: 282 µs +/- 338 µs
min: 15.4 µs
- Buffered 343 byte seek followed by 343 byte readinto:
robust mean: 283 µs +/- 340 µs
min: 15.4 µs
- Buffered 2401 byte seek followed by 2401 byte readinto:
robust mean: 309 µs +/- 373 µs
min: 15.4 µs
- Buffered 16807 byte seek followed by 16807 byte readinto:
robust mean: 325 µs +/- 423 µs
min: 15.4 µs
- 2 x buffered 16807 byte readinto bytearray:
robust mean: 457 µs +/- 558 µs
min: 16.9 µs
- Buffered 117649 byte seek followed by 117649 byte readinto:
robust mean: 851 µs +/- 1.08 ms
min: 15.9 µs
- 2 x buffered 117649 byte readinto bytearray:
robust mean: 1.29 ms +/- 1.63 ms
min: 18 µs

Benchmark code:

from _utils import BenchmarkResults

from timeit import timeit, repeat
import gc
import os
from contextlib import suppress
from math import floor
from random import randint

### Configuration
FILE_NAME = 'test.bin'
r = 5000
n = 100
reps = 1
chunk_sizes = list(range(7)) + [7**x for x in range(1, 7)]
results = BenchmarkResults(description='Non-cachable binary data ingestion')

### Setup
FILE_SIZE = int(100e6)

# remove left over test file
with suppress(FileNotFoundError):
    os.unlink(FILE_NAME)

# determine how large a file needs to be to not fit in memory
gc.collect()
try:
    while True:
        data = bytearray(FILE_SIZE)
        del data
        FILE_SIZE *= 2
        gc.collect()
except MemoryError:
    FILE_SIZE *= 2
    print('Using file with {} GB'.format(FILE_SIZE / 1024**3))

# check enough data in file
required_size = sum(chunk_sizes)*2*2*reps*r
print('File size used: {} GB'.format(required_size / 1024**3))
assert required_size <= FILE_SIZE

# create test file
with open(FILE_NAME, 'wb') as file:
    buffer_size = int(10e6)
    data = bytearray(buffer_size)
    for i in range(int(FILE_SIZE / buffer_size)):
        file.write(data)

# read file once to try to force it into system cache as much as possible
from io import DEFAULT_BUFFER_SIZE
buffer_size = 10*DEFAULT_BUFFER_SIZE
buffer = bytearray(buffer_size)
with open(FILE_NAME, 'rb') as file:
    bytes_read = True
    while bytes_read:
        bytes_read = file.readinto(buffer)
    blk_size = file.raw._blksize

results.description += ' (file object blk_size = {})'.format(blk_size)

file = open(FILE_NAME, 'rb')

### Benchmarks
setup = """
# random seek to avoid advantageous starting position biasing results
file.seek(randint(0, file.raw._blksize), 1)
"""

read_read = """
file.read(chunk_size)
file.read(chunk_size)
"""

seek_seek = """
file.seek(buffer_size, 1)
file.seek(buffer_size, 1)
"""

seek_read = """
file.seek(buffer_size, 1)
file.read(chunk_size)
"""

read_read_timings = {}
seek_seek_timings = {}
seek_read_timings = {}
for chunk_size in chunk_sizes:
    read_read_timings[chunk_size] = []
    seek_seek_timings[chunk_size] = []
    seek_read_timings[chunk_size] = []

for j in range(r):
    #file.seek(0)
    for chunk_size in chunk_sizes:
        buffer = bytearray(chunk_size)
        read_read_timings[chunk_size].append(timeit(read_read, setup, number=reps, globals=globals()))
        #seek_seek_timings[chunk_size].append(timeit(seek_seek, setup, number=reps, globals=globals()))
        seek_read_timings[chunk_size].append(timeit(seek_read, setup, number=reps, globals=globals()))

for chunk_size in chunk_sizes:
    results['2 x buffered {} byte readinto bytearray'.format(chunk_size)] = read_read_timings[chunk_size]
    #results['2 x buffered {} byte seek'.format(chunk_size)] = seek_seek_timings[chunk_size]
    results['Buffered {} byte seek followed by {} byte readinto'.format(chunk_size, chunk_size)] = seek_read_timings[chunk_size]

### Cleanup
file.close()
os.unlink(FILE_NAME)
results.show()
results.save()

Edit 2020-02-24:

@finefoot asked for the _utils package, so that the code above can be run.

from collections import OrderedDict
from math import ceil
from statistics import mean, stdev
from contextlib import suppress
import os
import inspect


class BenchmarkResults(OrderedDict):
    def __init__(self, *args, description='Benchmark Description', **kwArgs):
        self.description = description
        return super(BenchmarkResults, self).__init__(*args, **kwArgs)

    def __repr__(self):
        """Shows the results for the benchmarks in order of ascending duration"""
        characteristic_durations = []
        for name, timings in self.items():
            try:
                characteristic_durations.append(_robust_stats(timings)[0])
            except ValueError:
                if len(timings) > 1:
                    characteristic_durations.append(mean(timings))
                else:
                    characteristic_durations.append(timings[0])
        indx = _argsort(characteristic_durations)
        repr = '{}:\n'.format(self.description)
        items = list(self.items())
        for i in indx:
            name, timings = items[i]
            repr += '- {}:\n'.format(name)
            try:
                stats = _robust_stats(timings)
                repr += '      robust mean: {} +/- {}\n'.format(_units(stats[0]), _units(stats[1]))
            except ValueError:
                repr += '      timings: {}\n'.format(', '.join(map(_units, timings)))
            if len(timings) > 1:
                repr += '      min: {}\n'.format(_units(min(timings)))
        return repr

    def show(self):
        print(self)

    def save(self):
        caller = inspect.stack()[1]
        filename = os.path.splitext(caller.filename)[0] + '.log'
        with open(filename, 'w') as logfile:
            logfile.write(repr(self))


def _units(seconds, significant_figures=3):
    # check from the largest to the smallest unit so every branch is reachable
    fmt = '{:.%sg} {}' % significant_figures
    if seconds > 3600:
        return fmt.format(seconds/3600, 'hrs')
    elif seconds > 60:
        return fmt.format(seconds/60, 'min')
    elif seconds > 1:
        return fmt.format(seconds, 's')
    elif seconds > 1e-3:
        return fmt.format(seconds*1e3, 'ms')
    elif seconds > 1e-6:
        return fmt.format(seconds*1e6, 'µs')
    else:
        return fmt.format(seconds*1e9, 'ns')


def _robust_stats(timings, fraction_to_use=0.8):
    if len(timings) < 5:
        raise ValueError('To calculate a robust mean, you need at least 5 timing results')
    elts_to_prune = int(len(timings) * (1 - fraction_to_use))
    # prune at least the highest and the lowest result
    elts_to_prune = elts_to_prune if elts_to_prune > 2 else 2
    # round to even number --> symmetric pruning
    offset = ceil(elts_to_prune / 2)
    # sort the timings
    timings.sort()
    # prune the required fraction of the elements
    timings = timings[offset:-offset]
    return mean(timings), stdev(timings)


def _argsort(seq):
    # http://stackoverflow.com/questions/3071415/efficient-method-to-calculate-the-rank-vector-of-a-list-in-python
    return sorted(range(len(seq)), key=seq.__getitem__)


if __name__ == '__main__':
    pass
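As a quick sanity check, here is a minimal usage sketch of BenchmarkResults with made-up timing values (in seconds), mirroring how the benchmark script exercises it:

results = BenchmarkResults(description='Demo')
results['fast case'] = [1.0e-6, 1.1e-6, 0.9e-6, 1.05e-6, 0.95e-6]
results['slow case'] = [1.0e-3, 1.2e-3, 0.8e-3, 1.1e-3, 0.9e-3]
results.show()   # prints both entries sorted by robust mean, fastest first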

I was able to reproduce the issue with your code. However, I noticed the following: can you verify that the issue disappears if you replace

file.seek(randint(0, file.raw._blksize), 1)

with

file.seek(randint(0, file.raw._blksize), 0)

in setup? I think you might be running out of data at some point during the 1-byte reads. Reads of 2 bytes, 3 bytes, etc. then have no data left to read at all, which is why they are so much faster.
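The difference between the two calls is the whence argument of seek: 0 seeks to an absolute offset from the start of the file, while 1 seeks relative to the current position, so the repeated relative seeks in setup march steadily toward the end of the file. A quick illustration of the standard io semantics on an in-memory stream:

import io

buf = io.BytesIO(b'0123456789')
buf.seek(3, 0)     # whence=0: absolute position        -> 3
buf.seek(3, 1)     # whence=1: relative to current pos  -> 6
buf.seek(-2, 2)    # whence=2: relative to the end      -> 8
print(buf.tell())  # 8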

Reading byte-by-byte from a file handle is generally slower than reading in chunks.

In general, each read() call corresponds to a C read() call in Python. The overall result is one system call requesting the next character. For a 2 KB file, this means 2000 calls into the kernel; each one requires a function call, the request to the kernel, then waiting for the response, which is passed back through the return.

The most noteworthy part here is awaiting the response: the system call blocks until your call is acknowledged in a queue, so you have to wait.

The fewer calls, the better, so more bytes per call is faster; this is why buffered I/O is in such common use.
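As a rough sketch of the difference in syscall count (assuming a test.bin like the one from the benchmark above exists and holds at least 2000 bytes):

import os

fd = os.open('test.bin', os.O_RDONLY)
data = bytearray()
for _ in range(2000):
    data += os.read(fd, 1)        # one read() syscall per iteration
os.lseek(fd, 0, os.SEEK_SET)      # rewind to the start
data2 = os.read(fd, 2000)         # a single syscall for all 2000 bytes
os.close(fd)
assert bytes(data) == data2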

In Python, buffering can be provided by io.BufferedReader, or via the buffering keyword argument to open() for files.
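For example (a sketch, reusing the test.bin name from the question):

import io

raw = open('test.bin', 'rb', buffering=0)             # io.FileIO: unbuffered
buffered = io.BufferedReader(raw, buffer_size=8192)
buffered.read(1)   # fills the 8 KB buffer once; later small reads are served from it
buffered.close()   # also closes the underlying raw file

# Equivalent: open() wraps binary files in a BufferedReader by default
with open('test.bin', 'rb', buffering=8192) as f:
    f.read(1)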

I have seen a similar situation while working with Arduinos interfacing with EEPROM. Basically, in order to write or read from a chip (or data structure), you have to send a write/read enable command, send a starting address, and then fetch the first character. If you fetch multiple bytes, however, most chips will auto-increment their target address register. Thus there is some fixed overhead for starting a read/write operation. It's the difference between:

  • Start communication
  • Send read enable
  • Send read command
  • Send address 1
  • Get data from target 1
  • End communication
  • Start communication
  • Send read enable
  • Send read command
  • Send address 2
  • Get data from target 2
  • End communication

and:

  • Start communication
  • Send read enable
  • Send read command
  • Send address 1
  • Get data from target 1
  • Get data from target 2
  • End communication

Simply put, in terms of machine instructions, reading multiple bits/bytes at once clears out a lot of overhead. It's even worse when some chips require you to idle for a few clock cycles after the read/write enable is sent, to let a mechanical process physically move a transistor into place to enable the read or write.
