So I am using scipy's `fftconvolve` to filter large images, and I want to parallelize the different filters I apply to a single image. For the parallelization I would like to use joblib. However, I am puzzled by two results:

- with the `multiprocessing` backend, the task is much slower (1.5x slower);
- with the `threading` backend, the task is faster (25% faster).

I am surprised by these two results, as I was convinced that the convolution is CPU-bound.

Here is the code I used in a jupyter notebook to compute the runtimes:
from joblib import Parallel, delayed
import numpy as np
from scipy.signal import fftconvolve
im_size = (512, 512)
filter_size = tuple(s-1 for s in im_size)
n_filters = 3
image = np.random.rand(*im_size)
filters = [np.random.rand(*filter_size) for i in range(n_filters)]
%%timeit
s = np.sum(
    Parallel(n_jobs=n_filters, backend='multiprocessing')(
        delayed(fftconvolve)(image, f) for f in filters
    )
)
283 ms ± 12.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%%timeit
s = np.sum(
    Parallel(n_jobs=n_filters, backend='threading')(
        delayed(fftconvolve)(image, f) for f in filters
    )
)
142 ms ± 15.9 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
%%timeit
s = np.sum([fftconvolve(image, f) for f in filters])
198 ms ± 2.69 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
I also tried different things, such as putting the image in a memmap or reducing the number of pre-dispatched jobs, but nothing fundamentally changed the results.
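For reference, a minimal sketch of those two variations (the `pre_dispatch` value and the memmap handling here are illustrative, not my exact code):

import os
import tempfile

# Memory-map the image so the worker processes read it from disk
# instead of receiving a pickled copy over the pipe.
mmap_path = os.path.join(tempfile.mkdtemp(), 'image.mmap')
image_mmap = np.memmap(mmap_path, dtype=image.dtype, shape=image.shape, mode='w+')
image_mmap[:] = image

s = np.sum(
    Parallel(n_jobs=n_filters, backend='multiprocessing', pre_dispatch='1*n_jobs')(
        delayed(fftconvolve)(image_mmap, f) for f in filters
    )
)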
Why doesn't multiprocessing speed up the computation, when multithreading does?
The problem with benchmarking parallel processing is that you have to properly account for the overhead your code incurs if you want to reach the right conclusion. There are 3 sources of overhead when using parallel processing:

- Spawning the threads or processes: this is done each time you call `Parallel`, except if you rely on a managed `Parallel` object (with the `with` context) or if you use the `loky` backend (illustrated in the sketch after this list). See here for more info.
- Importing modules in fresh interpreters: for backends that rely on fresh processes (when the start method is not `fork`), all the modules need to be re-imported, which can cause overhead.
- Communication between the processes: when using processes (so not with `backend='threading'`), you need to communicate the arrays to each worker. The communication can slow down the computation, especially for short tasks with large inputs such as `fftconvolve`. This is also why threading can give a speed-up here: scipy releases the GIL during the FFT computation, so the threads run the convolutions concurrently without paying any communication cost.
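As a quick illustration of the first source of overhead, here is a minimal sketch (not from the benchmark below; `square` is just a toy task) showing that a managed `Parallel` object pays the spawning cost only on its first call:

from time import time

from joblib import Parallel, delayed


def square(x):
    return x ** 2


with Parallel(n_jobs=2, backend='multiprocessing') as parallel:
    # First call: the worker processes are actually spawned here.
    t0 = time()
    parallel(delayed(square)(i) for i in range(4))
    print(f"cold call: {(time() - t0) / 1e-3:.1f}ms")

    # Later calls reuse the same pool, so the spawning overhead is gone.
    t0 = time()
    parallel(delayed(square)(i) for i in range(4))
    print(f"warm call: {(time() - t0) / 1e-3:.1f}ms")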
If your goal is to call this function many times, you should modify your benchmark to actually remove the cost of spawning the workers for the `Parallel` object, either by using a managed `Parallel` object or by relying on this feature of `backend='loky'`. You should also avoid the overhead due to the loading of the modules:
from joblib import Parallel, delayed
import numpy as np
from scipy.signal import fftconvolve
from time import time, sleep
def start_processes(im, filter, mode=None, delay=0):
    sleep(delay)
    return im if im is not None else 0


def time_parallel(name, parallel, image, filters, n_rep=50):
    print(80*"=" + "\n" + name + "\n" + 80*"=")

    # Time to start the pool of workers and initialize the processes.
    # With this first call, the processes/threads are actually started
    # and further calls will not incur this overhead anymore.
    t0 = time()
    np.sum(parallel(
        delayed(start_processes)(image, f, mode='valid') for f in filters)
    )
    print(f"Pool init overhead: {(time() - t0) / 1e-3:.3f}ms")

    # Time the overhead due to the loading of the scipy module.
    # With this call, the scipy.signal module is loaded in the child
    # processes. This import can take up to 200ms for a fresh interpreter.
    # This overhead is only present for the `loky` backend. For the
    # `multiprocessing` backend, as the processes are started with `fork`,
    # they already have a loaded scipy module. For the `threading` backend
    # and the iterative run, there is no need to re-import the module so this
    # overhead is non-existent.
    t0 = time()
    np.sum(parallel(
        delayed(fftconvolve)(image, f, mode='valid') for f in filters)
    )
    print(f"Library load overhead: {(time() - t0) / 1e-3:.3f}ms")

    # Average the runtime over multiple runs, once the external overheads
    # have been taken into account.
    times = []
    for _ in range(n_rep):
        t0 = time()
        np.sum(parallel(
            delayed(fftconvolve)(image, f, mode='valid') for f in filters
        ))
        times.append(time() - t0)
    print(f"Runtime without init overhead: {np.mean(times) / 1e-3:.3f}ms"
          f" (+-{np.std(times) / 1e-3:.3f}ms)\n")


# Setup the problem size
im_size = (512, 512)
filter_size = tuple(5 for s in im_size)
n_filters = 3
n_jobs = 3
n_rep = 50

# Generate random data
image = np.random.rand(*im_size)
filters = np.random.rand(n_filters, *filter_size)

# Time the `backend='multiprocessing'`
with Parallel(n_jobs=n_jobs, backend='multiprocessing') as parallel:
    time_parallel("Multiprocessing", parallel, image, filters, n_rep=n_rep)
sleep(.5)

# Time the `backend='threading'`
with Parallel(n_jobs=n_jobs, backend='threading') as parallel:
    time_parallel("Threading", parallel, image, filters, n_rep=n_rep)
sleep(.5)


# Time the `backend='loky'`.
# For this backend, there is no need to rely on a managed `Parallel` object
# as loky reuses the previously created pool by default. We will thus mimic
# the creation of a new `Parallel` object for each repetition.
def parallel_loky(it):
    Parallel(n_jobs=n_jobs)(it)


time_parallel("Loky", parallel_loky, image, filters, n_rep=n_rep)
sleep(.5)


# Time the iterative run.
# We rely on the SequentialBackend of joblib, which is used whenever
# `n_jobs=1`, to allow using the same function. This should not change the
# computation much.
def parallel_iterative(it):
    Parallel(n_jobs=1)(it)


time_parallel("Iterative", parallel_iterative, image, filters, n_rep=n_rep)
$ python main.py
================================================================================
Multiprocessing
================================================================================
Pool init overhead: 12.112ms
Library load overhead: 96.520ms
Runtime without init overhead: 77.548ms (+-16.119ms)
================================================================================
Threading
================================================================================
Pool init overhead: 11.887ms
Library load overhead: 76.858ms
Runtime without init overhead: 31.931ms (+-3.569ms)
================================================================================
Loky
================================================================================
Pool init overhead: 502.369ms
Library load overhead: 245.368ms
Runtime without init overhead: 44.808ms (+-4.074ms)
================================================================================
Iterative
================================================================================
Pool init overhead: 1.048ms
Library load overhead: 92.595ms
Runtime without init overhead: 47.749ms (+-4.081ms)
With this benchmark, you can see that the `loky` backend is actually the fastest to use once it has been started. But if you do not use it multiple times, its startup overhead is too large.
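In practice, this suggests the following pattern for the original use case: create the pool once and reuse it across calls. A sketch, reusing the names from the benchmark above (with the default `loky` backend, the executor is kept alive between calls):

# Pay the worker startup and scipy import cost once...
parallel = Parallel(n_jobs=n_jobs)  # the default backend is loky

# ...then amortize it over many filtering calls.
for _ in range(100):
    s = np.sum(parallel(
        delayed(fftconvolve)(image, f, mode='valid') for f in filters
    ))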