I have a function var and I would like to know the best way to run the for loop inside it (over multiple coordinates: xs and ys) as fast as possible, using multiprocessing/parallel processing to exploit all the processors, cores and RAM that the system has. Is it possible to use the Dask module for this?
The pysheds documentation can be found here.
import numpy as np
from pysheds.grid import Grid
xs = 82.1206, 72.4542, 65.0431, 83.8056, 35.6744
ys = 25.2111, 17.9458, 13.8844, 10.0833, 24.8306
for (x, y) in zip(xs, ys):
    grid = Grid.from_raster('E:/data.tif', data_name='map')
    grid.catchment(data='map', x=x, y=y, out_name='catch', recursionlimit=1500, xytype='label')
    ....
    ....
results
I tried to use dask and have given a reproducible code below. You can add the main processing part of pysheds, or any other function, to it in order to iterate over the parameters faster in parallel.
The documentation of the dask module can be found here.
import dask
from dask import delayed, compute
from dask.distributed import Client, progress
from pysheds.grid import Grid
client = Client(threads_per_worker=2, n_workers=2)  # choose the number of workers and threads per worker for your task here
xs = 82.1206, 72.4542, 65.0431, 83.8056, 35.6744
ys = 25.2111, 17.9458, 13.8844, 10.0833, 24.8306
# First, define the function that contains the loop over the parameters.
def var(x, y):
    grid = Grid.from_raster('data.tif', data_name='map')
    grid.catchment(data='map', x=x, y=y, out_name='catch', recursionlimit=1500, xytype='label')
    ...
    ...
    return result
# Now call the function the 'dask' way.
lazy_results = []
for (x, y) in zip(xs, ys):
    lazy_result = dask.delayed(var)(x, y)
    lazy_results.append(lazy_result)
# Final command to execute var(x, y) for all pairs and get the results.
dask.compute(*lazy_results)
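If you also want to keep the computed values and release the workers afterwards, you can capture the return value of dask.compute and close the client (a small usage note, assuming the client created above is still in scope):

results = dask.compute(*lazy_results)  # tuple with one entry per (x, y) pair
print(results)
client.close()  # release the local workers when done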
You did not post a link to your image1.tif file, so the sample code below uses pysheds/data/dem.tif from https://github.com/mdbartos/pysheds.
The basic idea is to split the input parameters (xs and ys) into subsets and then give each CPU a different subset to work on. main() computes the solution twice, once sequentially and once in parallel, and then compares the two results. There is some inefficiency in the parallel solution, since the image file is read by every CPU, so there is room for improvement (i.e., read the image file outside of the parallel section and then feed the resulting grid object to each instance; a sketch of that refinement is given after the code).
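To make the splitting concrete: each CPU gets every n_cpu-th element of the inputs via extended slicing. A quick standalone illustration, using the same xs values as in the code below:

xs = 10, 20, 30, 40, 50, 60, 70, 80, 90, 100
n_cpu = 3
for cpu in range(n_cpu):
    print(cpu, xs[cpu::n_cpu])
# 0 (10, 40, 70, 100)
# 1 (20, 50, 80)
# 2 (30, 60, 90)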
import numpy as np
from pysheds.grid import Grid
from dask.distributed import Client
from dask import delayed, compute
xs = 10, 20, 30, 40, 50, 60, 70, 80, 90, 100
ys = 25, 35, 45, 55, 65, 75, 85, 95, 105, 115, 125
def var(image_file, x_in, y_in):
    grid = Grid.from_raster(image_file, data_name='map')
    variable_avg = []
    for (x, y) in zip(x_in, y_in):
        grid.catchment(data='map', x=x, y=y, out_name='catch')
        variable = grid.view('catch', nodata=np.nan)
        variable_avg.append(np.array(variable).mean())
    return variable_avg
def var_parallel(n_cpu, image_file, x_in, y_in):
    tasks = []
    for cpu in range(n_cpu):
        x_sub = x_in[cpu::n_cpu]  # e.g., cpu = 0: x_sub = (10, 40, 70, 100)
        y_sub = y_in[cpu::n_cpu]
        tasks.append(delayed(var)(image_file, x_sub, y_sub))
    ans = compute(tasks)
    # reassemble the solution in the right order
    par_avg = [None] * len(x_in)
    for cpu in range(n_cpu):
        par_avg[cpu::n_cpu] = ans[0][cpu]
    print('AVG (parallel)  =', par_avg)
    return par_avg
def main():
    image_file = 'pysheds/data/dem.tif'
    # sequential solution:
    seq_avg = var(image_file, xs, ys)
    print('AVG (sequential)=', seq_avg)
    # parallel solution:
    n_cpu = 3
    dask_client = Client(n_workers=n_cpu)
    par_avg = var_parallel(n_cpu, image_file, xs, ys)
    dask_client.shutdown()
    print('max error=',
          max([abs(seq_avg[i] - par_avg[i]) for i in range(len(seq_avg))]))
if __name__ == '__main__': main()
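As noted above, the parallel version re-reads the raster on every CPU. One possible refinement is to read the raster once and ship the resulting grid object to the workers. The sketch below assumes that the pysheds Grid object can be pickled by dask (if it cannot, keep re-reading the file per worker as above); var_from_grid and var_parallel_shared are hypothetical names, not part of pysheds or dask.

import numpy as np
from pysheds.grid import Grid
from dask import delayed, compute
from dask.distributed import Client

xs = 10, 20, 30, 40, 50, 60, 70, 80, 90, 100
ys = 25, 35, 45, 55, 65, 75, 85, 95, 105, 115, 125

def var_from_grid(grid, x_in, y_in):
    # Same per-point work as var(), but on a grid that is already loaded.
    # Each dask task gets its own deserialized copy of the grid, so the
    # in-place modification done by catchment() does not leak between tasks.
    variable_avg = []
    for (x, y) in zip(x_in, y_in):
        grid.catchment(data='map', x=x, y=y, out_name='catch')
        variable = grid.view('catch', nodata=np.nan)
        variable_avg.append(np.array(variable).mean())
    return variable_avg

def var_parallel_shared(n_cpu, grid, x_in, y_in):
    tasks = [delayed(var_from_grid)(grid, x_in[cpu::n_cpu], y_in[cpu::n_cpu])
             for cpu in range(n_cpu)]
    ans = compute(tasks)
    par_avg = [None] * len(x_in)
    for cpu in range(n_cpu):
        par_avg[cpu::n_cpu] = ans[0][cpu]
    return par_avg

if __name__ == '__main__':
    client = Client(n_workers=3)
    grid = Grid.from_raster('pysheds/data/dem.tif', data_name='map')  # read once
    print(var_parallel_shared(3, grid, xs, ys))
    client.shutdown()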