I'm reading in hundreds of HDF files and processing the data of each HDF separately. However, this takes a lot of time, since it works on one HDF file at a time. I just stumbled upon http://docs.python.org/library/multiprocessing.html and am now wondering how I can speed things up using multiprocessing.

So far I came up with this:
```python
import numpy as np
from multiprocessing import Pool

def myhdf(date):
    ii = dates.index(date)
    year = date[0:4]
    month = date[4:6]
    day = date[6:8]
    rootdir = 'data/mydata/'
    filename = 'no2track'+year+month+day
    records = read_my_hdf(rootdir,filename)
    if records.size:
        results[ii] = np.mean(records)

dates = ['20080105','20080106','20080107','20080108','20080109']
results = np.zeros(len(dates))

pool = Pool(len(dates))
pool.map(myhdf,dates)
```
This obviously isn't correct, though. Can you follow my chain of thought? What do I need to change?

Try joblib, a friendlier wrapper around multiprocessing:
```python
from joblib import Parallel, delayed

def myhdf(date):
    # do work
    return np.mean(records)

results = Parallel(n_jobs=-1)(delayed(myhdf)(d) for d in dates)
```
The Pool class's map function works like the standard Python library's map function: you are guaranteed to get your results back in the same order you put them in. Knowing that, the only other trick is that you need to return results in a consistent way, then filter them.
```python
import numpy as np
from multiprocessing import Pool

def myhdf(date):
    year = date[0:4]
    month = date[4:6]
    day = date[6:8]
    rootdir = 'data/mydata/'
    filename = 'no2track'+year+month+day
    records = read_my_hdf(rootdir,filename)
    if records.size:
        return np.mean(records)

dates = ['20080105','20080106','20080107','20080108','20080109']

pool = Pool(len(dates))
results = pool.map(myhdf,dates)
# test against None explicitly, so a legitimate mean of 0.0 isn't dropped
results = [result for result in results if result is not None]
results = np.array(results)
```
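The key difference from your original attempt is that each result is returned from the worker instead of being written into a shared `results` array. A minimal sketch (a hypothetical toy example, not your HDF code) of why the write-to-a-global approach fails: each Pool worker runs in a separate process with its own copy of module state, so writes in a child never reach the parent.

```python
import numpy as np
from multiprocessing import Pool

results = np.zeros(3)

def worker(i):
    # this modifies the child process's copy of `results` only;
    # the parent's array is untouched
    results[i] = i + 1

if __name__ == "__main__":
    with Pool(3) as pool:
        pool.map(worker, range(3))
    print(results)  # still [0. 0. 0.] in the parent
```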
If you want results as soon as each one is ready rather than in input order, you can use imap_unordered.