I have to create and fill a huge (96 GB, 72000 rows × 72000 columns) array, where each float comes from a mathematical formula. The array will be computed over afterwards.
import itertools, operator, time, copy, os, sys
import numpy
from multiprocessing import Pool

def f2(x):  # more complex mathematical formulas that change according to values in *i* and *x*
    temp = []
    for i in combine:
        temp.append(0.2 * x[1] * i[1] / 64.23)
    return temp

def combinations_with_replacement_counts(n, r):  # provide all combinations of r balls in n boxes
    size = n + r - 1
    for indices in itertools.combinations(range(size), n - 1):
        starts = [0] + [index + 1 for index in indices]
        stops = indices + (size,)
        yield tuple(map(operator.sub, stops, starts))

combine = list(combinations_with_replacement_counts(3, 60))  # 60 used here, but 350 is needed instead
print(len(combine))

if __name__ == '__main__':
    t1 = time.time()
    pool = Pool()  # start worker processes
    results = [pool.apply_async(f2, (x,)) for x in combine]
    roots = [r.get() for r in results]
    print(roots[0:3])
    pool.close()
    pool.join()
    print(time.time() - t1)
- What is the fastest way to create and fill such a huge numpy array? Fill a list, then aggregate it and convert it into a numpy array (see the sketch after this list)?
- Can we parallelize the computation, knowing that the cells/columns/rows of the 2D array are independent of each other, to speed up the filling of the array? Any clues/leads for optimizing such a computation with multiprocessing?
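For the first question, here is a minimal sketch contrasting the two filling strategies; the shape and the formula are illustrative stand-ins for the real ones:

import numpy as np

n = 1000  # illustrative; the question needs 72000

# 1) build nested Python lists, then convert: every value first exists
#    as a boxed Python float before the final copy into the array
rows = [[0.2 * x * i / 64.23 for i in range(n)] for x in range(n)]
a = np.array(rows)

# 2) preallocate with np.empty and fill row by row in place: no
#    intermediate Python lists, and each row is computed vectorized
b = np.empty((n, n))
for x in range(n):
    b[x, :] = 0.2 * x * np.arange(n) / 64.23

The preallocating variant avoids holding the data twice (once as Python objects, once as the array), which matters when the target is tens of gigabytes.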
I know that you can create shared numpy arrays that can be changed from different processes (assuming the changed areas don't overlap). Here is a sketch of the code you can use to do that (I saw the original idea somewhere on stackoverflow; edit: here it is https://stackoverflow.com/a/5550156/1269140)
import multiprocessing as mp
import numpy as np
import ctypes

def shared_zeros(n1, n2):
    # create a 2D numpy array backed by shared memory, which can then
    # be changed from different processes
    shared_array_base = mp.Array(ctypes.c_double, n1 * n2)
    shared_array = np.ctypeslib.as_array(shared_array_base.get_obj())
    shared_array = shared_array.reshape(n1, n2)
    return shared_array

class singleton:
    arr = None

def dosomething(i):
    # do something with singleton.arr: each worker writes
    # a non-overlapping row
    singleton.arr[i, :] = i
    return i

def main():
    singleton.arr = shared_zeros(1000, 1000)
    pool = mp.Pool(16)
    pool.map(dosomething, range(1000))

if __name__ == '__main__':
    main()
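Note that this sketch relies on the worker processes inheriting singleton.arr when the pool forks, which is the default start method on Linux. With the spawn start method (Windows, and macOS since Python 3.8) the workers would not see the assignment made in main(), so the shared array would have to be handed to them explicitly, e.g. through the initializer/initargs arguments of mp.Pool.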
You can create an empty numpy.memmap array with the desired shape and then fill in its values using a multiprocessing.Pool. Doing it correctly also keeps the memory footprint of each process in the pool relatively small.
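A minimal sketch of that approach; the file name and the row-wise chunking are illustrative choices, and the shape is kept small here (the question would need (72000, 72000)):

import numpy as np
from multiprocessing import Pool

FILENAME = 'huge_array.dat'  # illustrative name
N = 1000  # small for the sketch; the question needs 72000

def fill_rows(bounds):
    start, stop = bounds
    # each worker re-opens the same file in r+ mode and writes only
    # its own, non-overlapping block of rows
    arr = np.memmap(FILENAME, dtype=np.float64, mode='r+', shape=(N, N))
    for i in range(start, stop):
        arr[i, :] = 0.2 * i * np.arange(N) / 64.23  # placeholder formula
    arr.flush()

if __name__ == '__main__':
    # create the file once with the desired shape
    np.memmap(FILENAME, dtype=np.float64, mode='w+', shape=(N, N)).flush()
    chunks = [(i, min(i + 100, N)) for i in range(0, N, 100)]
    with Pool() as pool:
        pool.map(fill_rows, chunks)

Each worker maps the file independently, so only the pages it actually touches need to be resident in memory, which is what keeps the per-process footprint small.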