Parallel recursive function in Python



How do I parallelise a recursive function in Python?

My function looks like this:

def f(x, depth):
    if x == 0:
        return ...
    else:
        return [x] + list(map(lambda x: f(x, depth - 1), list_of_values(x)))

def list_of_values(x):
    # Heavy compute, pure function
    ...

When I try to parallelise it with multiprocessing.Pool.map, Windows opens an unbounded number of processes and hangs.

What is a good (preferably simple) way to parallelise it (on a single multi-core machine)?

Here is the code that hangs:
from multiprocessing import Pool
pool = Pool(processes=4)

def f(x, depth):
    if x == 0:
        return ...
    else:
        return [x] + pool.map(lambda x: f(x, depth - 1), list_of_values(x))

def list_of_values(x):
    # Heavy compute, pure function
    ...
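
For reference: on Windows, multiprocessing uses the spawn start method, so every worker re-imports the module. Because the pool above is created at import time, each re-import creates another pool, which is a likely cause of the endless process creation. A minimal guarded sketch (placeholder bodies, names taken from the question) looks like this:

from multiprocessing import Pool

def list_of_values(x):
    # Heavy compute, pure function (placeholder body for this sketch)
    return list(range(x))

if __name__ == '__main__':              # guard: spawn re-imports this module in every worker,
    with Pool(processes=4) as pool:     # so the pool must only be created in the parent
        print(pool.map(list_of_values, [1, 2, 3]))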

OK, sorry for the problems with this.

I'll answer a slightly different question, where f() returns the sum of the values in the list. That is because it's not clear to me from your example what the return type of f() would be, and using integers makes the code simple to understand.

This is tricky because there are two different things happening at the same time:

  1. the calculation of the expensive function in the pool
  2. the recursive expansion of f()

I am very careful to only use the pool to calculate the expensive function. That way we don't get an "explosion" of processes. But because this is asynchronous, we need to postpone a lot of work to the callback that the worker calls once the expensive function is done.

More than that, we need to use a countdown latch so that we know when all the separate sub-calls to f() are complete.

There may be a simpler way (I am pretty sure there is, but I need to do other things), but perhaps this gives you an idea of what is possible:

from multiprocessing import Pool, Value, RawArray, RLock
from time import sleep
class Latch:
    '''A countdown latch that lets us wait for a job of "n" parts'''
    def __init__(self, n):
        self.__counter = Value('i', n)
        self.__lock = RLock()
    def decrement(self):
        with self.__lock:
            self.__counter.value -= 1
            print('dec', self.read())
        return self.read() == 0
    def read(self):
        with self.__lock:
            return self.__counter.value
    def join(self):
        while self.read():
            sleep(1)

def list_of_values(x):
    '''An expensive function'''
    print(x, ': thinking...')
    sleep(1)
    print(x, ': thought')
    return list(range(x))

pool = Pool()

def async_f(x, on_complete=None):
    '''Return the sum of the values in the expensive list'''
    if x == 0:
        on_complete(0) # no list, return 0
    else:
        n = x # need to know size of result beforehand
        latch = Latch(n) # wait for n entries to be calculated
        result = RawArray('i', n+1) # where we will assemble the map
        def delayed_map(values):
            '''This is the callback for the pool async process - it runs
               in a separate thread within this process once the
               expensive list has been calculated and orchestrates the
               mapping of f over the result.'''
            result[0] = x # first value in list is x
            for (i, v) in enumerate(values):
                def callback(fx, i=i):
                    '''This is the callback passed to f() and is called when
                       the function completes.  If it is the last of all the
                       calls in the map then it calls on_complete() (ie another
                       instance of this function) for the calling f().'''
                    result[i+1] = fx
                    if latch.decrement(): # have completed list
                        # at this point result contains [x]+map(f, ...)
                        on_complete(sum(result)) # so return sum
                async_f(v, callback)
        # Ask worker to generate list then call delayed_map
        pool.apply_async(list_of_values, [x], callback=delayed_map)

def run():
    '''Tie into the same mechanism as above, for the final value.'''
    result = Value('i')
    latch = Latch(1)
    def final_callback(value):
        result.value = value
        latch.decrement()
    async_f(6, final_callback)
    latch.join() # wait for everything to complete
    return result.value

print(run())

PS: I am using Python 3.2, and the ugliness above is because we are delaying the computation of the final results (getting back up the tree) until later. It's possible something like generators or futures could simplify things.
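
For example, here is a rough sketch of that idea with concurrent.futures (an assumption about how it could look, not a rewrite of the code above): the process pool is still reserved for the expensive list_of_values() calls, while cheap threads drive the recursive fan-out, keeping the sum-returning toy version.

from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
from time import sleep

def list_of_values(x):
    '''The expensive, pure function (same toy version as above).'''
    sleep(1)
    return list(range(x))

def f(x, pool):
    '''Sum-returning variant of f(): only list_of_values() runs in the process
       pool; the recursion fans out over cheap threads in the parent process.'''
    if x == 0:
        return 0
    values = pool.submit(list_of_values, x).result()   # heavy part in a worker process
    with ThreadPoolExecutor(max_workers=len(values) or 1) as threads:
        return x + sum(threads.map(lambda v: f(v, pool), values))

if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=4) as pool:
        print(f(6, pool))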

Also, I suspect you need a cache to avoid needlessly recalculating the expensive function when it is called with the same argument as before.
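
A minimal way to add such a cache, assuming the arguments to list_of_values() are hashable, is functools.lru_cache. Note that with a process pool each worker process keeps its own cache, so this only helps for repeated calls inside the same process:

from functools import lru_cache
from time import sleep

@lru_cache(maxsize=None)
def list_of_values(x):
    '''Heavy, pure function; repeat calls with the same x are served from the cache.'''
    sleep(1)                  # stand-in for the expensive computation
    return tuple(range(x))    # immutable result, so cached values cannot be mutated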

See also yaniv's answer, which seems to be an alternative way to reverse the order of evaluation by making the depth explicit.

After thinking about this, I found a simple, incomplete, but good-enough answer:

# A partially parallel solution: just do the first level of recursion in parallel.
# That might be enough work to fill all cores.
import multiprocessing

def f_helper(data):
    return f(x=data['x'], depth=data['depth'], recursion_depth=data['recursion_depth'])

def f(x, depth, recursion_depth):
    if depth == 0:
        return ...
    else:
        args = [{'x': _x, 'depth': depth - 1, 'recursion_depth': recursion_depth + 1}
                for _x in list_of_values(x)]
        if recursion_depth == 0:
            pool = multiprocessing.Pool(processes=4)
            result = [x] + pool.map(f_helper, args)
            pool.close()
        else:
            result = [x] + list(map(f_helper, args))
        return result

def list_of_values(x):
    # Heavy compute, pure function
    ...

I initially store the main process id and pass it to the sub-programs.

When I need to start a multiprocessing job, I check the number of children of the main process. If it is less than or equal to half of my CPU count, I run the job in parallel. If it is greater than half of my CPU count, I run it serially. This avoids bottlenecks and uses the CPU cores effectively. You can tune the number of cores to your case; for example, you could set it to the exact number of CPU cores, but you should not exceed it.

import multiprocessing
import psutil

def subProgramWrapper(func, args):
    func(*args)

# main_process_id, subProgram and input_params come from the surrounding program;
# MyPool is a custom Pool class (not shown in this answer; see the sketch below)
parent = psutil.Process(main_process_id)
children = parent.children(recursive=True)
num_cores = int(multiprocessing.cpu_count() / 2)   # use at most half the cores; tune as needed

if num_cores >= len(children):
    # parallel run
    pool = MyPool(num_cores)
    results = pool.starmap(subProgram, input_params)
    pool.close()
    pool.join()
else:
    # serial run
    for input_param in input_params:
        subProgramWrapper(subProgram, input_param)
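
MyPool is not defined in that answer. Presumably it is a pool whose workers are non-daemonic, because ordinary Pool workers are daemon processes and are not allowed to create children of their own. A commonly used sketch of such a class (an assumption about what the author meant, not their actual code) is:

import multiprocessing
import multiprocessing.pool

class NoDaemonProcess(multiprocessing.Process):
    '''A Process that always reports daemon=False, so it may spawn children.'''
    @property
    def daemon(self):
        return False

    @daemon.setter
    def daemon(self, value):
        pass                 # ignore the pool's attempt to mark workers as daemons

class NoDaemonContext(type(multiprocessing.get_context())):
    Process = NoDaemonProcess

class MyPool(multiprocessing.pool.Pool):
    '''A Pool whose worker processes are allowed to start their own pools.'''
    def __init__(self, *args, **kwargs):
        kwargs['context'] = NoDaemonContext()
        super().__init__(*args, **kwargs)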
