How to access the GPU (via PyOpenCL) using a Python multiprocessing queue



I have some code that takes a long time to run, so I've been looking into Python's multiprocessing library to speed it up. My code also has a few steps that use the GPU via PyOpenCL. The problem is that if I set multiple processes running at the same time, they all end up trying to use the GPU simultaneously, which usually causes one or more of them to throw an exception and quit.

To work around this, I stagger the start of each process so that they are less likely to collide with one another:

process_list = []
num_procs = 4
# Break the data into chunks so each process gets its own chunk of the data
data_chunks = chunks(data, num_procs)
for chunk in data_chunks:
    if len(chunk) == 0:
        continue
    # Instantiate the process
    p = multiprocessing.Process(target=test, args=(arg1, arg2))
    # Keep the process in a list so that it remains accessible
    process_list.append(p)
# Start the processes, staggered by 5 seconds
j = 1
for process in process_list:
    print('\nStarting process %i' % j)
    process.start()
    time.sleep(5)
    j += 1
for process in process_list:
    process.join()

I also wrapped the function that calls the GPU in a try/except retry loop, so that if two processes try to access it at the same time, the one that fails to get access waits a few seconds and then tries again:

wait = 2
n = 0
while True:
    try:
        gpu_out = GPU_Obj.GPU_fn(params)
    except Exception:
        time.sleep(wait)
        print('\nWaiting for GPU memory...')
        n += 1
        if n == 5:
            raise Exception('Tried and failed %i times to allocate memory for opencl kernel.' % n)
        continue
    break

This workaround is very clunky, and although it works most of the time, processes still occasionally throw exceptions. I feel there should be a more efficient/elegant solution using multiprocessing.Queue or something similar, but I'm not sure how to integrate that with PyOpenCL for GPU access.
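One queue-based pattern that matches the idea above (a sketch, not from the original post): dedicate a single process to the GPU, and have it serve requests pulled from a task queue. Here `gpu_server`'s doubling of its input is a stand-in for the real `GPU_Obj.GPU_fn(params)` call, and the function names are hypothetical:

```python
import multiprocessing

def gpu_server(task_q, result_q):
    # The only process that touches the GPU: it pulls (index, params)
    # pairs from a queue, runs the (stand-in) GPU function, and pushes
    # the results back on a second queue.
    while True:
        item = task_q.get()
        if item is None:  # sentinel: shut down
            break
        idx, params = item
        result_q.put((idx, params * 2))  # stand-in for GPU_Obj.GPU_fn(params)

def run_queue_demo(jobs):
    task_q = multiprocessing.Queue()
    result_q = multiprocessing.Queue()
    server = multiprocessing.Process(target=gpu_server, args=(task_q, result_q))
    server.start()
    # Submit every job, then the shutdown sentinel.
    for i, params in enumerate(jobs):
        task_q.put((i, params))
    task_q.put(None)
    # Collect exactly one result per job, then restore submission order.
    results = [result_q.get() for _ in jobs]
    server.join()
    return [val for _, val in sorted(results)]

if __name__ == "__main__":
    print(run_queue_demo([1, 2, 3]))  # → [2, 4, 6]
```

Because only the server process ever calls the GPU function, no two processes can contend for it, so the retry loop becomes unnecessary.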

It sounds like you could use a multiprocessing.Lock to synchronize access to the GPU:

data_chunks = chunks(data, num_procs)
lock = multiprocessing.Lock()
for chunk in data_chunks:
    if len(chunk) == 0:
        continue
    # Instantiate the process, passing the lock along
    p = multiprocessing.Process(target=test, args=(arg1, arg2, lock))
    ...

Then, inside test, where you access the GPU:

with lock:  # Only one process will be allowed in this block at a time.
    gpu_out = GPU_Obj.GPU_fn(params)
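Put together, a minimal self-contained sketch of this Lock approach might look like the following, where `fake_gpu_fn` stands in for the real `GPU_Obj.GPU_fn` and the chunking is inlined for the example:

```python
import multiprocessing
import time

def fake_gpu_fn(x):
    # Stand-in for GPU_Obj.GPU_fn: pretend the GPU is busy briefly.
    time.sleep(0.01)
    return x * x

def worker(chunk, lock, results):
    for x in chunk:
        with lock:  # only one process touches the "GPU" at a time
            results.append(fake_gpu_fn(x))

def run_demo(data, num_procs=4):
    # Split data into roughly equal chunks, one per process.
    chunk_size = (len(data) + num_procs - 1) // num_procs
    data_chunks = [data[i:i + chunk_size]
                   for i in range(0, len(data), chunk_size)]
    lock = multiprocessing.Lock()
    manager = multiprocessing.Manager()
    results = manager.list()  # shared list the workers can append to
    procs = [multiprocessing.Process(target=worker, args=(chunk, lock, results))
             for chunk in data_chunks if len(chunk) > 0]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return sorted(results)

if __name__ == "__main__":
    print(run_demo(list(range(8))))  # squares of 0..7, sorted
```

Note that the lock only serializes the GPU call itself; the rest of each worker's chunk processing still runs in parallel.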

Edit:

To do this with a Pool, you could do it like this:

# At global scope
lock = None

def init(_lock):
    global lock
    lock = _lock

data_chunks = chunks(data, num_procs)
lock = multiprocessing.Lock()
# Create the pool once; each worker inherits the lock via the initializer,
# since an ordinary multiprocessing.Lock can't be passed as a Pool argument
pool = multiprocessing.Pool(initializer=init, initargs=(lock,))
for chunk in data_chunks:
    if len(chunk) == 0:
        continue
    # apply blocks until the call finishes; use apply_async to run them concurrently
    pool.apply(test, args=(arg1, arg2))
...

Or:

data_chunks = chunks(data, num_procs)
m = multiprocessing.Manager()
lock = m.Lock()
# Create the pool once, outside the loop
pool = multiprocessing.Pool()
for chunk in data_chunks:
    if len(chunk) == 0:
        continue
    # A Manager lock is picklable, so it can be passed as an ordinary argument
    pool.apply(test, args=(arg1, arg2, lock))
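A runnable sketch of this second variant, with a hypothetical `slow_square` standing in for the GPU-calling `test` function and `apply_async` used so the calls actually overlap:

```python
import multiprocessing

def slow_square(x, lock):
    # A Manager lock is picklable, so it arrives here as a normal argument.
    with lock:  # serialize the critical section across pool workers
        return x * x

def run_pool_demo(data):
    m = multiprocessing.Manager()
    lock = m.Lock()
    with multiprocessing.Pool() as pool:
        # Submit all calls first, then gather results in submission order.
        async_results = [pool.apply_async(slow_square, args=(x, lock))
                         for x in data]
        return [r.get() for r in async_results]

if __name__ == "__main__":
    print(run_pool_demo([1, 2, 3, 4]))  # → [1, 4, 9, 16]
```

The Manager route trades a little speed (every lock operation goes through the manager process) for not needing a pool initializer.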
