Why does np.einsum() run on the GPU if the arrays are defined on the GPU?



I recently noticed that, as long as the variables (arrays) are defined on the GPU, numpy functions run as fast as their cupy counterparts. I can't find a way to monitor whether a numpy function was actually executed on the GPU. If anyone has had a similar experience, please share your answer. Thank you.

I am using a Google Colab Pro GPU runtime.

Below are the two sets of code:

import time
import numpy as np
import cupy as cp

Set 1:

a = np.random.randn(500, 500, 500)
b = np.random.randn(500, 500, 500)
start_time = time.time()
for i in range(1):
    c = np.einsum('ijk,ikm->ijm', a, b)
end_time = time.time()
print('forward cpu time')
print(end_time - start_time)

Output:

forward cpu time
55.88586902618408

Set 2:

a = cp.random.randn(500, 500, 500)   # changed to cupy
b = cp.random.randn(500, 500, 500)   # changed to cupy
start_time = time.time()
for i in range(1):
    c = np.einsum('ijk,ikm->ijm', a, b)   # still numpy
end_time = time.time()
print('forward gpu time')
print(end_time - start_time)

Output:

forward gpu time
0.0009937286376953125
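
A quick way to check where the computation actually ran (a minimal sketch added for illustration, not part of the original timing code): the type of the result tells you which library produced it, and because CuPy launches kernels asynchronously, synchronizing the device before stopping the clock gives a more faithful timing than the launch-only number above.

import time
import numpy as np
import cupy as cp

a = cp.random.randn(500, 500, 500)
b = cp.random.randn(500, 500, 500)

c = np.einsum('ijk,ikm->ijm', a, b)
print(type(c))        # a cupy ndarray, so the work was done by CuPy
print(c.device)       # the GPU holding the result, e.g. <CUDA Device 0>

# CuPy kernels are launched asynchronously, so synchronize the device
# before reading the clock to time the computation, not just the launch:
start_time = time.time()
c = np.einsum('ijk,ikm->ijm', a, b)
cp.cuda.Device().synchronize()
print(time.time() - start_time)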

Let's take a look at numpy.einsum in einsumfunc.py:

@array_function_dispatch(_einsum_dispatcher, module='numpy')
def einsum(*operands, out=None, optimize=False, **kwargs):

Hmm, that decorator looks promising; it is literally describing a function that dispatches on the types of its arguments. Let's look at array_function_dispatch in overrides.py:

def array_function_dispatch(dispatcher, module=None, verify=True,
                            docs_from_dispatcher=False):
    """Decorator for adding dispatch with the __array_function__ protocol.

    See NEP-18 for example usage.
    ...

That brings us to the far more readable NEP-18: "We propose the __array_function__ protocol, to allow arguments of NumPy functions to define how that function operates on them..." So NumPy functions check their arguments for __array_function__ (and __array_ufunc__, NEP-13).
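
To see the protocol in action without CuPy, here is a minimal toy type (my own sketch, not library code) that registers an override for np.sum; because the argument exposes __array_function__, NumPy hands the call over instead of running its own implementation:

import numpy as np

HANDLED = {}                      # maps NumPy functions to our overrides

def implements(np_func):
    """Register an override for a NumPy function."""
    def decorator(impl):
        HANDLED[np_func] = impl
        return impl
    return decorator

class MyArray:
    def __init__(self, data):
        self.data = np.asarray(data)

    def __array_function__(self, func, types, args, kwargs):
        # NumPy calls this whenever a MyArray appears among the
        # arguments of a dispatched NumPy function.
        if func not in HANDLED:
            return NotImplemented
        return HANDLED[func](*args, **kwargs)

@implements(np.sum)
def my_sum(arr):
    print('dispatched to my_sum')
    return arr.data.sum()

print(np.sum(MyArray([1, 2, 3])))   # prints 'dispatched to my_sum', then 6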

CuPy defines cupy.ndarray.__array_function__ in core.pyx (a Cython file). It looks up the NumPy function's name in the correspondingly organized CuPy module and calls that function on the arguments. So when you call np.einsum(...), you end up in cp.einsum(...):

cdef class ndarray:
    ...
    def __array_function__(self, func, types, args, kwargs):
        try:
            module = functools.reduce(
                getattr, func.__module__.split('.')[1:], cupy)
            cupy_func = getattr(module, func.__name__)
        except AttributeError:
            return NotImplemented
        if cupy_func is func:
            # avoid NumPy func
            return NotImplemented
        for t in types:
            if t not in _HANDLED_TYPES:
                return NotImplemented
        return cupy_func(*args, **kwargs)
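
The functools.reduce(getattr, ...) line is just a dotted-path lookup: it drops the leading 'numpy' from func.__module__ and walks the remaining attributes starting from the cupy package. A small standalone reproduction of that expression (my own sketch, assuming cupy is installed):

import functools
import numpy as np
import cupy as cp

func = np.einsum                  # func.__module__ == 'numpy'
module = functools.reduce(getattr, func.__module__.split('.')[1:], cp)
print(getattr(module, func.__name__) is cp.einsum)   # True

# Nested namespaces are walked the same way, e.g. a function whose
# __module__ is 'numpy.linalg' is looked up on cupy.linalg.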
