Numba CUDA slower than parallel CPU, even for huge matrices



There are only a few examples online of using Numba with CUDA, and I found all of them to be slower than the parallel CPU approach. Vectorize with the CUDA target and stencils were even worse, so I tried to create a custom kernel. The one blog post you find everywhere is https://gist.github.com/mrocklin/9272bf84a8faffdbbe2cd44b4bc4ce3c. The example is a simple blur filter:

import math
import numpy as np
import time
from numba import njit, prange, cuda
import timeit
import numba.cuda

@numba.cuda.jit
def smooth_gpu(x, out):
    i, j = cuda.grid(2)
    n, m = x.shape
    if 1 <= i < n - 1 and 1 <= j < m - 1:
        out[i, j] = (x[i - 1, j - 1] + x[i - 1, j] + x[i - 1, j + 1] +
                     x[i    , j - 1] + x[i    , j] + x[i    , j + 1] +
                     x[i + 1, j - 1] + x[i + 1, j] + x[i + 1, j + 1]) / 9

x_gpu = np.ones((10000, 10000), dtype='float32')
out_gpu = np.zeros((10000, 10000), dtype='float32')

threadsperblock = (16, 16)
blockspergrid_x = math.ceil(x_gpu.shape[0] / threadsperblock[0])
blockspergrid_y = math.ceil(x_gpu.shape[1] / threadsperblock[1])
blockspergrid = (blockspergrid_x, blockspergrid_y)

# run on gpu
smooth_gpu[blockspergrid, threadsperblock](x_gpu, out_gpu)  # compile before measuring time

start_time = time.time()
smooth_gpu[blockspergrid, threadsperblock](x_gpu, out_gpu)
print("GPU Time: {0:1.6f}s ".format(time.time() - start_time))

The CPU version:

x_cpu = np.ones((10000, 10000), dtype='float32')
out_cpu = np.zeros((10000, 10000), dtype='float32')

@njit(nopython=True, parallel=True)
def smooth_cpu(x, out_cpu):
    for i in prange(1, np.shape(x)[0] - 1):
        for j in range(1, np.shape(x)[1] - 1):
            out_cpu[i, j] = (x[i - 1, j - 1] + x[i - 1, j] + x[i - 1, j + 1] +
                             x[i    , j - 1] + x[i    , j] + x[i    , j + 1] +
                             x[i + 1, j - 1] + x[i + 1, j] + x[i + 1, j + 1]) / 9

# run on cpu
smooth_cpu(x_cpu, out_cpu)  # compile before measuring time

start_time = time.time()
smooth_cpu(x_cpu, out_cpu)
print("CPU Time: {0:1.6f}s ".format(time.time() - start_time))

The GPU version takes about 500 ms, while the CPU version takes 50 ms. What is going on?

I'll point out two things:

  1. You are including in your timing of the GPU version the time needed to transfer the input array from host to device, and the result from device back to host. If that is the intent of your comparison, so be it; the conclusion is simply that the GPU is not well suited to this task (in an interesting way).

  2. The GPU code, while producing a correct result, is not organized for good performance. The problem is here:

    i, j = cuda.grid(2)
    

    coupled with the order in which those indices are used to access the data:

    out[i, j] = (x[i - 1, j - 1] ...
    

    This results in inefficient access patterns on the GPU. We can fix it by reversing either one of the two orderings described above.

Here is the code adjusted slightly with both of the above issues in mind:

$ cat t29a.py
import numpy as np
import time
from numba import njit, prange,cuda
import timeit
import numba.cuda

x_cpu = np.ones((10000, 10000), dtype='float32')
out_cpu = np.zeros((10000, 10000), dtype='float32')

@njit(parallel=True)
def smooth_cpu(x, out_cpu):
    for i in prange(1, x.shape[0] - 1):
        for j in range(1, x.shape[1] - 1):
            out_cpu[i, j] = (x[i - 1, j - 1] + x[i - 1, j] + x[i - 1, j + 1] +
                             x[i    , j - 1] + x[i    , j] + x[i    , j + 1] +
                             x[i + 1, j - 1] + x[i + 1, j] + x[i + 1, j + 1]) / 9

# run on cpu
smooth_cpu(x_cpu, out_cpu)  # compile before measuring time

start_time = time.time()
smooth_cpu(x_cpu, out_cpu)
print("CPU Time: {0:1.6f}s ".format(time.time() - start_time))
$ python t29a.py
CPU Time: 0.161944s
$ cat t29.py
import numpy as np
import time
from numba import njit, prange,cuda
import timeit
import numba.cuda
import math
@numba.cuda.jit
def smooth_gpu(x, out):
    j, i = cuda.grid(2)
    m, n = x.shape
    if 1 <= i < n - 1 and 1 <= j < m - 1:
        out[i, j] = (x[i - 1, j - 1] + x[i - 1, j] + x[i - 1, j + 1] +
                     x[i    , j - 1] + x[i    , j] + x[i    , j + 1] +
                     x[i + 1, j - 1] + x[i + 1, j] + x[i + 1, j + 1]) / 9

x = np.ones((10000, 10000), dtype='float32')
out = np.zeros((10000, 10000), dtype='float32')
x_gpu = cuda.to_device(x)
out_gpu = cuda.device_array_like(out)

threadsperblock = (16, 16)
blockspergrid_x = math.ceil(x_gpu.shape[0] / threadsperblock[0])
blockspergrid_y = math.ceil(x_gpu.shape[1] / threadsperblock[1])
blockspergrid = (blockspergrid_x, blockspergrid_y)

# run on gpu
smooth_gpu[blockspergrid, threadsperblock](x_gpu, out_gpu)  # compile before measuring time
cuda.synchronize()

start_time = time.time()
smooth_gpu[blockspergrid, threadsperblock](x_gpu, out_gpu)
cuda.synchronize()
print("GPU Time: {0:1.6f}s ".format(time.time() - start_time))
$ python t29.py
GPU Time: 0.021776s
$

So we can see that if we adjust for both issues, the GPU (a GTX 960 in my case) is about 8x faster than the CPU. Such measurements depend to some degree on the CPU and GPU used for the comparison — you shouldn't assume my numbers are comparable to yours; you're better off running these modified codes yourself. However, the data transfer time certainly exceeds the GPU computation time by a wide margin, and in my case it also exceeds the CPU computation time. That means (at least in my case, on a system that is not particularly fast in any respect) that even if we reduced the GPU computation time to zero, the cost of transferring the data would still exceed the CPU computation time.
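A rough back-of-the-envelope estimate (mine, not part of the answer above) shows why the transfers dominate; the ~12 GB/s figure below is an assumed effective PCIe 3.0 x16 bandwidth, and real systems — especially with non-pinned host memory — can be considerably slower:

# Illustrative estimate of the host<->device transfer cost for this problem.
# Assumes ~12 GB/s effective PCIe bandwidth; actual throughput depends on the
# system and on whether the host buffers are pinned (page-locked).
array_bytes = 10000 * 10000 * 4        # one float32 array is ~400 MB
bytes_moved = 2 * array_bytes          # input to device + result back to host
bandwidth   = 12e9                     # assumed bytes/s
print(f"~{bytes_moved / bandwidth * 1e3:.0f} ms spent just moving data")  # roughly 67 ms

Even under that optimistic assumption, the transfer time alone is several times the ~22 ms GPU kernel time measured above.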

So when you're faced with a case like this, it simply isn't possible to win. The only advice to give at that point is "don't do that", i.e. find a more interesting, more complex problem for the GPU to solve. If we make the problem computationally very simple, like this one or like vector addition, and that is the only thing we want to do on the GPU, it is almost never an interesting comparison against doing it on the CPU. Hopefully you can also see that making the matrices bigger doesn't help much here, because it drives up the data transfer time/cost as well.

If we exclude the data transfer cost (and don't make performance-crippling mistakes in the GPU code), then according to my testing the GPU is faster than the CPU. If we include the data transfer cost, then for this very simple problem there is likely no way the GPU can be faster than the CPU (even if the GPU computation time were reduced to zero).

No doubt more could be done to improve the GPU case slightly (e.g. changing the block shape, using shared memory, etc.), but I personally don't wish to spend time polishing something uninteresting.
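For readers curious what the shared-memory route might look like, here is a rough, unbenchmarked sketch (mine, not part of the answer above): each block stages a (TPB+2) x (TPB+2) input tile — its output tile plus a one-pixel halo — into on-chip shared memory, so the nine reads per output point come from shared memory rather than global memory. Whether this actually beats the simple kernel for a 3x3 stencil would have to be measured.

from numba import cuda, float32

TPB = 16                 # output tile edge per block
HALO_TILE = TPB + 2      # input tile edge, including a 1-pixel halo on each side

@cuda.jit
def smooth_gpu_shared(x, out):
    # Shared tile: the block's output region plus the halo.
    tile = cuda.shared.array((HALO_TILE, HALO_TILE), dtype=float32)
    tx = cuda.threadIdx.x              # fastest-varying thread index -> column j
    ty = cuda.threadIdx.y
    j = cuda.blockIdx.x * TPB + tx
    i = cuda.blockIdx.y * TPB + ty
    n, m = x.shape
    # Cooperative, mostly coalesced load of the halo'd tile; threads near the
    # block edge load a second element.
    for ii in range(ty, HALO_TILE, TPB):
        for jj in range(tx, HALO_TILE, TPB):
            gi = cuda.blockIdx.y * TPB + ii - 1
            gj = cuda.blockIdx.x * TPB + jj - 1
            if 0 <= gi < n and 0 <= gj < m:
                tile[ii, jj] = x[gi, gj]
            else:
                tile[ii, jj] = 0.0
    cuda.syncthreads()
    if 1 <= i < n - 1 and 1 <= j < m - 1:
        # (i, j) maps to tile[ty + 1, tx + 1]; its whole 3x3 neighbourhood is in shared memory.
        out[i, j] = (tile[ty,     tx] + tile[ty,     tx + 1] + tile[ty,     tx + 2] +
                     tile[ty + 1, tx] + tile[ty + 1, tx + 1] + tile[ty + 1, tx + 2] +
                     tile[ty + 2, tx] + tile[ty + 2, tx + 1] + tile[ty + 2, tx + 2]) / 9

# Launch with the x grid dimension covering columns (m) and y covering rows (n), e.g.:
#   blocks = (math.ceil(m / TPB), math.ceil(n / TPB))
#   smooth_gpu_shared[blocks, (TPB, TPB)](x_dev, out_dev)   # x_dev/out_dev: device arrays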

You can find a fuller description of Numba GPU memory management here.
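A minimal sketch of that explicit memory-management pattern (the same one t29.py uses, plus the copy back to the host) might look like this, assuming a CUDA-capable device is available:

from numba import cuda
import numpy as np

a_host = np.ones((1000, 1000), dtype=np.float32)

a_dev   = cuda.to_device(a_host)          # host -> device copy, done once up front
out_dev = cuda.device_array_like(a_dev)   # uninitialized device buffer, same shape/dtype

# ... launch one or more kernels on a_dev / out_dev here ...

result = out_dev.copy_to_host()           # device -> host copy, only when finished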

Here is a general description of the memory-efficiency (coalescing) issue related to index ordering.
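In short, for the C-ordered (row-major) arrays used here, the fast-varying thread index should map to the last array axis. A small sketch (mine) of the two choices:

from numba import cuda

@cuda.jit
def copy_kernel(a, out):
    # cuda.grid(2) returns (x, y); the x component is derived from threadIdx.x,
    # which varies fastest within a warp. In a row-major array a[i, j], the
    # elements a[i, j] and a[i, j + 1] are adjacent in memory, so the fast
    # index should drive j:
    j, i = cuda.grid(2)    # coalesced: neighbouring threads touch adjacent addresses
    # i, j = cuda.grid(2)  # uncoalesced: neighbouring threads touch addresses a full
    #                      # row apart (10000 float32s = ~40 KB in this example)
    if i < a.shape[0] and j < a.shape[1]:
        out[i, j] = a[i, j]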

I found this comparison interesting and wanted to look into the impact of reusing the compiled kernel, CUDA streams, and randomized data, to make sure no fancy compiler optimization was distorting what we were seeing.

I modified the code sample posted by Robert Crovella and ran the script on a modest ML rig at school:

Code:

import numpy as np
from time import perf_counter
from numba import njit, prange, cuda

# cpuinfo is a third party package from here:
#   https://github.com/workhorsy/py-cpuinfo
# or you can just install it using pip with:
#   python -m pip install -U py-cpuinfo
from cpuinfo import get_cpu_info

print("Some diagnostic info for the system running this script:")
# prints information about the cuda GPU
cuda.detect()
print()
# Prints a json string describing the cpu
s = get_cpu_info()
print("Cpu info")
for k, v in s.items():
    print(f"\t{k}: {v}")
print()

cpu_s1 = "CPU execution time:"
cpu_s2 = "CPU full setup/execution time:"
gpu_s1 = "GPU kernel execution time:"
gpu_s2 = "GPU full kernel setup/execution time:"
l = len(gpu_s2) + 1

# using randomized floats to ensure there isn't some compiler optimization that
# recognizes that all values of the x array are constant 1's and does something
# goofy under the hood. Each timing scenario will then use a copy of this array.
common_x = np.random.random((10000, 10000)).astype(np.float32)


def time_njit(n_loops=2):
    start_time_full_function = perf_counter()

    @njit(parallel=True, nogil=True)
    def smooth_cpu(x, out):
        h, w = x.shape
        for i in prange(1, h - 1):
            for j in range(1, w - 1):
                out[i, j] = (x[i - 1, j - 1] + x[i - 1, j] +
                             x[i - 1, j + 1] + x[i    , j - 1] +
                             x[i    , j]     + x[i    , j + 1] +
                             x[i + 1, j - 1] + x[i + 1, j] +
                             x[i + 1, j + 1]) / 9

    pre_x = np.ones((10, 10), dtype=common_x.dtype)
    pre_out = np.ones((10, 10), dtype=common_x.dtype)
    _x = common_x.copy()
    _out = np.zeros_like(_x)

    # run on cpu
    smooth_cpu(pre_x, pre_out)  # compile before measuring time
    start_time = perf_counter()
    for _ in range(n_loops):
        # realistically, we wouldn't typically run just a single blurring pass
        smooth_cpu(_x, _out)
        smooth_cpu(_out, _x)
    end_time = perf_counter()
    end_time_full_function = perf_counter()
    print(f"{cpu_s1:<{l}} {end_time - start_time:1.6f}s running {n_loops} loops"
          f"\n{cpu_s2:<{l}} {end_time_full_function - start_time_full_function:1.6f}s")
    return _x


def time_cuda(n_loops=2):
    """There is room for optimization in how we use cuda.shared.array memory on the GPU
    -- where I'm not aware of any analogous tricks for the cpu function -- that would
    allow us to minimize the number of times each thread-block needs to access data in
    the GPU's global memory. But such an implementation would take us deeper into the
    weeds than this toy problem calls for.
    Maybe if I need to take a break from my other work later I'll come back to this
    and flesh out an example of what I mean.
    """
    start_time_full_function = perf_counter()

    @cuda.jit
    def smooth_gpu(x, out):
        """Slight change to the cuda kernel. This version uses **striding** to reduce
        processor overhead spent allocating and deallocating a lot of thread blocks
        that ultimately have each thread compute a single calculation before being
        disposed of.
        This way we offset some of the overhead cost spent on block allocation by
        making each block do a bit more work.
        Note: For this to work right, we have to allocate fewer blocks with
        our `blockspergrid_j` and `blockspergrid_i` variables.
        """
        jstart, istart = cuda.grid(2)
        jstep, istep = cuda.gridsize(2)
        rows, cols = x.shape
        # note that for strided kernels, thread indices
        # are completely independent of the data size/shape
        for i in range(istart + 1, rows - 1, istep):
            for j in range(jstart + 1, cols - 1, jstep):
                # Because x and out use row-major (C-order) memory layout,
                # we want to make sure the most frequently changing index (j)
                # is iterating through the last dimension of the array.
                out[i, j] = (x[i - 1, j - 1] + x[i - 1, j] + x[i - 1, j + 1] +
                             x[i    , j - 1] + x[i    , j] + x[i    , j + 1] +
                             x[i + 1, j - 1] + x[i + 1, j] + x[i + 1, j + 1]) / 9

    _x = common_x.copy()
    _out = np.zeros_like(_x)
    stream = cuda.stream()
    x_gpu = cuda.to_device(_x, stream)
    out_gpu = cuda.to_device(_out, stream)

    tpbj = 16
    tpbi = 16
    threadsperblock = tpbj, tpbi
    blockspergrid_j = (_x.shape[0] + tpbj - 1) // tpbj
    blockspergrid_i = (_x.shape[1] + tpbi - 1) // tpbi
    # reduce the number of blocks in each axis
    # to a quarter to give room for striding
    blockspergrid = (blockspergrid_j // 4, blockspergrid_i // 4)

    # run on gpu
    compiled = smooth_gpu[blockspergrid, threadsperblock, stream]  # compile before measuring time
    start_time = perf_counter()
    for _ in range(n_loops):
        # realistically, we wouldn't typically run just a single blurring pass
        compiled(x_gpu, out_gpu)
        compiled(out_gpu, x_gpu)
    x_gpu.copy_to_host(_out, stream)
    stream.synchronize()
    end_time = perf_counter()
    end_time_full_function = perf_counter()
    print(f"{gpu_s1:<{l}} {end_time - start_time:1.6f}s running {n_loops} loops"
          f"\n{gpu_s2:<{l}} {end_time_full_function - start_time_full_function:1.6f}s")
    return _out


if __name__ == '__main__':
    a = time_njit(1)
    b = time_cuda(1)
    assert np.allclose(a, b), "The two functions didn't actually compute the same results"
    print(f"{'    '*4}Outputs are equivalent")

    a = time_njit(5)
    b = time_cuda(5)
    assert np.allclose(a, b), "The two functions didn't actually compute the same results"
    print(f"{'    '*4}Results are equivalent")

    a = time_njit(10)
    b = time_cuda(10)
    assert np.allclose(a, b), "The two functions didn't actually compute the same results"
    print(f"{'    '*4}Results are equivalent")

    a = time_njit(20)
    b = time_cuda(20)
    assert np.allclose(a, b), "The two functions didn't actually compute the same results"
    print(f"{'    '*4}Results are equivalent")

Output:

Some diagnostic info for the system running this script:
Found 1 CUDA devices
id 0    b'GeForce RTX 2080 Ti'                              [SUPPORTED]
compute capability: 7.5
pci device id: 0
pci bus id: 1
Summary:
1/1 devices are supported
Cpu info:
python_version: 3.8.8.final.0 (64 bit)
cpuinfo_version: [7, 0, 0]
cpuinfo_version_string: 7.0.0
arch: X86_64
bits: 64
count: 8
arch_string_raw: AMD64
vendor_id_raw: GenuineIntel
brand_raw: Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz
hz_advertised_friendly: 4.0000 GHz
hz_actual_friendly: 4.0010 GHz
hz_advertised: [4000000000, 0]
hz_actual: [4001000000, 0]
l2_cache_size: 1048576
stepping: 3
model: 60
family: 6
l3_cache_size: 8388608
flags: ['3dnow', 'abm', 'acpi', 'aes', 'apic', 'avx', 'avx2', 'bmi1', 'bmi2', 'clflush', 'cmov', 'cx16', 'cx8', 'de', 'dts', 'erms', 'est', 'f16c', 'fma', 'fpu', 'fxsr', 'ht', 'hypervisor', 'ia64', 'invpcid', 'lahf_lm', 'mca', 'mce', 'mmx', 'movbe', 'msr', 'mtrr', 'osxsave', 'pae', 'pat', 'pbe', 'pcid', 'pclmulqdq', 'pdcm', 'pge', 'pni', 'popcnt', 'pse', 'pse36', 'rdrnd', 'sep', 'serial', 'smep', 'ss', 'sse', 'sse2', 'sse4_1', 'sse4_2', 'ssse3', 'tm', 'tm2', 'tsc', 'vme', 'xsave', 'xtpr']
l2_cache_line_size: 256
l2_cache_associativity: 6
Time comparisons for CPU vs GPU implementations:
CPU execution time:                    0.327143s running 1 loops
CPU full setup/execution time:         0.980959s
GPU kernel execution time:             0.088015s running 1 loops
GPU full kernel setup/execution time:  0.868085s
Outputs are equivalent
CPU execution time:                    1.539007s running 5 loops
CPU full setup/execution time:         2.134781s
GPU kernel execution time:             0.097627s running 5 loops
GPU full kernel setup/execution time:  0.695104s
Outputs are equivalent
CPU execution time:                    3.463488s running 10 loops
CPU full setup/execution time:         4.310506s
GPU kernel execution time:             0.122363s running 10 loops
GPU full kernel setup/execution time:  0.655500s
Outputs are equivalent
CPU execution time:                    6.416840s running 20 loops
CPU full setup/execution time:         7.011254s
GPU kernel execution time:             0.158903s running 20 loops
GPU full kernel setup/execution time:  0.723226s
Outputs are equivalent
CPU execution time:                    9.285086s running 30 loops
CPU full setup/execution time:         9.890282s
GPU kernel execution time:             0.209807s running 30 loops
GPU full kernel setup/execution time:  0.728618s
Outputs are equivalent
CPU execution time:                    12.610949s running 40 loops
CPU full setup/execution time:         13.177427s
GPU kernel execution time:             0.253696s running 40 loops
GPU full kernel setup/execution time:  0.836536s
Outputs are equivalent
CPU execution time:                    15.376767s running 50 loops
CPU full setup/execution time:         15.976361s
GPU kernel execution time:             0.289626s running 50 loops
GPU full kernel setup/execution time:  0.841918s
Outputs are equivalent
Process finished with exit code 0

Honestly, these results both met and defied my expectations. I had expected the CPU implementation to beat the GPU at least for the single-loop function call, but it didn't. Still, the CPU's time cost appears to grow linearly with the number of loops, which is what I expected.

As for the GPU performance, I really don't know why the time cost of increasing the loop count appears to grow logarithmically (I would have to plot the data points to see it more clearly).

In any case, the results you see will vary from machine to machine, but I would be curious at what CUDA compute level the GPU results would merely match the CPU's.
