How can I use PyOpenCL to multiply 100 integers by another 100 integers in parallel on the GPU?



There are plenty of PyOpenCL examples that perform arithmetic on vectors of size 4. If I have to multiply 100 integers by another 100 integers at the same time through PyOpenCL, using the AMD GPU in my Mac, could someone provide and explain the code? Since the maximum vector size is 16, I would like to know how I can ask the GPU to carry out this operation, which needs more than 16 integers processed in parallel.

I have an AMD FirePro D500 GPU. Does every work item (thread) carry out its task independently? If so, the GPU has 24 compute units, each with 255 work items in one dimension, or [255, 255, 255] work items across three dimensions. Does that mean my GPU has 6120 independent work items?
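A quick way to check these numbers on your own machine is to query the device info through PyOpenCL. The sketch below is only an assumption of how you might inspect them; it simply prints whatever limits the driver reports:

import pyopencl as cl

#list every OpenCL device and the limits relevant to work-item scheduling
for platform in cl.get_platforms():
    for device in platform.get_devices():
        print("Device:              ", device.name)
        print("  Max compute units:  ", device.max_compute_units)
        print("  Max work-group size:", device.max_work_group_size)
        print("  Max work-item sizes:", device.max_work_item_sizes)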

Here is a short, entry-level example of element-wise multiplication of two one-dimensional integer arrays. Note that if you only plan to multiply 100 values, this will not be faster than doing it on the CPU, because the overhead of copying the data back and forth dominates.

import pyopencl as cl
import numpy as np
#this is compiled by the GPU driver and will be executed on the GPU
kernelsource = """  
__kernel void multInt(  __global int* res,
                        __global int* a,
                        __global int* b){
    int i = get_global_id(0);
    int N = get_global_size(0); //the global size given as second argument when launching the kernel (unused here, shown for reference)
    res[i] = a[i] * b[i];
}
"""
device = cl.get_platforms()[0].get_devices()[0]
context = cl.Context([device])
program = cl.Program(context, kernelsource).build()
queue = cl.CommandQueue(context)
#preparing input data in numpy arrays in local memory (i.e. accessible by the CPU)
N = 100
a_local = np.array(range(N)).astype(np.int32)
b_local = (np.ones(N)*10).astype(np.int32)
#preparing result buffer in local memory
res_local = np.zeros(N).astype(np.int32)
#copy input data to GPU-memory
a_buf = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=a_local)
b_buf = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=b_local)
#prepare result buffer in GPU-memory
res_buf = cl.Buffer(context, cl.mem_flags.WRITE_ONLY, res_local.nbytes)
#execute previously compiled kernel on GPU
program.multInt(queue,(N,), None, res_buf, a_buf, b_buf)
#copy the result from GPU-memory to CPU-memory
cl.enqueue_copy(queue, res_local, res_buf)
print("result: {}".format(res_local))

Regarding the documentation of PyOpenCL: once you understand how GPGPU programming works in general and the programming concepts of OpenCL, PyOpenCL is fairly straightforward to pick up.
