I often have this pattern in my code: a first kernel filters a large dataset, from which the selected entries returned will be very sparse, followed by a second kernel that performs a more involved computation on the much-reduced dataset.
> The cudaStreamSynchronize seems almost superfluous, but I can't see any way around it.
- Is there an alternative pattern that avoids the synchronization between the kernels?
- Would CUDA Dynamic Parallelism help in any way?
Sample code:
/* Pseudocode. Won't compile. */
/* Please ignore silly mistakes/syntax and inefficient/incorrect simplifications. */
__global__ void bar( const float * dataIn, float * dataOut, unsigned int * counter_ptr )
{
    < do some computation >
    if (bConditionalComputedAboveIsTrue)
    {
        /* Atomically reserve a slot in the compacted output array */
        const unsigned int ind = atomicInc(counter_ptr, (unsigned int)(-1));
        dataOut[ ind ] = resultOfAboveComputation;
    }
}
int foo( float * d_datain, float * d_tempbuffer, float * d_output, cudaStream_t stream )
{
    /* Initialize a counter that will be updated by the bar kernel */
    unsigned int * counter_ptr;
    cudaMalloc( &counter_ptr, sizeof(unsigned int) );                //< Create a counter
    cudaMemsetAsync( counter_ptr, 0, sizeof(unsigned int), stream ); //< Initially set the counter to 0

    dim3 threadsInit(16, 16, 1);
    dim3 gridInit(256, 1, 1);
    /* Launch the filtering kernel. This will update the value in counter_ptr. */
    bar<<< gridInit, threadsInit, 0, stream >>>( d_datain, d_tempbuffer, counter_ptr );

    /* Download the count and synchronize the stream */
    unsigned int count;
    cudaMemcpyAsync( &count, counter_ptr, sizeof(unsigned int), cudaMemcpyDeviceToHost, stream );
    cudaStreamSynchronize( stream ); //< Is there any way around this synchronize?

    /* Compute the grid parameters and launch a second kernel */
    dim3 bazThreads(128, 1, 1);
    dim3 bazGrid( count/128 + 1, 1, 1 ); //< Here I use the counter modified in the prior kernel to set the grid parameters
    baz<<< bazGrid, bazThreads, 0, stream >>>( d_tempbuffer, d_output );

    /* Cleanup */
    cudaFree( counter_ptr );
    return 0;
}
Instead of varying the number of blocks in the second kernel, you could use a fixed block count and have the blocks adapt to the amount of work they do. For example, launch more blocks than needed and have them exit early if there is no work, or launch just enough blocks to fill the device and have each block loop over the work. Grid-stride loops are a good way of doing the latter.
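The grid-stride approach can eliminate the synchronize entirely: since `baz` runs in the same stream as `bar`, it can read the counter directly from device memory, so the host never needs to download the count before launching. A minimal sketch, assuming `numSMs` has been queried via `cudaDeviceGetAttribute` and using a placeholder body for the real computation:

```cuda
/* Sketch, untested: baz reads the count from device memory itself,
 * so no cudaStreamSynchronize is needed on the host. */
__global__ void baz( const float * dataIn, float * dataOut,
                     const unsigned int * counter_ptr )
{
    /* Written by bar earlier in the same stream, so it is valid here */
    const unsigned int count = *counter_ptr;

    /* Grid-stride loop: a fixed-size grid covers any count */
    for (unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
         i < count;
         i += gridDim.x * blockDim.x)
    {
        dataOut[i] = 2.0f * dataIn[i]; /* placeholder computation */
    }
}

/* Host side: fixed launch configuration, no download of the count.
 * numSMs is assumed to come from cudaDeviceGetAttribute(&numSMs,
 * cudaDevAttrMultiProcessorCount, device). */
baz<<< 32 * numSMs, 128, 0, stream >>>( d_tempbuffer, d_output, counter_ptr );
```

The multiplier on `numSMs` is a tunable occupancy choice, not a fixed rule; the point is only that the grid size no longer depends on `count`.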
There is also the option of using dynamic parallelism to move the kernel launch itself (and the decision about the grid size) onto the device.
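With dynamic parallelism, a tiny launcher kernel can read the counter and size the grid on the device, again avoiding the host-side synchronize. A hedged sketch (the name `launch_baz` is hypothetical; this requires compute capability 3.5+ and compilation with `-rdc=true`, and device-side launch semantics changed in CUDA 12, so check the current Programming Guide):

```cuda
/* Sketch, untested: a single-thread parent kernel sizes and launches
 * baz on the device, using the counter written by bar. */
__global__ void launch_baz( const float * dataIn, float * dataOut,
                            const unsigned int * counter_ptr )
{
    const unsigned int count = *counter_ptr;
    if (count == 0) return; /* nothing survived the filter */

    dim3 bazThreads(128, 1, 1);
    dim3 bazGrid( (count + 127) / 128, 1, 1 ); /* sized on the device */
    baz<<< bazGrid, bazThreads >>>( dataIn, dataOut );
}

/* Host side: replaces the memcpy + synchronize + launch sequence */
launch_baz<<< 1, 1, 0, stream >>>( d_tempbuffer, d_output, counter_ptr );
```

Whether this wins over the grid-stride approach depends on the launch overhead of the parent kernel versus the cost of the wasted blocks; for a single dependent launch the grid-stride version is usually the simpler fix.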