CUDA 8 Unified Memory on Pascal Titan X / GP102



That article says that CUDA 8 improves unified memory support on Pascal GPUs, so that "on supported platforms, memory allocated with the default OS allocator (e.g. 'malloc' or 'new') can be accessed from both GPU code and CPU code using the same pointer".

I was excited about this and wrote a small test program to see whether my system supports the feature:

#include <stdio.h>
#define CUDA_CHECK( call ) {                                  \
    cudaError_t code = ( call );                              \
    if ( code != cudaSuccess ) {                              \
        const char* msg = cudaGetErrorString( code );         \
        printf( "%s #%d: %s\n", __FILE__, __LINE__, msg );    \
    }                                                         \
}
#define N 10
__global__
void test_unified_memory( int* input, int* output )
{
    output[ threadIdx.x ] = input[ threadIdx.x ] * 2;
}
int main()
{
    // Allocate N ints (not N bytes) with the default OS allocator.
    int* input  = (int*) malloc( N * sizeof( int ) );
    int* output = (int*) malloc( N * sizeof( int ) );
    for ( int i = 0; i < N; ++i ) input[ i ] = i;
    test_unified_memory <<< 1, N >>>( input, output );
    CUDA_CHECK( cudaDeviceSynchronize() );
    for ( int i = 0; i < N; ++i ) printf( "%d, ", output[ i ] );
    free( input );
    free( output );
}

But it didn't work.

I'm wondering what "supported platforms" means. Here is my system configuration:

$uname -r
3.10.0-327.el7.x86_64
$nvidia-smi
Tue Jan 10 14:46:11 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.26                 Driver Version: 375.26                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  TITAN X (Pascal)    Off  | 0000:01:00.0     Off |                  N/A |
| 36%   61C    P0    88W / 250W |      2MiB / 12189MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
$deviceQuery
NVIDIA_CUDA-7.5_Samples/bin/x86_64/linux/release/deviceQuery Starting...
 CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "TITAN X (Pascal)"
  CUDA Driver Version / Runtime Version          8.0 / 7.5
  CUDA Capability Major/Minor version number:    6.1
  Total amount of global memory:                 12189 MBytes (12781551616 bytes)
MapSMtoCores for SM 6.1 is undefined.  Default to use 128 Cores/SM
MapSMtoCores for SM 6.1 is undefined.  Default to use 128 Cores/SM
  (28) Multiprocessors, (128) CUDA Cores/MP:     3584 CUDA Cores
  GPU Max Clock rate:                            1531 MHz (1.53 GHz)
  Memory Clock rate:                             5005 Mhz
  Memory Bus Width:                              384-bit
  L2 Cache Size:                                 3145728 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 7.5, NumDevs = 1, Device0 = TITAN X (Pascal)
Result = PASS

The answer may simply be that the Titan X / GP102 does not support this feature, but I can't find any information or documentation about it. Can anyone tell me whether my configuration supports it, and point me to a reference for this kind of information? Thanks.

As suggested in the comments, it may depend on the host operating system. So what are the requirements on the host, and how can I check for and fix them?

This new unified memory feature appears to require an experimental Linux kernel patch that has not yet been merged into any mainline kernel tree. It should be regarded as a future capability rather than something that can be used today.
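
If you want to see what the runtime itself reports for this capability, CUDA 8 exposes it as a device property. A minimal check, assuming the CUDA 8 headers (which add the pageableMemoryAccess and concurrentManagedAccess fields to cudaDeviceProp):

#include <stdio.h>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties( &prop, 0 );

    // 1 if the device can coherently access pageable (malloc/new) host
    // memory -- the capability asked about in the question. This needs
    // OS support in addition to a Pascal GPU and CUDA 8.
    printf( "pageableMemoryAccess:    %d\n", prop.pageableMemoryAccess );

    // 1 if the CPU and GPU can access managed allocations concurrently
    // (Pascal-style demand paging for cudaMallocManaged memory).
    printf( "concurrentManagedAccess: %d\n", prop.concurrentManagedAccess );
    return 0;
}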

Edit to add that, as noted in the comments, you are also building against CUDA 7.5; regardless of the host kernel issue, you will need CUDA 8 for this feature.
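
For completeness, the way to get a single pointer that works on both host and device today, without waiting for the kernel patch, is cudaMallocManaged. A minimal sketch of the same test using managed memory (assuming CUDA 8 and a Pascal-capable driver):

#include <stdio.h>
#include <cuda_runtime.h>

#define N 10

__global__
void test_managed_memory( int* input, int* output )
{
    output[ threadIdx.x ] = input[ threadIdx.x ] * 2;
}

int main()
{
    int *input, *output;

    // Managed allocations are visible to both CPU and GPU through the
    // same pointer; on Pascal they are migrated on demand via page faults.
    cudaMallocManaged( &input,  N * sizeof( int ) );
    cudaMallocManaged( &output, N * sizeof( int ) );

    for ( int i = 0; i < N; ++i ) input[ i ] = i;

    test_managed_memory <<< 1, N >>>( input, output );
    cudaDeviceSynchronize();

    for ( int i = 0; i < N; ++i ) printf( "%d, ", output[ i ] );
    printf( "\n" );

    cudaFree( input );
    cudaFree( output );
    return 0;
}

Unlike the malloc version above, this does not depend on experimental kernel support, only on CUDA 8 and a recent driver.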
