Python CNTK speed comparison: 1-bit SGD vs. normal SGD with 4 GPUs



I installed version 2.0.beta7 of CNTK on an Azure NC24 GPU VM running Ubuntu (Python 3.4). The machine has 4 NVIDIA K80 GPUs. Build info:

            Build type: release
            Build target: GPU
            With 1bit-SGD: yes
            With ASGD: yes
            Math lib: mkl
            CUDA_PATH: /usr/local/cuda-8.0
            CUB_PATH: /usr/local/cub-1.4.1
            CUDNN_PATH: /usr/local
            Build Branch: HEAD
            Build SHA1: 8e8b5ff92eff4647be5d41a5a515956907567126
            Built by Source/CNTK/buildinfo.h$$0 on bbdadbf3455d
            Build Path: /home/philly/jenkins/workspace/CNTK-Build-Linux

I ran the CIFAR example in distributed mode:

mpiexec -n 4 python TrainResNet_CIFAR10_Distributed.py -n resnet20 -q 32
Finished Epoch [1]: [Training] loss = 1.675002 * 50176, metric = 62.5% * 50176 112.019s (447.9 samples per second)
Finished Epoch [1]: [Training] loss = 1.675002 * 50176, metric = 62.5% * 50176 112.019s (447.9 samples per second)
Finished Epoch [1]: [Training] loss = 1.675002 * 50176, metric = 62.5% * 50176 112.018s (447.9 samples per second)
Finished Epoch [1]: [Training] loss = 1.675002 * 50176, metric = 62.5% * 50176 112.019s (447.9 samples per second)
Finished Epoch [2]: [Training] loss = 1.247423 * 50176, metric = 45.4% * 50176 8.210s (6111.3 samples per second)
Finished Epoch [2]: [Training] loss = 1.247423 * 50176, metric = 45.4% * 50176 8.210s (6111.4 samples per second)
Finished Epoch [2]: [Training] loss = 1.247423 * 50176, metric = 45.4% * 50176 8.210s (6111.8 samples per second)
Finished Epoch [2]: [Training] loss = 1.247423 * 50176, metric = 45.4% * 50176 8.210s (6111.6 samples per second)
...
...
Finished Epoch [160]: [Training] loss = 0.037745 * 49664, metric = 1.2% * 49664 7.883s (6300.4 samples per second)
Finished Epoch [160]: [Training] loss = 0.037745 * 49664, metric = 1.2% * 49664 7.883s (6299.7 samples per second)
Finished Epoch [160]: [Training] loss = 0.037745 * 49664, metric = 1.2% * 49664 7.884s (6299.7 samples per second)
Finished Epoch [160]: [Training] loss = 0.037745 * 49664, metric = 1.2% * 49664 7.884s (6299.2 samples per second)

However, when I run it with 1-bit SGD, I get:

mpiexec -n 4 python TrainResNet_CIFAR10_Distributed.py -n resnet20 -q 1 -a 50000
...
Finished Epoch [160]: [Training] loss = 0.059290 * 49664, metric = 2.1% * 49664 10.055s (4939.1 samples per second)
Finished Epoch [160]: [Training] loss = 0.059290 * 49664, metric = 2.1% * 49664 10.056s (4938.9 samples per second)
Finished Epoch [160]: [Training] loss = 0.059290 * 49664, metric = 2.1% * 49664 10.056s (4938.9 samples per second)
Finished Epoch [160]: [Training] loss = 0.059290 * 49664, metric = 2.1% * 49664 10.056s (4938.9 samples per second)

As explained here, 1-bit SGD is supposed to be faster than its normal counterpart. Any help would be appreciated.

1-bit SGD is an effective strategy when the time spent communicating between GPUs is large compared to the time spent computing a minibatch.
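
For context, enabling 1-bit SGD in the CIFAR script essentially comes down to wrapping the local learner in a data-parallel distributed learner. Below is a minimal sketch of that wiring in the CNTK 2.x Python API; the exact module path and keyword names may differ slightly between beta builds, and `make_distributed_learner` is just an illustrative helper, not part of the example script:

    import cntk as C
    from cntk.train.distributed import data_parallel_distributed_learner, Communicator

    def make_distributed_learner(model, lr_schedule, mom_schedule,
                                 quantization_bits, warm_start_samples):
        # Plain local learner; the distributed wrapper handles gradient exchange.
        local_learner = C.momentum_sgd(model.parameters, lr_schedule, mom_schedule)

        # num_quantization_bits=32 means no quantization (plain data-parallel SGD);
        # 1 enables 1-bit SGD. distributed_after delays aggregation until that many
        # samples have been processed (the -a flag of the example script).
        return data_parallel_distributed_learner(
            local_learner,
            num_quantization_bits=quantization_bits,
            distributed_after=warm_start_samples)

    # After training, every MPI rank must finalize the communicator:
    # Communicator.finalize()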

The experiment above has two "problems": the model being trained has very few parameters (so there is not much computation per minibatch), and the 4 GPUs are on the same machine (so communication is not that costly compared to going over a network). Moreover, within a single machine CNTK uses NVIDIA NCCL, which is much better optimized than the generic MPI implementation used by 1-bit SGD. Update: at the time this answer was written, NCCL was not used by default.
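
To put some rough numbers on the first point (the parameter count and bandwidth below are approximate, assumed figures for illustration only): ResNet-20 for CIFAR-10 has on the order of 0.27M parameters, so the gradient exchanged per minibatch is tiny either way.

    # Back-of-the-envelope figures; the parameter count is approximate and the
    # bandwidth is an assumed ballpark for intra-machine PCIe transfers.
    params = 0.27e6                  # ~ResNet-20 parameter count
    fp32_bytes = params * 4          # full-precision gradient: ~1.1 MB
    one_bit_bytes = params / 8       # 1-bit quantized gradient: ~34 KB

    pcie_bandwidth = 10e9            # assumed ~10 GB/s effective intra-machine bandwidth
    print("fp32 exchange:  %.2f MB, ~%.3f ms" % (fp32_bytes / 1e6, 1e3 * fp32_bytes / pcie_bandwidth))
    print("1-bit exchange: %.1f KB, ~%.4f ms" % (one_bit_bytes / 1e3, 1e3 * one_bit_bytes / pcie_bandwidth))
    # Even the unquantized exchange costs well under a millisecond per minibatch,
    # so 1-bit quantization saves almost nothing here, while the quantize/unquantize
    # overhead it adds can make each epoch slower (as observed above).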
