The documentation for deploying PaddlePaddle with Docker [1] differs slightly from the documentation for manually installing PaddlePaddle from source [2].
After pulling the container from DockerHub:
docker pull paddledev/paddle
the Docker install docs state that the environment variables should be set and passed to docker run, i.e.:
export CUDA_SO="$(ls /usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') $(ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')"
export DEVICES=$(ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')
docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:gpu-latest
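For reference, on a host where the driver libraries actually live under /usr/lib64, those two variables expand into a command along these lines (the library filenames and device nodes below are illustrative and will differ per machine):

# illustrative expansion of ${CUDA_SO} and ${DEVICES}, not a command from the docs
docker run \
    -v /usr/lib64/libcuda.so.1:/usr/lib64/libcuda.so.1 \
    -v /usr/lib64/libnvidia-ml.so.1:/usr/lib64/libnvidia-ml.so.1 \
    --device /dev/nvidia0:/dev/nvidia0 \
    --device /dev/nvidiactl:/dev/nvidiactl \
    --device /dev/nvidia-uvm:/dev/nvidia-uvm \
    -it paddledev/paddle:gpu-latest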
The export commands appear to look for libcuda* and libnvidia* under /usr/lib64/, but according to the build-from-source docs the lib64/ directory should instead be at /usr/local/cuda/lib64. In any case, the location of lib64/ can be found with:
cat /etc/ld.so.conf.d/cuda.conf
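On my machine that file just lists the CUDA library directory, so (assuming it contains a single path line) it can be captured into a variable; this is how the CUDA_CONFILE variable used further down was set. The file contents shown in the comment are an example, not taken from the PaddlePaddle docs:

# cuda.conf typically contains one line such as /usr/local/cuda/lib64
export CUDA_CONFILE=$(head -n 1 /etc/ld.so.conf.d/cuda.conf)
echo $CUDA_CONFILE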
In addition, the export command looks for libnvidia*, which does not appear to exist anywhere under /usr/local/cuda/ except for libnvidia-ml.so:
/usr/local/cuda$ find . -name 'libnvidia*'
./lib64/stubs/libnvidia-ml.so
I assume the correct files that CUDA_SO should be pointing to are
- /usr/local/cuda/lib64/libcudart.so.8.0 or
- /usr/local/cuda/lib64/libcudart.so.7.5
but is that right? What should the CUDA_SO environment variable be set to in order to deploy GPU-enabled PaddlePaddle?
Even after setting the variable to point at the libcudart* files, the Docker container still does not seem to find the GPU driver, i.e.:
user0@server1:~/dockdock$ echo CUDA_SO="$(ls $CUDA_CONFILE/libcuda* | xargs -I{} echo '-v {}:{}')"
CUDA_SO=-v /usr/local/cuda/lib64/libcudadevrt.a:/usr/local/cuda/lib64/libcudadevrt.a
-v /usr/local/cuda/lib64/libcudart.so:/usr/local/cuda/lib64/libcudart.so
-v /usr/local/cuda/lib64/libcudart.so.8.0:/usr/local/cuda/lib64/libcudart.so.8.0
-v /usr/local/cuda/lib64/libcudart.so.8.0.44:/usr/local/cuda/lib64/libcudart.so.8.0.44
-v /usr/local/cuda/lib64/libcudart_static.a:/usr/local/cuda/lib64/libcudart_static.a
user0@server1:~/dockdock$ export CUDA_SO="$(ls $CUDA_CONFILE/libcuda* | xargs -I{} echo '-v {}:{}')"
user0@server1:~/dockdock$ export DEVICES=$(ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')
user0@server1:~/dockdock$ docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:gpu-latest
root@bd25dfd4f824:/# git clone https://github.com/baidu/Paddle paddle
Cloning into 'paddle'...
remote: Counting objects: 26626, done.
remote: Compressing objects: 100% (23/23), done.
remote: Total 26626 (delta 3), reused 0 (delta 0), pack-reused 26603
Receiving objects: 100% (26626/26626), 25.41 MiB | 4.02 MiB/s, done.
Resolving deltas: 100% (18786/18786), done.
Checking connectivity... done.
root@bd25dfd4f824:/# cd paddle/demo/quick_start/
root@bd25dfd4f824:/paddle/demo/quick_start# sed -i 's|--use_gpu=false|--use_gpu=true|g' train.sh
root@bd25dfd4f824:/paddle/demo/quick_start# bash train.sh
I0410 09:25:37.300365 48 Util.cpp:155] commandline: /usr/local/bin/../opt/paddle/bin/paddle_trainer --config=trainer_config.lr.py --save_dir=./output --trainer_count=4 --log_period=100 --num_passes=15 --use_gpu=true --show_parameter_stats_period=100 --test_all_data_in_one_period=1
F0410 09:25:37.300940 48 hl_cuda_device.cc:526] Check failed: cudaSuccess == cudaStat (0 vs. 35) Cuda Error: CUDA driver version is insufficient for CUDA runtime version
*** Check failure stack trace: ***
@ 0x7efc20557daa (unknown)
@ 0x7efc20557ce4 (unknown)
@ 0x7efc205576e6 (unknown)
@ 0x7efc2055a687 (unknown)
@ 0x895560 hl_specify_devices_start()
@ 0x89576d hl_start()
@ 0x80f402 paddle::initMain()
@ 0x52ac5b main
@ 0x7efc1f763f45 (unknown)
@ 0x540c05 (unknown)
@ (nil) (unknown)
/usr/local/bin/paddle: line 109: 48 Aborted (core dumped) ${DEBUGGER} $MYDIR/../opt/paddle/bin/paddle_trainer ${@:2}
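The "CUDA driver version is insufficient for CUDA runtime version" failure usually means the CUDA runtime inside the container cannot find a driver library (libcuda) at least as new as itself, either because the host driver is too old or because the driver library was never mapped into the container. A quick way to compare driver and runtime versions on the host (standard NVIDIA/CUDA tools, nothing PaddlePaddle-specific) is:

nvidia-smi                        # reports the installed driver version
cat /proc/driver/nvidia/version   # same information, read from the kernel module
nvcc --version                    # reports the CUDA toolkit/runtime version
# The driver must be at least as new as the runtime in the container,
# e.g. CUDA 8.0 requires a 367.48 or newer Linux driver.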
[1]: http://www.paddlepaddle.org/doc/build/docker_install.html
[2]: http://paddlepaddle.org/doc/build/build_from_source.html
How do I deploy a GPU-enabled PaddlePaddle Docker container?
Also asked in Chinese at: https://github.com/PaddlePaddle/Paddle/issues/1764
Please refer to http://www.paddlepaddle.org/develop/doc/getstarted/build_and_install/docker_install_en.html
The recommended approach is to use nvidia-docker.
Please follow that tutorial to install nvidia-docker first.
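Once nvidia-docker is installed, a quick sanity check (this is the standard smoke test from the nvidia-docker project, not a PaddlePaddle command) is:

# should print the host's GPU list from inside a container
nvidia-docker run --rm nvidia/cuda nvidia-smi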
Then you can run the GPU image:
docker pull paddlepaddle/paddle
nvidia-docker run -it --rm paddlepaddle/paddle:0.10.0rc2-gpu /bin/bash
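Inside that container, the quick_start steps from the question should then reach the GPU without any manual -v library mounts, e.g. to reproduce the same check:

# re-running the steps from the question inside the nvidia-docker container
git clone https://github.com/baidu/Paddle paddle
cd paddle/demo/quick_start
sed -i 's|--use_gpu=false|--use_gpu=true|g' train.sh
bash train.sh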