I am trying to install PyTorch on a Jetson Nano running Ubuntu 18.04. My reference is https://dev.to/evanilukhin/guide-to-install-pytorch-with-cuda-on-ubuntu-18-04-5217
This is what I get when I try the following command:
(my_env) crigano@crigano-desktop:~$ python3.8 -m pip install numpy ninja pyyaml mkl mkl-include setuptools cmake cffi typing
Collecting numpy
Using cached numpy-1.20.2-cp38-cp38-manylinux2014_aarch64.whl (12.7 MB)
Collecting ninja
Using cached ninja-1.10.0.post2.tar.gz (25 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Collecting pyyaml
Using cached PyYAML-5.4.1-cp38-cp38-manylinux2014_aarch64.whl (818 kB)
ERROR: Could not find a version that satisfies the requirement mkl
ERROR: No matching distribution found for mkl
If you just want to use PyTorch on a bare-metal Jetson Nano, simply install it with NVIDIA's pre-compiled binary wheels. Additional packages can be found in the Jetson Zoo.
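After installing one of NVIDIA's wheels, a quick sanity check is to confirm that torch imports and sees the Nano's GPU. A minimal sketch (it assumes nothing beyond the standard library and, if present, the torch package itself):

```python
# Sanity check after installing NVIDIA's pre-built wheel: does torch import,
# and is the Nano's integrated GPU visible to it?
import importlib.util

spec = importlib.util.find_spec("torch")
if spec is None:
    print("torch is not installed; install NVIDIA's Jetson wheel first")
else:
    import torch
    # On a correctly installed Jetson build, cuda.is_available() should be True.
    print(torch.__version__, "CUDA available:", torch.cuda.is_available())
```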
MKL is developed by Intel "to optimize code for current and future generations of Intel® CPUs and GPUs" [PyPI]. It apparently does run on other x86-based chips such as AMD's (although Intel has historically deliberately crippled the library on non-Intel chips [Wikipedia]), but unsurprisingly Intel has no interest in supporting ARM devices and has not ported MKL to the ARM architecture.
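The architecture mismatch is easy to confirm from Python: pip only resolves wheels tagged for the machine it runs on, and on a Jetson Nano that machine is aarch64, for which no mkl wheel exists. A minimal check:

```python
# MKL wheels on PyPI are published only for x86-based platforms. On a Jetson
# Nano, platform.machine() reports 'aarch64', so pip's "No matching
# distribution found for mkl" is expected, not a problem with your setup.
import platform

arch = platform.machine()
print(arch)
if arch not in ("x86_64", "AMD64"):
    print("mkl has no wheels for this architecture; use openblas instead")
```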
If your goal is to use MKL for math optimizations in numpy, then openblas is a working alternative on ARM; it is what NVIDIA's "L4T PyTorch" Docker images use. You can confirm that numpy detects it with numpy.__config__.show(). This is what I get on the l4t-pytorch:r32.5.0-pth1.6-py3 image, using numpy 1.12 under Python 3.6.9:
blas_mkl_info:
NOT AVAILABLE
blis_info:
NOT AVAILABLE
openblas_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/lib/aarch64-linux-gnu']
language = c
define_macros = [('HAVE_CBLAS', None)]
blas_opt_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/lib/aarch64-linux-gnu']
language = c
define_macros = [('HAVE_CBLAS', None)]
lapack_mkl_info:
NOT AVAILABLE
openblas_lapack_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/lib/aarch64-linux-gnu']
language = c
define_macros = [('HAVE_CBLAS', None)]
lapack_opt_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/lib/aarch64-linux-gnu']
language = c
define_macros = [('HAVE_CBLAS', None)]
So presumably it will use openblas in place of MKL for its math optimizations. If your use case is numpy optimization as well, you can likewise rely on openblas and don't need MKL... which is fortunate, since it isn't available for ARM anyway.