gpucompute* is down* in Slurm cluster

My gpucompute node is in the down state, and I cannot submit jobs to the GPU node. After following every solution I could find online, I still cannot bring my "down" GPU node back. Before this problem, I had an error in the NVIDIA driver configuration and could not detect the GPUs with nvidia-smi; after fixing that error by running "NVIDIA-Linux-x86_64-410.79.run --no-drm", I ran into this one, where the node stays down. Any help is greatly appreciated!

command: sbatch md1.s
sbatch: error: Batch job submission failed: Requested node configuration is not available
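The message itself is generic: sbatch rejects the job because no node matching the requested configuration is currently available, and here the only node in the partition is down. For reference, a minimal GPU batch script (the contents of md1.s are not shown, so the options below are assumptions) could look like this:

#!/bin/bash
#SBATCH --partition=gpucompute   # the partition listed by sinfo below
#SBATCH --gres=gpu:1             # request one GPU; assumes GRES is configured
#SBATCH --time=01:00:00
nvidia-smi                       # placeholder payload; replace with the real workload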
command:  sinfo
PARTITION   AVAIL  TIMELIMIT  NODES  STATE NODELIST
gpucompute*    up   infinite      1  down* fwb-lab-tesla1
command:  sinfo -R
REASON               USER      TIMESTAMP           NODELIST
Not responding       slurm     2020-09-25T13:13:19 fwb-lab-tesla1
command: sinfo -Nl
Fri Sep 25 16:35:25 2020
NODELIST        NODES   PARTITION       STATE CPUS    S:C:T MEMORY TMP_DISK WEIGHT AVAIL_FE REASON              
fwb-lab-tesla1      1 gpucompute*       down*   32   32:1:1  64000        0      1   (null) Not responding 
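For the full node record, including the Reason field, state flags, and when slurmctld last heard from the node, scontrol gives more detail than sinfo:

scontrol show node fwb-lab-tesla1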

command: vim /etc/slurm/slurm.conf
# slurm.conf file generated by configurator easy.html.
# Put this file on all nodes of your cluster.
# See the slurm.conf man page for more information.
#
ControlMachine=FWB-Lab-Tesla
#ControlAddr=137.72.38.102
#
MailProg=/bin/mail
MpiDefault=none
#MpiParams=ports=#-#
ProctrackType=proctrack/cgroup
ReturnToService=1
SlurmctldPidFile=/var/run/slurmctld.pid
#SlurmctldPort=6817
SlurmdPidFile=/var/run/slurmd.pid
#SlurmdPort=6818
SlurmdSpoolDir=/var/spool/slurmd
#SlurmUser=slurm
SlurmdUser=root
StateSaveLocation=/var/spool/slurm/StateSave
SwitchType=switch/none
TaskPlugin=task/cgroup
#
#
# TIMERS
#KillWait=30
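The excerpt ends before the GRES and NodeName entries, but on a GPU node those matter as well: slurmd refuses to start if gres.conf points at GPU device files that do not exist yet. A plausible configuration for this machine's four TITAN V cards (assuming the default /dev/nvidia0-3 device paths; the actual lines from this cluster are not shown) would be:

# in /etc/slurm/slurm.conf
GresTypes=gpu
NodeName=fwb-lab-tesla1 CPUs=32 RealMemory=64000 Gres=gpu:4 State=UNKNOWN

# in /etc/slurm/gres.conf
NodeName=fwb-lab-tesla1 Name=gpu File=/dev/nvidia[0-3]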
command: ls /etc/init.d
functions  livesys  livesys-late  netconsole  network  README
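There are no slurm scripts under /etc/init.d, which suggests the daemons are managed by systemd; their status can be checked with:

systemctl status slurmctld slurmd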
command: nvidia-smi
Fri Sep 25 16:35:01 2020    
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.79       Driver Version: 410.79       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  TITAN V             Off  | 00000000:02:00.0 Off |                  N/A |
| 24%   32C    P8    N/A /  N/A |      0MiB / 12036MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  TITAN V             Off  | 00000000:03:00.0 Off |                  N/A |
| 23%   35C    P8    N/A /  N/A |      0MiB / 12036MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  TITAN V             Off  | 00000000:83:00.0 Off |                  N/A |
| 30%   44C    P8    N/A /  N/A |      0MiB / 12036MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  TITAN V             Off  | 00000000:84:00.0 Off |                  N/A |
| 31%   42C    P8    N/A /  N/A |      0MiB / 12036MiB |      6%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
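A working nvidia-smi is a good sign, but it is also worth confirming that the /dev/nvidia* device nodes exist, since Slurm's GRES check looks for those files directly (they are normally created when the driver loads or when nvidia-smi first runs):

ls -l /dev/nvidia*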
The problem you mention could be preventing the slurmd daemon from starting on gpucompute. You should be able to confirm this by running systemctl status slurmd, or your Linux distribution's equivalent.

The slurmd logs probably contain lines similar to

slurmd[1234]: fatal: can't stat gres.conf file /dev/nvidia0: No such file or directory
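If you are unsure where slurmd writes its log, journalctl usually has it under the unit name, and the configured log file path can be read back from the running controller:

journalctl -u slurmd -b
scontrol show config | grep -i SlurmdLogFile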

Try restarting it with

systemctl start slurmd

once you have made sure that nvidia-smi responds correctly.

My problem was solved with the steps below. Keep in mind that whenever you reboot the system, you need to enter these commands again after the restart. Thanks to 琼·布莱恩 for solving this issue!


slurmd -Dcvvv
reboot
ps -ef | grep slurm
kill xxxx        # xxxx is the process ID shown in the ps -ef output above
nvidia-smi
systemctl start slurmctld
systemctl start slurmd
scontrol update nodename=fwb-lab-tesla1 state=idle

Now you can run jobs on the GPU nodes again!
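Since these steps have to be repeated after every reboot, enabling the daemons in systemd at least removes the manual start commands; the nvidia-smi call may still be needed to trigger creation of the device nodes, depending on how the driver is installed:

systemctl enable slurmctld
systemctl enable slurmd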
Cheers
