HADOOP - Permission denied when executing start-all.sh



I am trying to install Hadoop on my laptop. I followed this guide: https://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/

When I try to run start-all.sh, I get this:

vava@vava-ThinkPad:/usr/local/hadoop-3.1.1/sbin$ bash start-all.sh
WARNING: Attempting to start all Apache Hadoop daemons as vava in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [localhost]
pdsh@vava-ThinkPad: localhost: rcmd: socket: Permission denied
Starting datanodes
pdsh@vava-ThinkPad: localhost: rcmd: socket: Permission denied
Starting secondary namenodes [vava-ThinkPad]
pdsh@vava-ThinkPad: vava-ThinkPad: rcmd: socket: Permission denied
Starting resourcemanager
resourcemanager is running as process 3748.  Stop it first.
Starting nodemanagers
pdsh@vava-ThinkPad: localhost: rcmd: socket: Permission denied

I tried to follow these questions, but nothing changed:

Problems starting Hadoop processes with start-all.sh

Hadoop permission issue

EDIT: After trying all the options, the only one that seems to work is export PDSH_RCMD_TYPE=ssh. Now the problem is with the namenode and datanodes. They don't start properly:

vava@vava-ThinkPad:/usr/local/hadoop-3.1.1$ sbin/start-all.sh
WARNING: Attempting to start all Apache Hadoop daemons as vava in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [localhost]
pdsh@vava-ThinkPad: localhost: ssh exited with exit code 1
Starting datanodes
localhost: ERROR: Cannot set priority of datanode process 10937
pdsh@vava-ThinkPad: localhost: ssh exited with exit code 1
Starting secondary namenodes [vava-ThinkPad]
Starting resourcemanager
Starting nodemanagers

Create a new file:

/etc/pdsh/rcmd_default

Write "ssh" in it, then save and exit. Make sure the line ends with a newline character, otherwise you will get "ssh exited with exit code 1". Since /etc requires root privileges, write it with sudo:

echo "ssh" | sudo tee /etc/pdsh/rcmd_default
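As a quick sanity check (a sketch, assuming pdsh is installed and `localhost` is the target you care about), pdsh's `-q` flag prints the option settings it would use without running anything remotely:

```shell
# Display pdsh's effective settings and exit without executing a command;
# the "Rcmd type" line should now read "ssh" instead of the default "rsh"
pdsh -q -w localhost
```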

I would check:

  • export PDSH_RCMD_TYPE=ssh in the terminal
  • local firewall settings
  • running the command as root: sudo bash /usr/local/hadoop-3.1.1/sbin/start-all.sh
  • chmod -R 755 /usr/local/hadoop-3.1.1
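If the export works, making it persistent saves re-typing it in every new shell. A minimal sketch, assuming a bash login shell and the install path /usr/local/hadoop-3.1.1 from the question:

```shell
# Persist the pdsh rcmd type for future interactive shells
echo 'export PDSH_RCMD_TYPE=ssh' >> ~/.bashrc

# Alternatively, set it in hadoop-env.sh so every Hadoop start/stop
# script picks it up regardless of the invoking shell
echo 'export PDSH_RCMD_TYPE=ssh' >> /usr/local/hadoop-3.1.1/etc/hadoop/hadoop-env.sh
```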

For your other problems:

  • set JAVA_HOME in hadoop-env.sh, and make sure all the other options in that file are correct
  • change your user: "Attempting to start all Apache Hadoop daemons as vava in 10 seconds" suggests "vava" is the wrong user; try su -l hdfs and then run the script

In my case, you need to make sure the RSA key is copied to the current localhost:

ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub hadoop@localhost

assuming you log in to the master node as "hadoop".
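The full passwordless-SSH setup that the tutorial relies on can be sketched as follows (assuming the "hadoop" user from this answer; skip the key generation step if ~/.ssh/id_rsa already exists):

```shell
# Generate an RSA key pair with an empty passphrase (only if none exists)
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa

# Authorize the public key for logins to this machine
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@localhost

# Test: this should print the hostname without prompting for a password
ssh hadoop@localhost hostname
```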
