I have installed Hadoop on a Linux cluster. When I try to start the daemons with the command $ bin/start-all.sh, I get the following errors:
mkdir: cannot create directory `/var/log/hadoop/spuri2': Permission denied
chown: cannot access `/var/log/hadoop/spuri2': No such file or directory
/home/spuri2/spring_2012/Hadoop/hadoop/hadoop-1.0.2/bin/hadoop-daemon.sh: line 136: /var/run/hadoop/hadoop-spuri2-namenode.pid: Permission denied
head: cannot open `/var/log/hadoop/spuri2/hadoop-spuri2-namenode-gpu02.cluster.out' for reading: No such file or directory
localhost: /home/spuri2/.bashrc: line 10: /act/Modules/3.2.6/init/bash: No such file or directory
localhost: mkdir: cannot create directory `/var/log/hadoop/spuri2': Permission denied
localhost: chown: cannot access `/var/log/hadoop/spuri2': No such file or directory
I have configured the log directory parameter in conf/hadoop-env.sh to point to the /tmp directory, and I have also set "hadoop.tmp.dir" in core-site.xml to /tmp/. I have no write access to the /var/log directory, yet the hadoop daemons are still trying to write to /var/log and failing.
Why is this happening?
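For reference, in Hadoop 1.x the daemon log location is controlled by HADOOP_LOG_DIR in conf/hadoop-env.sh, and the PID file path shown in the error above by HADOOP_PID_DIR in the same file. A minimal sketch of the kind of override described here; the /tmp paths are illustrative:

# conf/hadoop-env.sh -- point log and PID files at a user-writable location
export HADOOP_LOG_DIR=/tmp/hadoop-${USER}/logs
export HADOOP_PID_DIR=/tmp/hadoop-${USER}/pids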
您必须在"core.site.xml"文件中而不是在hadoop-env.sh 中写入此目录
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/Directory_hadoop_user_have_permission/temp/${user.name}</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>The name of the default file system. A URI whose
    scheme and authority determine the FileSystem implementation. The
    uri's scheme determines the config property (fs.SCHEME.impl) naming
    the FileSystem implementation class. The uri's authority is used to
    determine the host, port, etc. for a filesystem.</description>
  </property>
</configuration>
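After pointing hadoop.tmp.dir at a directory you own, create the base directory and restart the daemons. A sketch, with an illustrative path; note that reformatting the namenode wipes HDFS metadata, so only do that on a fresh install:

mkdir -p /home/spuri2/hadoop_tmp/temp   # illustrative: any directory you have write access to
bin/stop-all.sh
bin/hadoop namenode -format             # fresh installs only: this erases HDFS metadata
bin/start-all.sh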
In short, I ran into this problem because there were multiple Hadoop installations on the university cluster, and a root-level installation was interfering with my local one.
The Hadoop daemons were not starting because they could not write to certain files that required root permissions, while I was running Hadoop as a regular user. The problem arose because our university's sysadmin had installed Hadoop as root, so when I started my local installation, the root installation's configuration files took precedence over my local Hadoop configuration files. It took a long time to track down, but the problem was resolved after the root installation of hadoop was uninstalled.
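A quick way to check whether a system-wide installation is shadowing a local one is to see which binary and configuration directory are actually being picked up. A diagnostic sketch; the paths shown are illustrative:

which hadoop                          # your local copy, or a system-wide one such as /usr/bin/hadoop?
echo $HADOOP_HOME $HADOOP_CONF_DIR    # environment variables that can redirect the config lookup
ls -l /etc/hadoop 2>/dev/null         # a root-installed package often places its config here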
I used to face the same error. Before running start-all.sh, try switching to the user that owns the hadoop directory: su - username.
Also make sure you have added the necessary properties between the configuration tags, as described in this tutorial:
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
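If the install directory ended up owned by the wrong user, fixing ownership first may be needed. A sketch using the hduser:hadoop user/group and install path from the tutorial above; adjust to your own user and path:

sudo chown -R hduser:hadoop /usr/local/hadoop   # illustrative user, group, and install path
su - hduser                                     # become the user that owns the installation
cd /usr/local/hadoop
bin/start-all.sh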