"Bad substitution" when submitting a Spark job to yarn-cluster



I am smoke-testing a YARN cluster by running the SparkPi example program with yarn-cluster as the master. Here is the command line:

  $SPARK_HOME/bin/spark-submit --master yarn-cluster \
    --executor-memory 8G --executor-cores 240 \
    --class org.apache.spark.examples.SparkPi \
    examples/target/scala-2.11/spark-examples-1.4.1-hadoop2.7.1.jar

YARN accepts the job but then complains about a "bad substitution". Maybe it is about hdp.version??

15/09/01 21:54:05 INFO yarn.Client: Application report for application_1441066518301_0013 (state: ACCEPTED)
15/09/01 21:54:05 INFO yarn.Client:
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1441144443866
     final status: UNDEFINED
     tracking URL: http://yarnmaster-8245.lvs01.dev.ebayc3.com:8088/proxy/application_1441066518301_0013/
     user: stack
15/09/01 21:54:06 INFO yarn.Client: Application report for application_1441066518301_0013 (state: ACCEPTED)
15/09/01 21:54:10 INFO yarn.Client: Application report for application_1441066518301_0013 (state: FAILED)
15/09/01 21:54:10 INFO yarn.Client:
     client token: N/A
     diagnostics: Application application_1441066518301_0013 failed 2 times due to AM Container for appattempt_1441066518301_0013_000002 exited with  exitCode: 1
For more detailed output, check application tracking page:http://yarnmaster-8245.lvs01.dev.ebayc3.com:8088/cluster/app/application_1441066518301_0013Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_e03_1441066518301_0013_02_000001
Exit code: 1
Exception message: /mnt/yarn/nm/local/usercache/stack/appcache/
application_1441066518301_0013/container_e03_1441066518301_0013_02_000001/
launch_container.sh: line 24: $PWD:$PWD/__hadoop_conf__:$PWD/__spark__.jar:$HADOOP_CONF_DIR:
/usr/hdp/current/hadoop-client/*::$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:
/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-.6.0.${hdp.version}.jar:
/etc/hadoop/conf/secure: bad substitution
Stack trace: ExitCodeException exitCode=1: /mnt/yarn/nm/local/usercache/stack/appcache/application_1441066518301_0013/container_e03_1441066518301_0013_02_000001/launch_container.sh: line 24: $PWD:$PWD/__hadoop_conf__:$PWD/__spark__.jar:$HADOOP_CONF_DIR:/usr/hdp/current/hadoop-client/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-hdfs-client/*:/usr/hdp/current/hadoop-hdfs-client/lib/*:/usr/hdp/current/hadoop-yarn-client/*:/usr/hdp/current/hadoop-yarn-client/lib/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure: bad substitution
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
    at org.apache.hadoop.util.Shell.run(Shell.java:456)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

The part to note here is:

/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-.6.0.${hdp.version}.jar:
/etc/hadoop/conf/secure: bad substitution

"sh"链接到bash:

$ ll /bin/sh
lrwxrwxrwx 1 root root 4 Sep  1 05:48 /bin/sh -> bash
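
For context, bash refuses ${hdp.version} because a dot is not a valid character in a shell parameter name, and that is exactly the "bad substitution" error shown in the container log. A quick way to reproduce it (the exact message varies slightly between bash versions):

$ bash -c 'echo ${hdp.version}'
bash: ${hdp.version}: bad substitution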

This is caused by hdp.version not getting substituted correctly. You have to set hdp.version in a java-opts file under $SPARK_HOME/conf.

And you have to set

spark.driver.extraJavaOptions -Dhdp.version=XXX
spark.yarn.am.extraJavaOptions -Dhdp.version=XXX

in $SPARK_HOME/conf/spark-defaults.conf, where XXX is your installed HDP version.
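
A minimal sketch of how both files can be written (2.3.0.0-2557 below is only a placeholder, substitute the version reported by hdp-select status hadoop-client):

    # placeholder version; replace with the value from: hdp-select status hadoop-client
    HDP_VERSION=2.3.0.0-2557

    # append the two options to spark-defaults.conf
    cat >> $SPARK_HOME/conf/spark-defaults.conf <<EOF
    spark.driver.extraJavaOptions -Dhdp.version=$HDP_VERSION
    spark.yarn.am.extraJavaOptions -Dhdp.version=$HDP_VERSION
    EOF

    # write the java-opts file read by the driver / application master
    echo "-Dhdp.version=$HDP_VERSION" > $SPARK_HOME/conf/java-opts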

If you are using Spark with HDP, then you have to do the following:

Add these entries to $SPARK_HOME/conf/spark-defaults.conf:

spark.driver.extraJavaOptions -Dhdp.version=2.2.0.0-2041 (your installed HDP version)
spark.yarn.am.extraJavaOptions -Dhdp.version=2.2.0.0-2041 (your installed HDP version)

Create a file called java-opts in $SPARK_HOME/conf and add the installed HDP version to that file, like this:

-Dhdp.version=2.2.0.0-2041 (your installed HDP version)

To see which HDP version is currently installed, run the following command on the cluster:

hdp-select status hadoop-client
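
The output looks roughly like this (the version shown is only an example):

    $ hdp-select status hadoop-client
    hadoop-client - 2.2.0.0-2041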

I had the same issue:

launch_container.sh: line 24: $PWD:$PWD/__hadoop_conf__:$PWD/__spark__.jar:$HADOOP_CONF_DIR:/usr/hdp/current/hadoop-client/*::$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure: bad substitution

Since I could not find any /usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo* file, I just edited mapred-site.xml and removed "/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:".
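
On HDP that path normally lives in the mapreduce.application.classpath value of mapred-site.xml, so the trimmed property ends up looking something like this (the other classpath entries are abbreviated with "..." and will differ on your cluster):

    <property>
      <name>mapreduce.application.classpath</name>
      <!-- hadoop-lzo entry removed; remaining entries abbreviated -->
      <value>$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:...:/etc/hadoop/conf/secure</value>
    </property>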

  1. Go to Ambari -> YARN.

Click Configs -> Advanced -> Custom yarn-site -> Add Property ...

Add hdp.version as the key and your HDP version as the value. You can get the HDP version with the command below:

hdp-select versions

2.5.3.0-37

Now add the property as

hdp.version = 2.5.3.0-37

  • Otherwise, replace ${hdp.version} with your HDP version (2.5.3.0-37) in yarn-site.xml and yarn-env.sh; a sketch of the yarn-site.xml entry follows.
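
Whether you add hdp.version through Ambari's Custom yarn-site or edit the file by hand, the resulting yarn-site.xml entry looks roughly like this (use your own version string):

    <property>
      <name>hdp.version</name>
      <value>2.5.3.0-37</value>
    </property>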

    I also had this issue using BigInsights 4.2.0.0 with YARN, Spark and MapReduce 2, and what caused it was iop.version. To resolve it you have to add an iop.version variable to mapred-site, which can be done with the following steps (a sketch of the resulting entry follows the list):

    In Ambari Server go to:

    • MAPREDUCE2
    • Configs (tab)
    • Advanced (tab)
    • Click on Custom mapred-site
    • Add Property ...
    • iop.version and your BigInsights version
    • Restart all services
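
    The resulting mapred-site entry would look roughly like this (4.2.0.0 matches the BigInsights release mentioned above; use your own build's value):

        <property>
          <name>iop.version</name>
          <value>4.2.0.0</value>
        </property>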

    This can also be caused by /bin/sh being linked to dash instead of bash, which often happens on Debian-based systems.

    To fix it, run sudo dpkg-reconfigure dash and select No.
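
    A quick way to check and fix the symlink (the details in the ls output are illustrative):

        # see what /bin/sh currently points to
        $ ls -l /bin/sh
        lrwxrwxrwx 1 root root 4 Sep  1 05:48 /bin/sh -> dash
        # switch the system shell back to bash; answer "No" when prompted
        $ sudo dpkg-reconfigure dash
        $ ls -l /bin/sh
        lrwxrwxrwx 1 root root 4 Sep  1 05:48 /bin/sh -> bash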
