FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.spark.SparkTask



I am running:

  • apache-hive.2.1-bin
  • hadoop-2.7.1
  • spark-1.5.1 on hadoop2.6

I was able to configure Hive on Spark, but when I try to execute a query it gives me the following error message.

hive> SELECT COUNT(*) AS rcount, yom From service GROUP BY yom;
Query ID = hduser_20160110105649_4c90528a-76ba-4127-8849-54f2152be817
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Spark Job = b9cbbd47-f41f-48b5-98c3-efcaa145390e
Status: SENT
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.spark.SparkTask

How can I solve this problem?
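For reference, a Hive-on-Spark setup of this vintage is usually wired up with session settings along the following lines; the exact values below are illustrative assumptions, not taken from the question:

hive> set hive.execution.engine=spark;
hive> set spark.master=yarn-client;
hive> set spark.executor.memory=512m;
hive> set spark.serializer=org.apache.spark.serializer.KryoSerializer;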

I had the same problem, but I had not configured yarn because some jobs were already running. I'm not sure whether this is the solution to the problem:

yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler
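In yarn-site.xml, that line corresponds to a property block roughly like this (a sketch, assuming the rest of the file is already in place):

<property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>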

Have you configured yarn as described in the documentation?

yarn-site.xml:

<property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
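Note that a scheduler change in yarn-site.xml only takes effect after the ResourceManager is restarted, for example (assuming the standard Hadoop sbin scripts):

$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh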
