MapReduce job fails after being accepted by YARN



Even a simple WordCount MapReduce job fails with the same error.

Hadoop 2.6.0

The YARN logs are below.

It looks like some kind of timeout occurs during resource negotiation, but I am unable to verify what exactly causes the timeout.

2016-11-11 15:38:09,313 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Error launching appattempt_1478856936677_0004_000002. Got exception: java.io.IOException: Failed on local exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.0.37.145:49054 remote=platform-demo/10.0.37.145:60487]; Host Details : local host is: "platform-demo/10.0.37.145"; destination host is: "platform-demo":60487;
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
    at org.apache.hadoop.ipc.Client.call(Client.java:1472)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy79.startContainers(Unknown Source)
    at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
    at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:119)
    at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:254)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.0.37.145:49054 remote=platform-demo/10.0.37.145:60487]
    at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:680)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:643)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:730)
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
    at org.apache.hadoop.ipc.Client.call(Client.java:1438)
    ...
Caused by: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.0.37.145:49054 remote=platform-demo/10.0.37.145:60487]
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.FilterInputStream.read(FilterInputStream.java:133)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:367)
    at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:553)
    at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:368)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:722)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:718)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:717)
    ... 12 more

2016-11-11 15:38:09,319 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1478856936677_0004_000002 with final state: FAILED, and exit status: -1000
2016-11-11 15:38:09,319 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1478856936677_0004_000002 State change from ALLOCATED to FINAL_SAVING

I tried changing the properties below.

In yarn-site.xml:

<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>2200</value>
<description>Amount of physical memory, in MB, that can be allocated for containers.</description>
</property>

<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>500</value>
</property>

In hdfs-site.xml:

<property>
<name>dfs.datanode.socket.write.timeout</name>
<value>3000000</value>
</property>

<property>
<name>dfs.socket.timeout</name>
<value>3000000</value>
</property>
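
For context, the two dfs.* timeouts above apply to HDFS data transfer, not to the YARN RPC call that is failing here. The 60000 ms in the stack trace matches the Hadoop IPC client's default ping interval (ipc.ping.interval), which is also used as the socket read timeout. Below is a minimal core-site.xml sketch for widening that window; the value is illustrative, and raising it only stretches the timeout rather than removing the underlying cause:

<property>
<name>ipc.ping.interval</name>
<!-- assumption: illustrative doubling of the 60000 ms default IPC read timeout -->
<value>120000</value>
</property>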

Q1. MapReduce job fails after being accepted by YARN

Reason: around 130 connections were stuck on port 60487, the destination port shown in the log above.

Q2. MapReduce job fails after being accepted by YARN

The problem was caused by the Hadoop tmp directory, /app/hadoop/tmp. After emptying this directory and re-running the MapReduce job, the job executed successfully.
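
For reference, the directory in question is whatever hadoop.tmp.dir points to; a minimal core-site.xml sketch, assuming the /app/hadoop/tmp path mentioned in this answer:

<property>
<name>hadoop.tmp.dir</name>
<!-- base directory for Hadoop's temporary files; path taken from this answer, adjust for your cluster -->
<value>/app/hadoop/tmp</value>
</property>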

Q3.

Edit yarn-site.xml:

<property>
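<!-- default is 90.0; 98.5 lets the NodeManager keep using disks that are almost full -->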
<name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
<value>98.5</value>
</property>

Reference: Why does Hadoop report "Unhealthy Node local-dirs and log-dirs are bad"?
