FATAL [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster:



I am new to Hadoop.

I am trying to set up Giraph to run on hadoop-2.6.5 using YARN.

When I submit a Giraph job, the job submits successfully but then fails, and I get the following in the container syslog:

2018-01-30 12:09:01,190 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1517293264136_0002_000002
2018-01-30 12:09:01,437 WARN [main] org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-01-30 12:09:01,471 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2018-01-30 12:09:01,471 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (appAttemptId { application_id { id: 2 cluster_timestamp: 1517293264136 } attemptId: 2 } keyId: -1485907628)
2018-01-30 12:09:01,583 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred newApiCommitter.
2018-01-30 12:09:02,154 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config null
2018-01-30 12:09:02,207 FATAL [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
java.lang.NoClassDefFoundError: io/netty/buffer/ByteBufAllocator
    at org.apache.giraph.bsp.BspOutputFormat.getOutputCommitter(BspOutputFormat.java:62)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.call(MRAppMaster.java:470)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.call(MRAppMaster.java:452)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1541)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:452)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:371)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$4.run(MRAppMaster.java:1499)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1496)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1429)
Caused by: java.lang.ClassNotFoundException: io.netty.buffer.ByteBufAllocator
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 13 more
2018-01-30 12:09:02,209 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting with status 1

The diagnostics in the application logs show the following:

Application application_1517293264136_0002 failed 2 times due to AM Container for appattempt_1517293264136_0002_000002 exited with exitCode: 1
For more detailed output, check application tracking page: http://172.16.0.218:8088/proxy/application_1517293264136_0002/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1517293264136_0002_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:575)
    at org.apache.hadoop.util.Shell.run(Shell.java:478)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:766)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.

The class it fails on, io/netty/buffer/ByteBufAllocator, is in the netty-all jar: https://mvnrepository.com/artifact/io.netty/netty-all

Following other questions, I tried adding the jar to HADOOP_CLASSPATH:

Yogin-Patel:hadoop yoginpatel$ echo $HADOOP_CLASSPATH
/Users/yoginpatel/Downloads/gradle-4.3/caches/modules-2/files-2.1/io.netty/netty-all/4.0.43.Final/9781746a179070e886e1fb4b1971a6bbf02061a4/netty-all-4.0.43.Final.jar
Yogin-Patel:hadoop yoginpatel$ 

It also appears in the output of hadoop classpath:

Yogin-Patel:hadoop yoginpatel$ hadoop classpath
/Users/yoginpatel/hadoop/etc/hadoop:/Users/yoginpatel/hadoop/share/hadoop/common/lib/*:/Users/yoginpatel/hadoop/share/hadoop/common/*:/Users/yoginpatel/hadoop/share/hadoop/hdfs:/Users/yoginpatel/hadoop/share/hadoop/hdfs/lib/*:/Users/yoginpatel/hadoop/share/hadoop/hdfs/*:/Users/yoginpatel/hadoop/share/hadoop/yarn/lib/*:/Users/yoginpatel/hadoop/share/hadoop/yarn/*:/Users/yoginpatel/hadoop/share/hadoop/mapreduce/lib/*:/Users/yoginpatel/hadoop/share/hadoop/mapreduce/*:/Users/yoginpatel/Downloads/gradle-4.3/caches/modules-2/files-2.1/io.netty/netty-all/4.0.43.Final/9781746a179070e886e1fb4b1971a6bbf02061a4/netty-all-4.0.43.Final.jar:/contrib/capacity-scheduler/*.jar
Yogin-Patel:hadoop yoginpatel$ 

I am setting this up in a development environment; it is a single-node setup.

I even tried

job.addFileToClassPath(new Path("/Users/yoginpatel/Downloads/gradle-4.3/caches/modules-2/files-2.1/io.netty/netty-all/4.0.43.Final/9781746a179070e886e1fb4b1971a6bbf02061a4/netty-all-4.0.43.Final.jar"));

None of these approaches helped. How do I make the necessary jar available to the Hadoop nodes?
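One thing worth checking first: HADOOP_CLASSPATH and the output of hadoop classpath only affect JVMs launched by the hadoop command on the client machine, while this NoClassDefFoundError is thrown inside the AM container, which assembles its own classpath on the NodeManager. Also, Job.addFileToClassPath resolves the path against the job's default filesystem, which is hdfs://localhost:9000 here, so a local /Users/... path likely points at a nonexistent HDFS location. The helper below is a hypothetical, stand-alone sketch (not part of Hadoop or Giraph) that reports whether a class resolves in the current JVM and from which jar; running it with `java -cp "$(hadoop classpath)" ClasspathCheck` at least confirms the client-side classpath.

```java
// Hypothetical helper: checks whether a class is loadable by the current JVM
// and reports where it was loaded from.
public class ClasspathCheck {
    static String locate(String className) {
        try {
            java.security.CodeSource src =
                Class.forName(className).getProtectionDomain().getCodeSource();
            // Bootstrap/core classes have no CodeSource, so report them separately.
            return src == null ? "found (bootstrap)" : "found in " + src.getLocation();
        } catch (ClassNotFoundException e) {
            return "NOT FOUND";
        }
    }

    public static void main(String[] args) {
        // The class the MRAppMaster failed to resolve.
        System.out.println(locate("io.netty.buffer.ByteBufAllocator"));
    }
}
```

If this prints NOT FOUND even on the client, the jar is not on the classpath at all; if it prints a location, the problem is confined to the container side.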

Here is the GiraphJob submission code, which submits a mapreduce job to the cluster:

@Test
public void testPageRank() throws IOException, ClassNotFoundException, InterruptedException {
    GiraphConfiguration giraphConf = new GiraphConfiguration(getConf());
    giraphConf.setWorkerConfiguration(1, 1, 100);
    GiraphConstants.SPLIT_MASTER_WORKER.set(giraphConf, false);
    giraphConf.setVertexInputFormatClass(JsonLongDoubleFloatDoubleVertexInputFormat.class);
    GiraphFileInputFormat.setVertexInputPath(giraphConf,
            new Path("/input/tiny-graph.txt"));
    giraphConf.setVertexOutputFormatClass(IdWithValueTextOutputFormat.class);
    giraphConf.setComputationClass(PageRankComputation.class);
    GiraphJob giraphJob = new GiraphJob(giraphConf, "page-rank");
    giraphJob.getInternalJob().addFileToClassPath(new Path(
            "/Users/yoginpatel/Downloads/gradle-4.3/caches/modules-2/files-2.1/io.netty/netty-all/4.0.43.Final/9781746a179070e886e1fb4b1971a6bbf02061a4/netty-all-4.0.43.Final.jar"));
    FileOutputFormat.setOutputPath(giraphJob.getInternalJob(),
            new Path("/output/page-rank2"));
    giraphJob.run(true);
}

private Configuration getConf() {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://localhost:9000");
    conf.set("yarn.resourcemanager.address", "localhost:8032");
    // The framework is "yarn"; this is normally set in mapred-site.xml
    conf.set("mapreduce.framework.name", "yarn");
    return conf;
}
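Since the AM and task containers build their classpath on the NodeManager from mapreduce.application.classpath (and yarn.application.classpath), not from the client's HADOOP_CLASSPATH, one commonly suggested approach is to extend that property in mapred-site.xml. This is a sketch only, assuming an otherwise default configuration and that the netty jar exists at that path on every node of the cluster:

```xml
<!-- mapred-site.xml: sketch only; the appended jar path must be valid on each node -->
<property>
  <name>mapreduce.application.classpath</name>
  <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*,/Users/yoginpatel/Downloads/gradle-4.3/caches/modules-2/files-2.1/io.netty/netty-all/4.0.43.Final/9781746a179070e886e1fb4b1971a6bbf02061a4/netty-all-4.0.43.Final.jar</value>
</property>
```

On a single-node setup this is workable; on a real cluster a jar inside a per-user Gradle cache would be a fragile location to depend on.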

I got it working by putting the Giraph jar with dependencies on the Hadoop lib path:

cp giraph-1.3.0-SNAPSHOT-for-hadoop-2.6.5-jar-with-dependencies.jar ~/hadoop/share/hadoop/mapreduce/lib/
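This works because the default value of mapreduce.application.classpath on Hadoop 2.x already includes $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*, so anything dropped into that directory is picked up by the AM and task containers. As a quick illustration (the default string below is an assumption; check mapred-default.xml for your exact version), splitting the property shows the wildcard entry that covers this directory:

```shell
# Assumed default mapreduce.application.classpath on Hadoop 2.x.
CP='$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*'
# Split the comma-separated list and show the entry covering the lib directory.
echo "$CP" | tr ',' '\n' | grep 'mapreduce/lib'
# → $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
```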
