Getting java.lang.OutOfMemoryError: GC overhead limit exceeded when submitting a MapReduce job



I get the message below when submitting a MapReduce job. I launched my MapReduce program with -XX:MaxPermSize=128m.

Does anyone have an idea what is going on here?

    17/03/24 09:58:46 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 1160328 for svc_pffr on ha-hdfs:nameservice3
    17/03/24 09:58:46 ERROR hdfs.KeyProviderCache: Could not find uri with key [dfs.encryption.key.provider.uri] to create a keyProvider !!
    17/03/24 09:58:46 INFO security.TokenCache: Got dt for hdfs://nameservice3; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameservice3, Ident: (HDFS_DELEGATION_TOKEN token 1160328 for svc_pffr)
    17/03/24 09:58:46 ERROR hdfs.KeyProviderCache: Could not find uri with key [dfs.encryption.key.provider.uri] to create a keyProvider !!
    17/03/24 09:58:46 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
    17/03/24 09:58:47 ERROR hdfs.KeyProviderCache: Could not find uri with key [dfs.encryption.key.provider.uri] to create a keyProvider !!
    17/03/24 10:01:55 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/svc_pffr/.staging/job_1489708003568_5870
    Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
            at org.apache.hadoop.security.token.Token.<init>(Token.java:85)
            at org.apache.hadoop.hdfs.protocol.LocatedBlock.<init>(LocatedBlock.java:52)
            at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:755)
            at org.apache.hadoop.hdfs.protocolPB.PBHelper.convertLocatedBlock(PBHelper.java:1174)
            at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1192)
            at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1328)
            at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1436)
            at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1445)
            at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:549)
            at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
            at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
            at com.sun.proxy.$Proxy23.getListing(Unknown Source)
            at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1893)
            at org.apache.hadoop.hdfs.DistributedFileSystem$15.<init>(DistributedFileSystem.java:742)
            at org.apache.hadoop.hdfs.DistributedFileSystem.listLocatedStatus(DistributedFileSystem.java:731)
            at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1664)
            at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:300)
            at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:264)
            at org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:59)
            at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:385)
            at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:589)
            at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:606)
            at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:490)
            at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1295)
            at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1292)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:415)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
            at org.apache.hadoop.mapreduce.Job.submit(Job.java:1292)
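
The WARN line about "Hadoop command-line option parsing not performed" is unrelated to the OutOfMemoryError; it only means the driver does not use the Tool/ToolRunner pattern, so generic options (-D, -conf, -libjars, ...) are not parsed. For reference, a minimal driver sketch of that pattern (the class and job names here are placeholders, not taken from the original program):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    // Minimal driver skeleton using the Tool/ToolRunner pattern the warning refers to.
    // ToolRunner parses the generic options before run() is called.
    public class MyJobDriver extends Configured implements Tool {

        @Override
        public int run(String[] args) throws Exception {
            Job job = Job.getInstance(getConf(), "my-job");   // placeholder job name
            job.setJarByClass(MyJobDriver.class);
            // ... set mapper, reducer, input/output formats and paths here ...
            return job.waitForCompletion(true) ? 0 : 1;
        }

        public static void main(String[] args) throws Exception {
            System.exit(ToolRunner.run(new Configuration(), new MyJobDriver(), args));
        }
    }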

You need to increase the client-side heap size.

Can you try: export HADOOP_CLIENT_OPTS="$HADOOP_CLIENT_OPTS -Xmx1g"

Then start your Hadoop job again without -XX:MaxPermSize=128m.

You can also take a look at this link: http://community.cloudera.com/t5/Batch-Processing-and-Workflow/cloudera-hadoop-mapreduce-job-GC-overhead-limit-exceeded-error/td-p/14754

I had this exception too, and I simply formatted my HDFS because it was saturated!

$ hadoop namenode -format

Please note: if you format HDFS, you will lose all the metadata for your DataNodes, so all the data stored on the DataNodes will be lost!

Actually, in my case I have 6000 input paths, and my MapReduce program will create at least 6000 mappers, which is why I am getting the out-of-memory exception at submission time. Is there a good way to solve this kind of problem?
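
A common way to cut the mapper count is to pack many small input files into each split with a combine input format. Below is a minimal sketch, assuming the job reads SequenceFiles (as SequenceFileInputFormat in the stack trace suggests) and that the Hadoop 2.x distribution in use ships org.apache.hadoop.mapreduce.lib.input.CombineSequenceFileInputFormat; the 256 MB split cap and the input path are example values, not taken from the original program. Note that the client still lists every input path at submission time, so a larger client heap (HADOOP_CLIENT_OPTS) may still be needed.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.CombineSequenceFileInputFormat;

    // Driver excerpt: combine many small SequenceFile inputs into fewer, larger splits.
    Job job = Job.getInstance(new Configuration(), "combine-small-files");      // placeholder job name
    CombineSequenceFileInputFormat.addInputPath(job, new Path("/data/input"));  // placeholder input path
    job.setInputFormatClass(CombineSequenceFileInputFormat.class);
    // Cap each combined split at ~256 MB so 6000 small files no longer mean 6000 map tasks.
    CombineSequenceFileInputFormat.setMaxInputSplitSize(job, 256L * 1024 * 1024);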

I know this sounds strange, but I had the same error and I found the cause. In my case it was the firehose_non_java_memory_bytes setting inside the Cloudera Management Service that was set too high.

I don't know whether this will solve your problem, but it may be worth a try.
