"Java heap space" OutOfMemoryError when running a MapReduce program



I have given an entire folder as the input to my MR job.

I am using CombineFileBinaryInputFormat (which extends CombineFileInputFormat) as the job's input format. In its constructor I call setMaxSplitSize(262144000), since my block size is 250 MB. The files are split packet by packet; do I need to add a check somewhere to make sure a split does not exceed 250 MB, or is that limit enforced implicitly? The complete code is available here.

However, I am getting a "Java heap space" error when running the MapReduce program.

Here is part of the code for reference:

import org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat;

public class CombineBinaryInputFormat extends CombineFileInputFormat<KeyWritable, ValueWritable> {
    public CombineBinaryInputFormat() {
        super();
        setMaxSplitSize(262144000); // 250 MB, matching the block size
    }
    // createRecordReader(...) and the rest of the class are in the full code linked above
}
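
For context, here is a minimal driver sketch showing how I wire this input format into the job. The input path is taken from the log below; the Driver and PcapMapper names are hypothetical placeholders, not my actual code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class Driver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "pcap-combine");
        job.setJarByClass(CombineBinaryInputFormat.class);
        // The combining format packs hundreds of small files into a few 250 MB splits.
        job.setInputFormatClass(CombineBinaryInputFormat.class);
        job.setMapperClass(PcapMapper.class); // hypothetical mapper class
        FileInputFormat.addInputPath(job, new Path("/user/usr/local/upload"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
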
My StackTrace:
==============
15/05/05 11:52:47 INFO input.FileInputFormat: Total input paths to process : 318
15/05/05 11:52:47 INFO input.CombineFileInputFormat: DEBUG: Terminated node allocation with : CompletedNodes: 1, size left: 52027734
15/05/05 11:52:47 INFO mapreduce.JobSubmitter: number of splits:1
15/05/05 11:52:47 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local634564612_0001
15/05/05 11:52:47 WARN conf.Configuration: file:/app/hadoop/tmp/mapred/staging/raghuveer634564612/.staging/job_local634564612_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
15/05/05 11:52:47 WARN conf.Configuration: file:/app/hadoop/tmp/mapred/staging/raghuveer634564612/.staging/job_local634564612_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
15/05/05 11:52:48 WARN conf.Configuration: file:/var/hadoop/mapreduce/localRunner/raghuveer/job_local634564612_0001/job_local634564612_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
15/05/05 11:52:48 WARN conf.Configuration: file:/var/hadoop/mapreduce/localRunner/raghuveer/job_local634564612_0001/job_local634564612_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
15/05/05 11:52:48 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
15/05/05 11:52:48 INFO mapreduce.Job: Running job: job_local634564612_0001
15/05/05 11:52:48 INFO mapred.LocalJobRunner: OutputCommitter set in config null
15/05/05 11:52:48 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
15/05/05 11:52:48 INFO mapred.LocalJobRunner: Waiting for map tasks
15/05/05 11:52:48 INFO mapred.LocalJobRunner: Starting task: attempt_local634564612_0001_m_000000_0
15/05/05 11:52:48 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
15/05/05 11:52:48 INFO mapred.MapTask: Processing split: Paths:/user/usr/local/upload/20120713T07-45-42.682358000Z_79.150.138.86-1412.c2s_ndttrace:0+78550,/user/usr/local/upload/20120713T07-45-43.356723000Z_151.40.240.66-53426.c2s_ndttrace:0+32768,/user/usr/local/upload/20120713T07-45-43.718556000Z_85.26.235.102-25300.c2s_ndttrace:0+10130,/user/usr/local/upload
.....
.....
.....
/20120713T08-33-41.259331000Z_84.122.129.103-61321.c2s_ndttrace:0+19148,/user/usr/local/upload/20120713T08-33-54.972649000Z_86.69.144.214-49599.c2s_ndttrace:0+63014,/user/usr/local/upload/20120713T08-33-56.162340000Z_41.143.91.156-50785.c2s_ndttrace:0+13658,/user/usr/local/upload/20120713T08-33-59.768261000Z_31.187.12.141-50274.c2s_ndttrace:0+126542,/user/usr/local/upload/20120713T08-34-03.950055000Z_78.119.172.109-51495.c2s_ndttrace:0+92676,/user/usr/local/upload/20120713T08-34-08.378534000Z_87.7.113.115-62238.c2s_ndttrace:0+49410,/user/usr/local/upload/20120713T08-34-26.258570000Z_151.13.227.66-33198.c2s_ndttrace:0+2666092
15/05/05 11:52:49 INFO mapreduce.Job: Job job_local634564612_0001 running in uber mode : false
15/05/05 11:52:49 INFO mapreduce.Job:  map 0% reduce 0%
15/05/05 11:52:50 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
15/05/05 11:52:53 INFO mapred.MapTask: (EQUATOR) 0 kvi 78643196(314572784)
15/05/05 11:52:53 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 300
15/05/05 11:52:53 INFO mapred.MapTask: soft limit at 251658240
15/05/05 11:52:53 INFO mapred.MapTask: bufstart = 0; bufvoid = 314572800
15/05/05 11:52:53 INFO mapred.MapTask: kvstart = 78643196; length = 19660800
15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (82) is larger than packet data (68). Returning empty payload.
15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (82) is larger than packet data (68). Returning empty payload.
15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload.
15/05/05 11:52:55 INFO mapred.MapTask: Starting flush of map output
15/05/05 11:52:55 INFO mapred.MapTask: Spilling map output
15/05/05 11:52:55 INFO mapred.MapTask: bufstart = 0; bufend = 105296; bufvoid = 314572800
15/05/05 11:52:55 INFO mapred.MapTask: kvstart = 78643196(314572784); kvend = 78637988(314551952); length = 5209/19660800
15/05/05 11:52:55 INFO mapred.LocalJobRunner: map > map
15/05/05 11:52:55 INFO mapred.MapTask: Finished spill 0
15/05/05 11:52:55 INFO mapred.LocalJobRunner: map task executor complete.
15/05/05 11:52:55 WARN mapred.LocalJobRunner: job_local634564612_0001
java.lang.Exception: java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.OutOfMemoryError: Java heap space
at net.ripe.hadoop.pcap.PcapReader.nextPacket(PcapReader.java:208)
at net.ripe.hadoop.pcap.PcapReader.access$0(PcapReader.java:173)
at net.ripe.hadoop.pcap.PcapReader$PacketIterator.fetchNext(PcapReader.java:554)
at net.ripe.hadoop.pcap.PcapReader$PacketIterator.hasNext(PcapReader.java:559)
at net.ripe.hadoop.pcap.io.reader.PcapRecordReader.nextKeyValue(PcapRecordReader.java:57)
at net.ripe.hadoop.pcap.io.reader.CombineBinaryRecordReader.nextKeyValue(CombineBinaryRecordReader.java:42)
at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.nextKeyValue(CombineFileRecordReader.java:69)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:533)
at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
15/05/05 11:52:56 INFO mapreduce.Job: Job job_local634564612_0001 failed with state FAILED due to: NA
15/05/05 11:52:56 INFO mapreduce.Job: Counters: 25
    File System Counters
        FILE: Number of bytes read=29002348
        FILE: Number of bytes written=29450636
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=103142
        HDFS: Number of bytes written=0
        HDFS: Number of read operations=6
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=1
    Map-Reduce Framework
        Map input records=1303
        Map output records=1303
        Map output bytes=105296
        Map output materialized bytes=0
        Input split bytes=38078
        Combine input records=0
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=593
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
        Total committed heap usage (bytes)=1745092608
    File Input Format Counters
        Bytes Read=0

Here I am feeding several hundred files into the MapReduce job. I am using the default block size of 64 MB, my machine has 4 GB of RAM, and I am running Hadoop on a 32-bit system, and I am now hitting the Java heap space error. With hundreds of input files, a 64 MB block size, CombineFileInputFormat, and 4 GB of RAM, is there any way to overcome this problem?

Please advise me on this issue...
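
From the log above it looks like mapreduce.task.io.sort.mb is 300, so each map task reserves a ~300 MB sort buffer up front, on top of the 250 MB combined split being decoded by PcapReader. One thing I could try is lowering that buffer and raising the map heap; a sketch with guessed values, not a verified fix (these lines would go in the driver before creating the Job):

Configuration conf = new Configuration();
// Shrink the map-side sort buffer (the log shows 300 MB reserved up front).
conf.setInt("mapreduce.task.io.sort.mb", 100);
// Give map tasks a bigger heap; under LocalJobRunner the client JVM's -Xmx applies instead.
conf.set("mapreduce.map.java.opts", "-Xmx1024m");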

As far as the logic goes... the split size by itself should never cause a Java heap space error.

It must be something in your code's logic, for example aggregating too much data in memory for a given key.
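
For example (an illustrative anti-pattern, not taken from your code), a reduce function that buffers every value of a key group in memory grows without bound on hot keys:

// Inside a Reducer<KeyWritable, ValueWritable, ...> -- illustrative only.
// imports: java.util.*, org.apache.hadoop.io.WritableUtils
public void reduce(KeyWritable key, Iterable<ValueWritable> values, Context context)
        throws IOException, InterruptedException {
    List<ValueWritable> buffered = new ArrayList<>(); // holds the whole key group at once
    for (ValueWritable v : values) {
        // Hadoop reuses the value instance, so buffering forces a copy per record...
        buffered.add(WritableUtils.clone(v, context.getConfiguration()));
    }
    // ...and one key with millions of packets exhausts the heap right here.
}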

Can you provide the stack trace for further analysis?
