Why does Spark write huge files to temporary local disk even without disk persistence or checkpointing?



I am running a small job on a cluster where each machine has 15 GB of memory and 8 GB of disk.

The job always ends up stuck, and the last error message is:

java.io.IOException: No space left on device
    at java.io.FileOutputStream.writeBytes(Native Method)
    at java.io.FileOutputStream.write(FileOutputStream.java:345)
    at org.apache.spark.storage.DiskBlockObjectWriter$TimeTrackingOutputStream$$anonfun$write$3.apply$mcV$sp(BlockObjectWriter.scala:86)
    at org.apache.spark.storage.DiskBlockObjectWriter.org$apache$spark$storage$DiskBlockObjectWriter$$callWithTiming(BlockObjectWriter.scala:221)
    at org.apache.spark.storage.DiskBlockObjectWriter$TimeTrackingOutputStream.write(BlockObjectWriter.scala:86)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
    at org.xerial.snappy.SnappyOutputStream.dumpOutput(SnappyOutputStream.java:300)
    at org.xerial.snappy.SnappyOutputStream.rawWrite(SnappyOutputStream.java:247)
    at org.xerial.snappy.SnappyOutputStream.write(SnappyOutputStream.java:107)
    at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1876)
    at java.io.ObjectOutputStream$BlockDataOutputStream.writeByte(ObjectOutputStream.java:1914)
    at java.io.ObjectOutputStream.writeFatalException(ObjectOutputStream.java:1575)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:350)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:42)
    at org.apache.spark.storage.DiskBlockObjectWriter.write(BlockObjectWriter.scala:195)
    at org.apache.spark.util.collection.ExternalSorter$$anonfun$writePartitionedFile$4$$anonfun$apply$2.apply(ExternalSorter.scala:751)
    at org.apache.spark.util.collection.ExternalSorter$$anonfun$writePartitionedFile$4$$anonfun$apply$2.apply(ExternalSorter.scala:750)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at org.apache.spark.util.collection.ExternalSorter$$anonfun$writePartitionedFile$4.apply(ExternalSorter.scala:750)
    at org.apache.spark.util.collection.ExternalSorter$$anonfun$writePartitionedFile$4.apply(ExternalSorter.scala:746)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at org.apache.spark.util.collection.ExternalSorter.writePartitionedFile(ExternalSorter.scala:746)
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:68)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:56)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:200)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

When this happens, the shuffle write size is 0.0 B and the input size is 3.4 MB. I would like to know what operation could so quickly consume the entire 5 GB of available disk space.

In addition, the storage level for the entire job is restricted to MEMORY_ONLY_SERIALIZED, and checkpointing is completely disabled.

If you know that your shuffle operations fit in memory, you can try setting spark.shuffle.spill to false (otherwise you will hit an OOM). At http://spark.apache.org/docs/latest/configuration.html you can find the options that control shuffle behavior, as well as the other common configuration options.
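For illustration, here is a minimal sketch of how that setting could be applied when building the job programmatically; the app name is a placeholder, and spark.shuffle.spill is the Spark 1.x option referred to above. This is only safe if every task's shuffle data really fits in executor memory, since spilling to disk is what this disables.

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch: keep shuffle data in memory instead of spilling to local disk.
    // If the shuffle does NOT fit in memory, tasks will fail with
    // OutOfMemoryError instead of writing spill files.
    val conf = new SparkConf()
      .setAppName("shuffle-spill-demo")      // placeholder app name
      .set("spark.shuffle.spill", "false")   // disable spilling to disk

    val sc = new SparkContext(conf)

The same option can also be passed on the command line, e.g. spark-submit --conf spark.shuffle.spill=false.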

MEMORY_ONLY_SERIALIZED applies to RDDs (i.e. to caching), not to shuffle data.
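As a small sketch of that distinction (the RDD and data below are placeholders, and the corresponding constant in the Scala API is StorageLevel.MEMORY_ONLY_SER): the storage level only controls how cached partitions are stored, while shuffle output from wide transformations is still written to the executors' local directories (spark.local.dir), which is what can fill a small disk.

    import org.apache.spark.storage.StorageLevel

    // Placeholder RDD; sc is an existing SparkContext.
    val pairs = sc.parallelize(1 to 1000000).map(i => (i % 100, i))

    // Cache the RDD serialized in memory only. This affects caching,
    // not where shuffle files are written.
    pairs.persist(StorageLevel.MEMORY_ONLY_SER)

    // A wide transformation like reduceByKey still produces shuffle
    // files under spark.local.dir on each executor.
    val sums = pairs.reduceByKey(_ + _)
    sums.count()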
