Apache Spark s3a committer - thread stack - out of memory issue



I could really use some help here:

We are running Spark 3.1.2 on a standalone cluster. Since we started using the s3a directory committer, the stability and performance of our Spark jobs have improved significantly!

Over the last few days, however, we have been completely baffled by a problem with the s3a directory committer, and we are wondering whether you have any idea what is going on.

Our Spark jobs fail with a Java OOM (or rather, a process limit) error:

An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
at java.base/java.lang.Thread.start0(Native Method)
at java.base/java.lang.Thread.start(Thread.java:803)
at java.base/java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:937)
at java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1343)
at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:118)
at java.base/java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:714)
at org.apache.spark.rpc.netty.DedicatedMessageLoop.$anonfun$new$1(MessageLoop.scala:174)
at org.apache.spark.rpc.netty.DedicatedMessageLoop.$anonfun$new$1$adapted(MessageLoop.scala:173)
at scala.collection.immutable.Range.foreach(Range.scala:158)
at org.apache.spark.rpc.netty.DedicatedMessageLoop.<init>(MessageLoop.scala:173)
at org.apache.spark.rpc.netty.Dispatcher.liftedTree1$1(Dispatcher.scala:75)
at org.apache.spark.rpc.netty.Dispatcher.registerRpcEndpoint(Dispatcher.scala:72)
at org.apache.spark.rpc.netty.NettyRpcEnv.setupEndpoint(NettyRpcEnv.scala:136)
at org.apache.spark.storage.BlockManager.<init>(BlockManager.scala:231)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:394)
at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:189)
at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:277)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:458)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:238)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.base/java.lang.Thread.run(Thread.java:834)

A Spark thread dump shows more than 5000 committer threads on the Spark driver! Here is an example:

Thread ID   Thread Name Thread State    Thread Locks
1047    s3-committer-pool-0 WAITING 
1449    s3-committer-pool-0 WAITING 
1468    s3-committer-pool-0 WAITING 
1485    s3-committer-pool-0 WAITING 
1505    s3-committer-pool-0 WAITING 
1524    s3-committer-pool-0 WAITING 
1529    s3-committer-pool-0 WAITING 
1544    s3-committer-pool-0 WAITING 
1549    s3-committer-pool-0 WAITING 
1809    s3-committer-pool-0 WAITING 
1972    s3-committer-pool-0 WAITING 
1998    s3-committer-pool-0 WAITING 
2022    s3-committer-pool-0 WAITING 
2043    s3-committer-pool-0 WAITING 
2416    s3-committer-pool-0 WAITING 
2453    s3-committer-pool-0 WAITING 
2470    s3-committer-pool-0 WAITING 
2517    s3-committer-pool-0 WAITING 
2534    s3-committer-pool-0 WAITING 
2551    s3-committer-pool-0 WAITING 
2580    s3-committer-pool-0 WAITING 
2597    s3-committer-pool-0 WAITING 
2614    s3-committer-pool-0 WAITING 
2631    s3-committer-pool-0 WAITING 
2726    s3-committer-pool-0 WAITING 
2743    s3-committer-pool-0 WAITING 
2763    s3-committer-pool-0 WAITING 
2780    s3-committer-pool-0 WAITING 
2819    s3-committer-pool-0 WAITING 
2841    s3-committer-pool-0 WAITING 
2858    s3-committer-pool-0 WAITING 
2875    s3-committer-pool-0 WAITING 
2925    s3-committer-pool-0 WAITING 
2942    s3-committer-pool-0 WAITING 
2963    s3-committer-pool-0 WAITING 
2980    s3-committer-pool-0 WAITING 
3020    s3-committer-pool-0 WAITING 
3037    s3-committer-pool-0 WAITING 
3055    s3-committer-pool-0 WAITING 
3072    s3-committer-pool-0 WAITING 
3127    s3-committer-pool-0 WAITING 
3144    s3-committer-pool-0 WAITING 
3163    s3-committer-pool-0 WAITING 
3180    s3-committer-pool-0 WAITING 
3222    s3-committer-pool-0 WAITING 
3242    s3-committer-pool-0 WAITING 
3259    s3-committer-pool-0 WAITING 
3278    s3-committer-pool-0 WAITING 
3418    s3-committer-pool-0 WAITING 
3435    s3-committer-pool-0 WAITING 
3452    s3-committer-pool-0 WAITING 
3469    s3-committer-pool-0 WAITING 
3486    s3-committer-pool-0 WAITING 
3491    s3-committer-pool-0 WAITING 
3501    s3-committer-pool-0 WAITING 
3508    s3-committer-pool-0 WAITING 
4029    s3-committer-pool-0 WAITING 
4093    s3-committer-pool-0 WAITING 
4658    s3-committer-pool-0 WAITING 
4666    s3-committer-pool-0 WAITING 
4907    s3-committer-pool-0 WAITING 
5102    s3-committer-pool-0 WAITING 
5119    s3-committer-pool-0 WAITING 
5158    s3-committer-pool-0 WAITING 
5175    s3-committer-pool-0 WAITING 
5192    s3-committer-pool-0 WAITING 
5209    s3-committer-pool-0 WAITING 
5226    s3-committer-pool-0 WAITING 
5395    s3-committer-pool-0 WAITING 
5634    s3-committer-pool-0 WAITING 
5651    s3-committer-pool-0 WAITING 
5668    s3-committer-pool-0 WAITING 
5685    s3-committer-pool-0 WAITING 
5702    s3-committer-pool-0 WAITING 
5722    s3-committer-pool-0 WAITING 
5739    s3-committer-pool-0 WAITING 
6144    s3-committer-pool-0 WAITING 
6167    s3-committer-pool-0 WAITING 
6289    s3-committer-pool-0 WAITING 
6588    s3-committer-pool-0 WAITING 
6628    s3-committer-pool-0 WAITING 
6645    s3-committer-pool-0 WAITING 
6662    s3-committer-pool-0 WAITING 
6675    s3-committer-pool-0 WAITING 
6692    s3-committer-pool-0 WAITING 
6709    s3-committer-pool-0 WAITING 
7049    s3-committer-pool-0 WAITING 

This is despite the fact that our settings should not allow more than 100 threads… or maybe there is something we are not understanding…

Here are our configuration and settings (a PySpark sketch of how they can be applied follows the list):

fs.s3a.threads.max                              100
fs.s3a.connection.maximum                       1000
fs.s3a.committer.threads                        16
fs.s3a.max.total.tasks                          5
fs.s3a.committer.name                           directory
fs.s3a.fast.upload.buffer                       disk
io.file.buffer.size                             1048576
mapreduce.outputcommitter.factory.scheme.s3a    org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory
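
To make the setup concrete, here is a minimal sketch of how these options can be applied from PySpark, assuming they are passed as spark.hadoop.* properties on a SparkSession builder; the values simply mirror the table above:

from pyspark.sql import SparkSession

# Sketch only: mirrors the s3a settings listed above by passing them as
# spark.hadoop.* properties, which Spark forwards to the Hadoop configuration.
spark = (
    SparkSession.builder
    .appName("s3a-directory-committer-config-sketch")
    .config("spark.hadoop.fs.s3a.threads.max", "100")
    .config("spark.hadoop.fs.s3a.connection.maximum", "1000")
    .config("spark.hadoop.fs.s3a.committer.threads", "16")
    .config("spark.hadoop.fs.s3a.max.total.tasks", "5")
    .config("spark.hadoop.fs.s3a.committer.name", "directory")
    .config("spark.hadoop.fs.s3a.fast.upload.buffer", "disk")
    .config("spark.hadoop.io.file.buffer.size", "1048576")
    .config(
        "spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a",
        "org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory",
    )
    .getOrCreate()
)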

We have tried different versions of the spark-hadoop-cloud library, but the problem is always the same:

https://repository.cloudera.com/content/repositories/releases/org/apache/spark/spark-hadoop-cloud_2.11/2.4.0-cdh6.3.2/spark-hadoop-cloud_2.11-2.4.0-cdh6.3.2.jar

https://repository.cloudera.com/artifactory/libs-release-local/org/apache/spark/spark-hadoop-cloud_2.11/2.4.0.7.0.3.0-79/spark-hadoop-cloud_2.11-2.4.0.7.0.3.0-79.jar

https://repo1.maven.org/maven2/org/apache/spark/spark-hadoop-cloud_2.12/3.2.0/spark-hadoop-cloud_2.12-3.2.0.jar

https://repository.cloudera.com/artifactory/libs-release-local/org/apache/spark/spark-hadoop-cloud_2.12/3.1.2.7.2.12.0-291/spark-hadoop-cloud_2.12-3.1.2.7.2.12.0-291.jar

We would greatly appreciate it if you could point us in the right direction 😊

Thank you for your time!

This will be HADOOP-16570, "S3A committers leak threads/raise OOM on job/task commit at scale".

Move up to the hadoop-3.3.0 binaries for the fix. Ideally to 3.3.1, which also fixes some other issues, in particular the duplicate job IDs coming from Spark. I am not sure how far back that fix was backported into CDH releases; I could work it out if you really need it now. Certainly not CDH 6.x.
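
As a quick sanity check (a sketch, using PySpark's internal _jvm py4j gateway), you can confirm which Hadoop build the driver actually loaded before and after the upgrade:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hadoop-version-check").getOrCreate()

# Ask the JVM for the Hadoop build on the driver classpath; the HADOOP-16570
# fix ships in 3.3.0 and later, so anything older still has the committer leak.
hadoop_version = spark.sparkContext._jvm.org.apache.hadoop.util.VersionInfo.getVersion()
print("Hadoop version seen by the driver:", hadoop_version)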

Check out this article on tuning for S3A.

Specifically, I would look at the following, even though it is catalogued under arrays:

You may need to perform careful tuning to reduce the risk of running out of memory, especially if the data is buffered in memory. There are a number of parameters which can be tuned:

The total number of threads available in the filesystem for data uploads or any other queued filesystem operation. This is set in fs.s3a.threads.max.

The number of operations which can be queued for execution, awaiting a thread. This is set in fs.s3a.max.total.tasks.

The number of blocks a single output stream can have active (that is, being uploaded by a thread or queued in the filesystem thread queue). This is set in fs.s3a.fast.upload.active.blocks.

How long an idle thread can stay in the thread pool before it is retired. This is set in fs.s3a.threads.keepalivetime.

I suspect you will find that reducing the thread count relieves the memory pressure.

I would also suggest tuning fs.s3a.fast.upload.active.blocks, which will likewise ease memory pressure. I think reducing the thread count should be your first step, since 100 is somewhat aggressive. You are probably limited by bandwidth anyway, so the extra threads are unlikely to do anything except consume memory.
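
A minimal sketch of that kind of tuning from PySpark; the values below are illustrative starting points, not tested recommendations for your workload:

from pyspark.sql import SparkSession

# Illustrative values only: fewer upload threads and fewer active blocks per
# output stream both reduce the memory held by the s3a upload machinery.
spark = (
    SparkSession.builder
    .appName("s3a-tuning-sketch")
    .config("spark.hadoop.fs.s3a.threads.max", "32")               # down from 100
    .config("spark.hadoop.fs.s3a.fast.upload.active.blocks", "2")  # fewer buffered blocks per stream
    .config("spark.hadoop.fs.s3a.max.total.tasks", "5")
    .config("spark.hadoop.fs.s3a.threads.keepalivetime", "60")     # seconds before idle threads retire
    .getOrCreate()
)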