Spark driver heap memory problem



I'm seeing a problem where I slowly run out of Java heap on the master node. Here is a simple example I created, which just repeats 200 times. On my setup the master runs out of memory after about an hour with the following error:

16/12/15 17:55:46 INFO YarnSchedulerBackend$YarnDriverEndpoint: Launching task 97578 on executor id: 9 hostname: ip-xxx-xxx-xx-xx
#
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError="kill -9 %p"
#   Executing /bin/sh -c "kill -9 20160"...

The code:

import org.apache.spark.sql.functions._
import org.apache.spark._

object MemTest {
  case class X(colval: Long, colname: Long, ID: Long)

  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("MemTest")
    val spark = new SparkContext(conf)
    val sc = org.apache.spark.sql.SQLContext.getOrCreate(spark)
    import sc.implicits._

    // Repeat the same job 200 times: build a 5M-row DataFrame, pivot it,
    // and materialize the result with count.
    for (a <- 1 to 200) {
      var df = spark.parallelize((1 to 5000000).map(x => X(x.toLong, x.toLong % 10, x.toLong / 10))).toDF()
      df = df.groupBy("ID").pivot("colname").agg(max("colval"))
      df.count
    }

    spark.stop()
  }
}

I'm running this on AWS EMR-5.1.0 with m4.xlarge instances (4 nodes plus 1 master). These are my Spark settings:

{
  "Classification": "spark-defaults",
  "Properties": {
    "spark.dynamicAllocation.enabled": "false",
    "spark.executor.instances": "16",
    "spark.executor.memory": "2560m",
    "spark.driver.memory": "768m",
    "spark.executor.cores": "1"
  }
},
{
    "Classification": "spark",
    "Properties": {
      "maximizeResourceAllocation": "false"
    }
},
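
For reference, the properties above map onto standard spark-submit options; a minimal sketch of an equivalent invocation (flag values copied from the config above, jar path from the run command further below):

spark-submit --num-executors 16 --executor-memory 2560m --driver-memory 768m --executor-cores 1 \
  --conf spark.dynamicAllocation.enabled=false \
  --class MemTest target/scala-2.11/simple-project_2.11-1.0.jar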

I compile with sbt using:

name := "Simple Project"
version := "1.0"
scalaVersion := "2.11.7"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.0.2" % "provided",
  "org.apache.spark" %% "spark-sql" % "2.0.2")

and then run it with:

spark-submit --class MemTest target/scala-2.11/simple-project_2.11-1.0.jar

Looking at the memory with jmap -histo, I can see java.lang.Long and scala.Tuple2 keep growing.
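
For reference, a minimal sketch of how that histogram can be captured against the running driver process (the PID here is copied from the kill line in the error output above and will of course differ between runs):

jmap -histo 20160 | head -n 20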

Are you sure the Spark version installed on the cluster is 2.0.2?

Or, if there are several Spark installations on the cluster, are you sure you are invoking the correct (2.0.2) spark-submit?
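
One quick way to check is to ask the spark-submit binary on the cluster which version it actually ships, and compare that against the 2.0.2 dependency declared in build.sbt above:

spark-submit --version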

(Unfortunately I can't comment yet, which is why I'm posting this as an answer.)
