How to build a truly local Apache Spark "fat" jar? A JRE memory issue?



Long story short: I have an application that uses Spark DataFrames and machine learning, plus ScalaFX for the GUI. I want to create one big "fat" jar so that it runs on any machine with a JVM.

I have spent hours researching ways to assemble jars and I am familiar with the sbt-assembly plugin. Here is my build.sbt:

lazy val root = (project in file(".")).
  settings(
    scalaVersion := "2.11.8",
    mainClass in assembly := Some("me.projects.MyProject.Main"),
    assemblyJarName in assembly := "MyProject_2.0.jar",
    test in assembly := {}
  )
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0" withSources() withJavadoc()
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.0.0" withSources() withJavadoc()
libraryDependencies += "org.apache.spark" %% "spark-mllib" % "2.0.2" withSources() withJavadoc()
libraryDependencies += "joda-time" % "joda-time" % "2.9.4" withJavadoc()
libraryDependencies += "org.scalactic" %% "scalactic" % "3.0.1" % "provided"
libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.1" % "test"
libraryDependencies += "org.scalafx" %% "scalafx" % "8.0.92-R10" withSources() withJavadoc()
libraryDependencies += "net.liftweb" %% "lift-json" % "2.6+" withSources() withJavadoc()
EclipseKeys.withSource := true
EclipseKeys.withJavadoc := true
// META-INF discarding
assemblyMergeStrategy in assembly := {
  case PathList("org","aopalliance", xs @ _*) => MergeStrategy.last
  case PathList("javax", "inject", xs @ _*) => MergeStrategy.last
  case PathList("javax", "servlet", xs @ _*) => MergeStrategy.last
  case PathList("javax", "activation", xs @ _*) => MergeStrategy.last
  case PathList("org", "apache", xs @ _*) => MergeStrategy.last
  case PathList("com", "google", xs @ _*) => MergeStrategy.last
  case PathList("com", "esotericsoftware", xs @ _*) => MergeStrategy.last
  case PathList("com", "codahale", xs @ _*) => MergeStrategy.last
  case PathList("com", "yammer", xs @ _*) => MergeStrategy.last
  case "about.html" => MergeStrategy.rename
  case "META-INF/ECLIPSEF.RSA" => MergeStrategy.last
  case "META-INF/mailcap" => MergeStrategy.last
  case "META-INF/mimetypes.default" => MergeStrategy.last
  case "plugin.properties" => MergeStrategy.last
  case "log4j.properties" => MergeStrategy.last
  case x =>
    val oldStrategy = (assemblyMergeStrategy in assembly).value
    oldStrategy(x)
}
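
For completeness, sbt-assembly itself has to be enabled in project/plugins.sbt; the plugin coordinates below are real, but the exact version is an assumption from the Spark 2.0 era:

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.3")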

This works fine on my Linux machine, which has Spark installed and configured. I had assembled ScalaFX jars before and opened them on Windows machines with no problem. However, this application also uses Spark, which produces the following:

ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: System memory 259522560 must be at least 471859200. Please increase the heap size using the --driver-memory option or spark.driver.memory in Spark configuration.
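
For context, 471859200 bytes is exactly 450 MB, the minimum heap that Spark 2.x's UnifiedMemoryManager demands from the driver JVM. A rough sketch of the check behind the error (paraphrased, not Spark's verbatim source):

// systemMemory is the JVM heap, which is fixed once the JVM has started;
// setting spark.driver.memory from code afterwards cannot change it.
val systemMemory = Runtime.getRuntime.maxMemory
val reservedMemory = 300L * 1024 * 1024                   // reserved system memory
val minSystemMemory = (reservedMemory * 1.5).ceil.toLong  // 471859200 bytes = 450 MB
require(systemMemory >= minSystemMemory,
  s"System memory $systemMemory must be at least $minSystemMemory.")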

Things I have tried:

  • Including/excluding % "provided" on the Spark dependencies in build.sbt.
  • Adding bigger and bigger values to the -Xms/-Xmx runtime parameters in the Windows machine's Java Runtime Environment settings.
  • Setting different values for spark.executor/driver memory when creating the SparkConf (in the Scala code), for example (a fuller sketch follows after this list):

    .set(" spark.executor.memory"," 12g").set(" spark.executor.driver"," 5G").set(" spark.driver.memory"," 5g")

The application otherwise works fine (when running from the Scala IDE, when running with spark-submit, and when opening the assembled jar on Linux).

Please let me know whether this is possible. It is a small project that runs a few machine learning operations (Spark) on some data, with a GUI (ScalaFX), hence the dependencies above.

Again, I do not want to set up a cluster or anything like that. I just want to access Spark's functionality by running the jar on any machine with a JRE. It is meant to be a small project.

When declaring the SparkConf, try using .set("spark.driver.memory", "5g"), provided, of course, that your machine has 5 GB of memory available.

This turned out to be a fairly generic JVM problem. Since the heap of an already-running JVM cannot be grown from inside the application, I solved it not just by adding runtime parameters, but by adding a new environment variable to the Windows system:

Name:  _JAVA_OPTIONS
Value: -Xms512M -Xmx1024M
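
_JAVA_OPTIONS applies those flags to every JVM launched on the machine. If you would rather size the heap per run, the standard java flags do the same thing for a single launch; a sketch, assuming the jar name from the build above:

java -Xms512M -Xmx1024M -jar MyProject_2.0.jar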
