Scala Spark jar built with IntelliJ IDEA Artifacts throws ClassNotFoundException



I have gone through various related suggestions, but I am still stuck.

What I have:

Spark: 2.3.1
Scala: 2.11.8
OS: Windows 10
IDE: IntelliJ IDEA
Code:
package: testpackage
merge.scala --> has a do() method
mymain.scala --> has the main() method, which calls merge.do()
Project Settings > Artifacts >
Main Class --> testpackage.mymain
Class Path --> <blank>

What works:

1. Runs properly in the IDE
2. JAR creation: the artifact is generated properly as testpackage.jar
3. I can see the classes (along with various other libraries) when I open testpackage.jar in WinRAR:
testpackage\merge$$anonfun$1.class
testpackage\merge$$anonfun$2.class
testpackage\merge$.class
testpackage\merge.class
testpackage\mymain$.class
testpackage\mymain.class
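A quick way to sanity-check the jar from outside the IDE (a sketch, assuming Python is available; `has_class` is an illustrative helper, not part of the project) is to list the archive entries and confirm the main class sits where the JVM expects it — `testpackage/mymain.class` — since `ClassNotFoundException: testpackage.mymain` usually means that entry is missing or at the wrong path inside the jar:

```python
import zipfile

def has_class(jar_path, fqcn):
    """Return True if the fully qualified class name exists in the jar.

    A class a.b.C must live at the archive entry a/b/C.class;
    jars are plain zip files, so zipfile can read them directly."""
    entry = fqcn.replace(".", "/") + ".class"
    with zipfile.ZipFile(jar_path) as jar:
        return entry in jar.namelist()

# Example (path is illustrative):
# has_class("testpackage.jar", "testpackage.mymain")
```

If this returns False while WinRAR appears to show the class, the class files are likely at the archive root (`mymain.class`) instead of under the `testpackage/` directory.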

What does not work:

spark-submit from the command prompt throws an exception:
java.lang.ClassNotFoundException: testpackage.mymain
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.util.Utils$.classForName(Utils.scala:238)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:851)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

More info:

1. spark-submit executed from the jar folder (~\out\artifacts\testpackage_jar)
2. testpackage.jar is about 128 MB

The same exception occurs if I try:

spark-submit testpackage.jar C:\temp\input.csv
spark-submit --class testpackage.mymain testpackage.jar C:\temp\input.csv
spark-submit --class mymain testpackage.jar C:\temp\input.csv

Also tried the following statement in build.sbt:

mainClass in (Compile, packageBin) := Some("testpackage.mymain")

Also tried this statement in build.sbt:

mainClass in (Compile, packageBin) := Some("mymain")
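For reference, a minimal build.sbt sketch for this setup (version numbers taken from above; the `provided` scope and the spark-sql dependency are assumptions, not the poster's actual build file). Note that `mainClass in (Compile, packageBin)` only sets the `Main-Class` manifest attribute for sbt's own `packageBin` task, so it would not affect a jar built through IntelliJ's Project Structure > Artifacts dialog:

```scala
// Minimal sketch, assuming an sbt-based build rather than IntelliJ Artifacts.
name := "testpackage"

scalaVersion := "2.11.8"

// "provided": spark-submit supplies Spark at runtime, which keeps the jar small.
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.3.1" % "provided"

// Sets Main-Class in the manifest for sbt's packageBin task only.
mainClass in (Compile, packageBin) := Some("testpackage.mymain")
```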

Also tried placing the jar in the Spark bin folder, which on my machine is

C:\Spark\spark-2.3.1-bin-hadoop2.7\bin

Tried with --master local[*] and some other combinations.
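Putting the pieces above together, the invocation one would expect to succeed once the jar layout is correct (class name, paths, and the input argument are the poster's own; this is a sketch, not a verified command line):

```shell
spark-submit --class testpackage.mymain --master local[*] testpackage.jar C:\temp\input.csv
```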

Thanks for your help!!

Update: finally found the solution, in case anyone else runs into this. Make sure there is only one entry, namely "'your package' compile output", under your package in Project Structure > Artifacts > your .jar > Output Layout.

It's strange, and I still don't know why it wasn't working before!

Now my jar is about 10 KB and it works fine! :)
