Running a Spark Scala application in Eclipse without SBT



Hi, I followed the link below and created a Spark Scala application in Eclipse using the SBT Eclipse plugin.

https://www.nodalpoint.com/development-and-deployment-of-spark-applications-with-scala-eclipse-and-sbt-part-1-installation-configuration/

I followed all the steps and was able to run the SampleApp with SBT. However, after importing the application into Eclipse, I cannot run it from there, although I can run it line by line in the Scala interpreter. Below is the error I get when running the application. Any idea what went wrong?

Using Spark's default log4j profile: org/apache/spark/log4j-
defaults.properties
17/09/12 22:27:55 INFO SparkContext: Running Spark version 1.6.0
17/09/12 22:27:56 WARN NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
17/09/12 22:27:56 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: A master URL must be set in your 
configuration
at org.apache.spark.SparkContext.<init>(SparkContext.scala:401)
at TowerLocator$.main(TowerLocator.scala:11)
at TowerLocator.main(TowerLocator.scala)
17/09/12 22:27:56 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" org.apache.spark.SparkException: A master URL 
must be set in your configuration
at org.apache.spark.SparkContext.<init>(SparkContext.scala:401)
at TowerLocator$.main(TowerLocator.scala:11)
at TowerLocator.main(TowerLocator.scala)

Thanks

When launching the application from Eclipse, you must specify the master URL in your configuration:

val conf = new SparkConf().setAppName("Sample Application").setMaster("local[*]")

When launching from the shell with spark-submit, you can specify it with the --master argument instead.
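For completeness, here is a minimal sketch of what the fixed entry point might look like. The object name `TowerLocator` comes from the stack trace; the RDD work inside is purely illustrative:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object TowerLocator {
  def main(args: Array[String]): Unit = {
    // setMaster("local[*]") runs Spark locally with one worker thread per core,
    // which is what lets the app start inside Eclipse without spark-submit.
    val conf = new SparkConf()
      .setAppName("Sample Application")
      .setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Illustrative work: count the elements of a small RDD.
    val count = sc.parallelize(1 to 100).count()
    println(s"count = $count")

    sc.stop()
  }
}
```

Note that hard-coding the master this way overrides whatever `--master` you pass on the command line, so for cluster deployment it is common to set the master only in a development build or guard it behind a check, and leave it to spark-submit in production.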
