ClassNotFoundException: Caused by: java.lang.ClassNotFoundException: csv.DefaultSource



I am trying to build an executable assembly (fat) jar, but I get the following error:

Caused by: java.lang.ClassNotFoundException: csv.DefaultSource

The problem occurs when reading the CSV files. The code works fine in the IDE. Please help.

The Scala code is as follows:

package extendedtable

import org.apache.log4j.{Level, Logger}
import org.apache.spark.SparkContext
import org.apache.spark.sql.{DataFrame, Row, SparkSession}

import scala.collection.mutable.ListBuffer

object mainObject {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession.builder()
      .appName("generationobj")
      .master("local[*]")
      .config("spark.sql.crossJoin.enabled", value = true)
      .getOrCreate()
    val sc: SparkContext = spark.sparkContext
    import spark.implicits._

    // Read both CSV inputs with a header row and inferred schemas
    val atomData = spark.read.format("csv")
      .option("header", "true")
      .option("inferSchema", "true")
      .load("Resources/atom.csv")
    val moleculeData = spark.read.format("csv")
      .option("header", "true")
      .option("inferSchema", "true")
      .load("Resources/molecule.csv")

    // Join the two tables on the shared molecule_id column
    val df = moleculeData.join(atomData, "molecule_id")

    // Collect the molecule ids to the driver as a List[String]
    val mid: List[Row] = moleculeData.select("molecule_id").collect.toList
    val listofmoleculeid: List[String] = mid.map(r => r.getString(0))

    // Register the joined DataFrame as a temp view and show it
    df.createTempView("table")
    df.show()
  }
}

Here is the build file (build.sbt):

name := "ExtendedTable"
version := "0.1"
scalaVersion := "2.11.12"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.3.0"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.3.0"
libraryDependencies += "org.apache.spark" %% "spark-mllib" % "2.3.0"
mainClass := Some("extendedtable.mainObject")
assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x => MergeStrategy.first
}
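
For reference, assemblyMergeStrategy and the assembly task come from the sbt-assembly plugin, which has to be enabled in project/plugins.sbt. A minimal sketch, assuming the 0.14.x line of sbt-assembly that was current around Spark 2.3 (the exact version here is an assumption):

// project/plugins.sbt -- enables the `assembly` task and assemblyMergeStrategy
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.6")  // version is illustrative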

Change the assemblyMergeStrategy as shown below, then rebuild the jar file.

You need the org.apache.spark.sql.sources.DataSourceRegister service file to end up inside your assembled jar; it ships inside the spark-sql jar. Your current merge strategy discards everything under META-INF, including this service-registration file, so at runtime Java's ServiceLoader cannot find the CSV data source and Spark falls back to looking for a class named csv.DefaultSource, which produces the error above.

The path is spark-sql_2.11-<version>.jar/META-INF/services/org.apache.spark.sql.sources.DataSourceRegister

This file contains the following list:

org.apache.spark.sql.execution.datasources.csv.CSVFileFormat
org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider
org.apache.spark.sql.execution.datasources.json.JsonFileFormat
org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat
org.apache.spark.sql.execution.datasources.text.TextFileFormat
org.apache.spark.sql.execution.streaming.ConsoleSinkProvider
org.apache.spark.sql.execution.streaming.TextSocketSourceProvider
org.apache.spark.sql.execution.streaming.RateSourceProvider

assemblyMergeStrategy in assembly := {
  case PathList("META-INF", "services", xs @ _*) => MergeStrategy.filterDistinctLines // Added this: keep service registrations, merging duplicates
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case _ => MergeStrategy.first
}
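
With that in place, you can sanity-check that the CSV data source is actually registered by asking Java's ServiceLoader for every DataSourceRegister implementation on the classpath. A minimal sketch (the CheckDataSources object name is made up for illustration):

import java.util.ServiceLoader
import org.apache.spark.sql.sources.DataSourceRegister
import scala.collection.JavaConverters._

object CheckDataSources {
  def main(args: Array[String]): Unit = {
    // Prints the short name of every data source registered via
    // META-INF/services/org.apache.spark.sql.sources.DataSourceRegister;
    // "csv" should appear if the merge strategy kept the service file.
    ServiceLoader.load(classOf[DataSourceRegister]).asScala
      .foreach(ds => println(ds.shortName()))
  }
}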

Submit the Spark job with the spark-submit command:

# Run application locally on 4 cores
./bin/spark-submit \
  --class extendedtable.mainObject \
  --master local[4] \
  /path/to/<your-jar>.jar

Reference: the Spark documentation.
