java.lang.NoSuchMethodError: scala.reflect.api.JavaUniverse.


java.lang.NoSuchMethodError: scala.reflect.api.JavaUniverse.runtimeMirror(Ljava/lang/ClassLoader;)Lscala/reflect/api/JavaMirrors$JavaMirror;
    at org.elasticsearch.spark.serialization.ReflectionUtils$.org$elasticsearch$spark$serialization$ReflectionUtils$$checkCaseClass(ReflectionUtils.scala:42)
    at org.elasticsearch.spark.serialization.ReflectionUtils$$anonfun$checkCaseClassCache$1.apply(ReflectionUtils.scala:84)

This looks like a Scala version incompatibility, but according to the documentation, Spark 2.1.0 and Scala 2.11.8 should work together.

Here is my pom.xml. This is just a test of writing to Elasticsearch from Spark via es-hadoop, and I don't know how to fix this exception.

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>cn.jhTian</groupId>
    <artifactId>sparkLink</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>
    <name>${project.artifactId}</name>
    <description>My wonderful Scala app</description>
    <inceptionYear>2015</inceptionYear>
    <licenses>
        <license>
            <name>My License</name>
            <url>http://....</url>
            <distribution>repo</distribution>
        </license>
    </licenses>
    <properties>
        <encoding>UTF-8</encoding>
        <scala.version>2.11.8</scala.version>
        <scala.compat.version>2.11</scala.compat.version>
    </properties>
    <repositories>
        <repository>
            <id>ainemo</id>
            <name>xylink</name>
            <url>http://10.170.209.180:8081/nexus/content/groups/public/</url>
        </repository>
    </repositories>
    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>2.1.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.6.4</version>
        </dependency>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>
        <!--<dependency>-->
            <!--<groupId>org.scala-lang</groupId>-->
            <!--<artifactId>scala-compiler</artifactId>-->
            <!--<version>${scala.version}</version>-->
        <!--</dependency>-->
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-reflect</artifactId>
            <version>${scala.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.6.4</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.11</artifactId>
            <version>2.1.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
            <version>2.1.0</version>
        </dependency>
        <dependency>
            <groupId>com.google.protobuf</groupId>
            <artifactId>protobuf-java</artifactId>
            <version>3.1.0</version>
        </dependency>
        <dependency>
            <groupId>org.elasticsearch</groupId>
            <artifactId>elasticsearch-hadoop</artifactId>
            <version>5.3.0</version>
        </dependency>
        <!-- Test -->
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.10</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.specs2</groupId>
            <artifactId>specs2-core_${scala.compat.version}</artifactId>
            <version>2.4.16</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.scalatest</groupId>
            <artifactId>scalatest_${scala.compat.version}</artifactId>
            <version>2.2.4</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>

Here is my code:

import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._
/**
  * Created by jhTian on 2017/4/19.
  */
object EsWrite {
  def main(args: Array[String]) {
    val sparkConf = new SparkConf()
      .set("es.nodes", "1.1.1.1")
      .set("es.port", "9200")
      .set("es.index.auto.create", "true")
      .setAppName("es-spark-demo")
    val sc = new SparkContext(sparkConf)
    val job1 = Job("C开发工程师","http://job.c.com","c公司","10000")
    val job2 = Job("C++开发工程师","http://job.c++.com","c++公司","10000")
    val job3 = Job("C#开发工程师","http://job.c#.com","c#公司","10000")
    val job4 = Job("Java开发工程师","http://job.java.com","java公司","10000")
    val job5 = Job("Scala开发工程师","http://job.scala.com","java公司","10000")
//    val numbers = Map("one" -> 1, "two" -> 2, "three" -> 3)
//    val airports = Map("arrival" -> "Otopeni", "SFO" -> "San Fran")
//    val rdd=sc.makeRDD(Seq(numbers,airports))
    val rdd=sc.makeRDD(Seq(job1,job2,job3,job4,job5))
    rdd.saveToEs("job/info")
    sc.stop()
  }
}
case class Job(jobName: String, jobUrl: String, companyName: String, salary: String)

Generally, a NoSuchMethodError means the caller was compiled against a different version of a class than the one found on the classpath at runtime (or that you have multiple versions of it on the classpath).
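One quick way to see which jar a class is actually loaded from at runtime is a check along these lines (a minimal sketch; the object name is illustrative and not part of the original post):

// Print the jar each Scala class is loaded from, to spot a scala-reflect
// jar that does not match scala-library on the runtime classpath.
object WhichJar {
  def locationOf(cls: Class[_]): String =
    Option(cls.getProtectionDomain.getCodeSource)
      .map(_.getLocation.toString)
      .getOrElse("(bootstrap classpath)")

  def main(args: Array[String]): Unit = {
    // scala.reflect.api.JavaUniverse ships in scala-reflect
    println(locationOf(Class.forName("scala.reflect.api.JavaUniverse")))
    // Predef ships in scala-library; both locations should report the same version
    println(locationOf(scala.Predef.getClass))
  }
}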

In your case, I'd guess es-hadoop was built against a different version of Scala. I haven't used Maven in a while, but a useful command should be mvn dependency:tree. Use its output to see which Scala version es-hadoop was built with, then configure your project to use the same Scala version.
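For example, to narrow the tree down to just the Scala and Elasticsearch artifacts (the -Dincludes filter is a standard option of the maven-dependency-plugin):

mvn dependency:tree -Dincludes=org.scala-lang
mvn dependency:tree -Dincludes=org.elasticsearch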

For a stable/reproducible build, I'd suggest using something like the maven-enforcer-plugin:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-enforcer-plugin</artifactId>
    <version>1.4.1</version>
    <executions>
        <execution>
            <id>enforce</id>
            <configuration>
                <rules>
                    <dependencyConvergence />
                </rules>
            </configuration>
            <goals>
                <goal>enforce</goal>
            </goals>
        </execution>
    </executions>
</plugin>
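With that in place, the build fails fast whenever two dependencies pull in diverging versions of the same artifact. The enforce goal binds to an early lifecycle phase by default, so even a plain validation run triggers the check:

mvn validate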

It can be annoying at first, but once you have sorted out all your dependencies, you shouldn't get problems like this anymore.

Use a dependency like this:

<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch-spark-20_2.11</artifactId>
    <version>5.2.2</version>
</dependency>

That artifact is built for Spark 2.0 and Scala 2.11.
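In the pom above, that means swapping the generic elasticsearch-hadoop artifact for the Spark-specific one (a sketch of just the changed dependency, assuming the rest of the pom stays as posted):

<!-- remove the generic uber artifact, whose Spark classes are likely
     compiled against a different Scala version -->
<!--
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch-hadoop</artifactId>
    <version>5.3.0</version>
</dependency>
-->
<!-- use the artifact built specifically for Spark 2.x on Scala 2.11 -->
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch-spark-20_2.11</artifactId>
    <version>5.2.2</version>
</dependency>

The Scala code itself should not need to change: import org.elasticsearch.spark._ and rdd.saveToEs(...) are provided by the Spark-specific artifact as well.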
