Spark: ClassNotFoundException when running the KafkaWordCount example



I'm running into a problem with Spark Streaming and Kafka on a CDH 5 cluster, using the following command:

spark-submit --master yarn --deploy-mode client \
--class org.apache.spark.examples.streaming.KafkaWordCount \
/usr/lib/spark/examples/lib/spark-examples-1.6.0-cdh5.7.0-hadoop2.6.0-cdh5.7.0.jar \
zk1,zk2,zk3 group topic 1

Note that the actual job has to run in client mode, hence the deploy-mode setting. Executing the command above results in the following exception (on the driver side):

Exception in thread "main" java.lang.NoClassDefFoundError: kafka/serializer/StringDecoder
at org.apache.spark.streaming.kafka.KafkaUtils$.createStream(KafkaUtils.scala:66)
at org.apache.spark.examples.streaming.KafkaWordCount$.main(KafkaWordCount.scala:57)
at org.apache.spark.examples.streaming.KafkaWordCount.main(KafkaWordCount.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: kafka.serializer.StringDecoder
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 12 more

kafka.serializer.StringDecoder is, of course, present in the spark-examples jar. Putting that jar on the Hadoop classpath also works around the problem, but I'm looking for a better (more maintainable) solution, or at least an explanation of why the job can't find a class that is bundled in the very same jar :)
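To double-check that the class really is in the jar, listing its contents with standard JDK tooling works:

jar tf /usr/lib/spark/examples/lib/spark-examples-1.6.0-cdh5.7.0-hadoop2.6.0-cdh5.7.0.jar | grep kafka/serializer/StringDecoder

One alternative I've been considering is to ship the Kafka dependency explicitly with --packages, so spark-submit resolves it at launch time instead of relying on whatever happens to be on the cluster classpath. A sketch (the coordinate assumes the stock Spark 1.6 Kafka 0.8 integration, not the CDH build):

spark-submit --master yarn --deploy-mode client \
--packages org.apache.spark:spark-streaming-kafka_2.10:1.6.0 \
--class org.apache.spark.examples.streaming.KafkaWordCount \
/usr/lib/spark/examples/lib/spark-examples-1.6.0-cdh5.7.0-hadoop2.6.0-cdh5.7.0.jar \
zk1,zk2,zk3 group topic 1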

Any ideas? Thanks

Some additional information:

  • Other Spark examples run fine (e.g. SparkPi)
  • Hadoop version is 2.6.0-cdh5.7.0
  • Spark version is 1.6.0
  • YARN classpath (see the note after this list): /etc/hadoop/conf:/etc/hadoop/conf:/etc/hadoop/conf:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/.//*:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/.//*:/usr/lib/hadoop-yarn/lib/*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-mapreduce/lib/*:/usr/lib/hadoop-mapreduce/.//*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-yarn/lib/*
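For reference, the classpath above can be reproduced on any node with Hadoop's own tooling:

yarn classpath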

It turned out that Spark 1.6 requires Kafka 0.8.2, while I had 0.8.1 installed. After upgrading, everything worked like a charm :)
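For anyone who hits the same thing: a quick way to confirm which Kafka version is actually installed is to look at the client jar's file name. The path below assumes a package-based CDH install (parcel installs keep the jars elsewhere):

ls /usr/lib/kafka/libs/kafka_2.10-*.jar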
