I've found that there are two ways to consume a Kafka topic in Spark Streaming (Spark 2.0):
1) Using KafkaUtils.createDirectStream to fetch the stream every k seconds; see this documentation (a minimal sketch of this approach is shown right after this list).
2) Using the new Kafka source: sqlContext.read.format("json").stream("kafka://KAFKA_HOST"), which creates an unbounded DataFrame using Spark 2.0's new feature, Structured Streaming; the relevant documentation is here.
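For reference, method 1) looks roughly like the following. This is only a minimal sketch, assuming the spark-streaming-kafka-0-10 integration; the broker addresses, topic name, group id, and batch interval are placeholders:

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.KafkaUtils

// Standard Kafka consumer settings (all values here are placeholders)
val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "host1:port1,host2:port2",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "example-group",
  "auto.offset.reset" -> "latest",
  "enable.auto.commit" -> (false: java.lang.Boolean)
)

// Micro-batches every k = 5 seconds
val ssc = new StreamingContext(new SparkConf().setAppName("KafkaDirectExample"), Seconds(5))

// Direct stream that reads (key, value) records from the topic
val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  PreferConsistent,
  Subscribe[String, String](Array("topic1"), kafkaParams)
)

// Print the first records of each batch
stream.map(record => (record.key, record.value)).print()

ssc.start()
ssc.awaitTermination()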
Method 1) works, but 2) does not; I get the following error:
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.spark.sql.DataFrameReader.stream(Ljava/lang/String;)Lorg/apache/spark/sql/Dataset;
...
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:729)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
My questions are: what does "kafka://KAFKA_HOST" refer to, and how should I fix this error?
Thanks in advance!
Spark 2.0 does not yet support Kafka as a source for unbounded DataFrames/Datasets; support is planned for Spark 2.1.
Edit (6.12.2016):
Kafka 0.10 is now officially supported in Spark 2.0.2:
// Requires the spark-sql-kafka-0-10 artifact on the classpath
import spark.implicits._   // provides the encoder for .as[(String, String)]

// Subscribe to one topic and create a streaming DataFrame from Kafka
val ds1 = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
  .option("subscribe", "topic1")
  .load()

// Kafka keys and values arrive as binary, so cast them to strings
ds1
  .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
  .as[(String, String)]
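To actually run it, the result has to be fed into a streaming sink. A minimal sketch that continues from ds1 above and simply prints each micro-batch to the console (the console sink and output mode here are just for illustration):

// Start a streaming query that writes the (key, value) pairs to the console
val query = ds1
  .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
  .as[(String, String)]
  .writeStream
  .format("console")
  .outputMode("append")
  .start()

// Block until the query is stopped or fails
query.awaitTermination()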