val topics= "test"
val zkQuorum="localhost:2181"
val group="test-consumer-group"
val sparkConf = new org.apache.spark.SparkConf()
.setAppName("XXXXX")
.setMaster("local[*]")
.set("cassandra.connection.host", "127.0.0.1")
.set("cassandra.connection.port", "9042")
val ssc = new StreamingContext(sparkConf, Seconds(2))
ssc.checkpoint("checkpoint")
val topicMap = topics.split(",").map((_, numThreads.toInt)).toMap
val lines = KafkaUtils.createStream(ssc, zkQuorum, group, topicMap).map(_._2)
I get a DStream of JSON like this:
[{"id":100,"firstName":"Beulah","lastName":"Fleming","gender":"female","ethnicity":"SpEd","height":167,"address":27,"createdDate":1494489672243,"lastUpdatedDate":1494489672244,"isDeleted":0},{"id":101,"firstName":"Traci","lastName":"Summers","gender":"female","ethnicity":"Frp","height":181,"address":544,"createdDate":1494510639611,"lastUpdatedDate":1494510639611,"isDeleted":0}]
With the above program, I receive the JSON data in a DStream. How would I process this DStream and store the data in Cassandra or Elasticsearch? In other words, how do I extract the records from the JSON in the DStream and save them to Cassandra?
You need to import com.datastax.spark.connector._ (and, for saving DStreams, com.datastax.spark.connector.streaming._), then convert the elements of the stream into an appropriate case class:
case class Record(id: String, firstName: String, ...)
val columns = SomeColumns("id", "first_name", ...)
val mapped = lines.map(whateverDataYouHave => functionThatReturnsARecordObject)
and save it with the implicit saveToCassandra function:
mapped.saveToCassandra(KEYSPACE_NAME, TABLE_NAME, columns)
For more information, check the documentation: https://github.com/datastax/spark-cassandra-connector/blob/master/doc/5_saving.md
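For completeness, here is a minimal end-to-end sketch of the above, assuming json4s for JSON parsing; the keyspace test_ks and table records are hypothetical names, so adapt them to your schema.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils
import com.datastax.spark.connector.SomeColumns
import com.datastax.spark.connector.streaming._ // adds saveToCassandra to DStreams
import org.json4s._
import org.json4s.jackson.JsonMethods._

// Field names follow the JSON keys; the connector's default mapper
// translates camelCase fields to snake_case Cassandra columns.
case class Record(id: Int, firstName: String, lastName: String, gender: String,
                  ethnicity: String, height: Int, address: Int,
                  createdDate: Long, lastUpdatedDate: Long, isDeleted: Int)

object KafkaToCassandra {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf()
      .setAppName("KafkaToCassandra")
      .setMaster("local[*]")
      .set("spark.cassandra.connection.host", "127.0.0.1")
      .set("spark.cassandra.connection.port", "9042")
    val ssc = new StreamingContext(sparkConf, Seconds(2))

    val topicMap = "test".split(",").map((_, 1)).toMap
    val lines = KafkaUtils
      .createStream(ssc, "localhost:2181", "test-consumer-group", topicMap)
      .map(_._2)

    // Each Kafka message holds a JSON array, so one message yields many records.
    val records = lines.flatMap { json =>
      implicit val formats: Formats = DefaultFormats // declared inside the closure to stay serializable
      parse(json).extract[List[Record]]
    }

    // Hypothetical keyspace/table; the columns must exist in the Cassandra table.
    records.saveToCassandra("test_ks", "records",
      SomeColumns("id", "first_name", "last_name", "gender", "ethnicity", "height",
                  "address", "created_date", "last_updated_date", "is_deleted"))

    ssc.start()
    ssc.awaitTermination()
  }
}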