How to split sentences into words inside map(case(key, value) => ...) in Scala Spark


val sparkConf = new SparkConf().setAppName("ShortTwitterAnalysis").setMaster("local[2]")
val sc = new SparkContext(sparkConf)
val text = sc.textFile("/home/tobbyj/HW1_INF553/shortTwitter.txt")
val twitter = text
  .map(_.toLowerCase)
  .map(_.replace("\t", ""))
  .map(_.replace("\"", ""))
  .map(_.replace("\n", ""))
  .map(_.replace(".", ""))
  .map(_.replaceAll("[\\p{C}]", ""))
  .map(_.split("text:")(1).split(",source:")(0))
  .zipWithIndex.map(_.swap)

Using the code above, I get results like this:

(0,a rose by any other name would smell as sweet)
(1,a rose is a rose is a rose)
(4,rt @nba2k: the battle of two young teams tough season but one will emerge victorious who will it be? lakers or 76ers? https://tco/nukkjqu2026)
(2,love is like a rose the joy of all the earth)
(5,i was going to bake a cake and listen to the football flour refund?)
(3,at christmas i no more desire a rose than wish a snow in may’s new-fangled mirth)

However, the result I want has the keys start from 1 and the values split into separate words, like below (so you can see what I mean, although I'm not sure it will look exactly like this):

(1,(a, rose, by, any, other, name, would, smell, as, sweet))
(2,(a, rose, is, a, rose, is, a, rose))
...

The code I used is

.map{case(key, value)=>(key+1, value.split(" "))}

but it gives me the result below:

(1,[Ljava.lang.String;@1dff58b)
(2,[Ljava.lang.String;@167179a3)
(3,[Ljava.lang.String;@73e8c7d7)
(4,[Ljava.lang.String;@7bffa418)
(5,[Ljava.lang.String;@2d385beb)
(6,[Ljava.lang.String;@4f1ab87e)
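(As a side note on that output: `[Ljava.lang.String;@...` is just the default `toString` of the Java array that `String.split` returns; the words are all there, only the printing is unreadable. A small sketch in plain Scala, no Spark needed, showing how to render the array readably with `toList` or `mkString`:)

```scala
val words: Array[String] = "a rose by any other name".split(" ")

// Arrays inherit toString from Java, so printing one yields a reference string:
println(words)                            // e.g. [Ljava.lang.String;@1dff58b

// Convert to an immutable collection, or join the elements, for readable output:
println(words.toList)                     // List(a, rose, by, any, other, name)
println(words.mkString("(", ", ", ")"))   // (a, rose, by, any, other, name)
```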

Any suggestions? After this step, I will map them into (1,a), (1,rose), (1,by) ... (2,love), (2,rose), ....

You can import org.apache.spark.rdd.PairRDDFunctions (documented here) to work with key-value pairs more easily.

At that point, you can use the flatMapValues method to get what you want; here is a minimal working example (if you are in the Spark console, just copy from the line containing val tweets onward):

import org.apache.spark._
import org.apache.spark.rdd.PairRDDFunctions
val conf = new SparkConf().setAppName("test").setMaster("local[*]")
val sc   = new SparkContext(conf)
val tweets = sc.parallelize(Seq(
  "this is my first tweet", 
  "and this is my second", 
  "ok this is getting boring"))
val results =
  tweets.
    zipWithIndex.
    map(_.swap).
    flatMapValues(_.split(" "))
results.collect.foreach(println)

This is the output of those few lines of code:

(0,this)
(0,is)
(0,my)
(0,first)
(0,tweet)
(1,and)
(1,this)
(1,is)
(1,my)
(1,second)
(2,ok)
(2,this)
(2,is)
(2,getting)
(2,boring)
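Since the question also asks for keys starting at 1, you can shift the index right after zipWithIndex and before flattening. A minimal sketch using plain Scala collections (the same zipWithIndex/map pipeline works on RDDs, with flatMapValues playing the role of the final flatMap):

```scala
val tweets = Seq(
  "a rose by any other name",
  "love is like a rose")

val results =
  tweets
    .zipWithIndex                                        // (sentence, 0-based index)
    .map { case (sentence, idx) => (idx + 1, sentence) } // keys now start at 1
    .flatMap { case (key, sentence) =>                   // on an RDD: flatMapValues(_.split(" "))
      sentence.split(" ").map(word => (key, word))
    }

results.foreach(println)
// (1,a) (1,rose) (1,by) ... (2,love) (2,is) ... (2,rose)
```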

If you are interested in a small example of how to analyze a live Twitter feed with Spark Streaming, you can find one here.
