Why does getInt in RDD[Row].map give "error: value getInt is not a member of Any"?



I'm new to Scala and Spark, but I need to use them for my bachelor's degree final project.

I'm trying to build a K-means model from the data. The data comes from Kaggle: https://www.kaggle.com/murderaccountability/homicide-reports

I read the file containing the data and create a case class, like this:

case class CrimeReport(Record_ID: String, Agency_Name: String,
  City: String, State: String, Year: Int, Month: Int, Crime_Type: String,
  Crime_Solved: String, Victim_Sex: String, Victim_Age: Int, Victim_Race: String,
  Perpetrator_Sex: String, Perpetrator_Age: String, Perpetrator_Race: String,
  Relationship: String, Victim_Count: String)

I map the data onto the case class. For example, Month comes in as a String but I need an Int (to build my feature vector later), so I defined a function to parse it:

// Parse Month: String ===> Int
def parseMonthToNumber(month: String): Int = month match {
  case "January"   => 1
  case "February"  => 2
  case "March"     => 3
  case "April"     => 4
  case "May"       => 5
  case "June"      => 6
  case "July"      => 7
  case "August"    => 8
  case "September" => 9
  case "October"   => 10
  case "November"  => 11
  case _           => 12
}
val data = sc.textFile("... .csv")
val data_split = data.map(line => line.split(","))
val allData = data_split.map(p => CrimeReport(p(0).toString,
  p(1).toString, p(2).toString, p(3).toString, parseInt(p(4)),
  parseMonthToNumber(p(5)), p(6).toString, p(7).toString, p(8).toString,
  parseInt(p(9)), p(10).toString, p(11).toString, p(12).toString,
  p(13).toString, p(14).toString, p(15).toString))
//DataFrame
val allDF = allData.toDF()
//convert data to RDD which will be passed to KMeans
val rowsRDD = allDF.rdd.map(x =>
  (x(0).getString, x.getString(1), x.getString(2), x.getString(3),
   x(4).getInt, x(5).getInt, x.getString(6), x.getString(7), x.getString(8),
   x(9).getInt, x.getString(10), x.getString(11), x.getString(12),
   x.getString(13), x.getString(14), x.getString(15)))

But I get this error:

error: value getInt is not a member of Any
(x(0).getString, x.getString(1), x.getString(2), x.getString(3), x(4).getInt, x(5).getInt, x.getString(6), x.getString(7), x.getString(8), x(9).getInt, x.getString(10), x.getString(11), x.getString(12), x.getString(13), x.getString(14), x.getString(15))
                                    ^

Why?

I'm going to assume the latest Spark version, 2.1.1.

Let me start with a question for you: why do you convert the DataFrame to an RDD[Row] to run KMeans at all, given that Spark ships a DataFrame-based KMeans implementation?

Read about KMeans in Spark MLlib.

I would not do it this way, because Spark MLlib's RDD-based API is deprecated:

"This page documents sections of the MLlib guide for the RDD-based API (the spark.mllib package). Please see the MLlib Main Guide for the DataFrame-based API (the spark.ml package), which is now the primary API for MLlib."
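For comparison, here is a minimal sketch of the DataFrame-based API. The choice of feature columns (the three numeric fields of your case class) is mine, purely for illustration:

import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.ml.feature.VectorAssembler

// Assemble the numeric columns into the single vector column KMeans expects
val assembler = new VectorAssembler()
  .setInputCols(Array("Year", "Month", "Victim_Age"))
  .setOutputCol("features")
val featuresDF = assembler.transform(allDF)

// DataFrame-based KMeans: 2 clusters, at most 10 iterations
val kmeans = new KMeans().setK(2).setMaxIter(10)
val model = kmeans.fit(featuresDF)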


With that said, let's look at the problem you are facing.

If I were you (and disregarding my advice to stick with Spark MLlib's DataFrame-based API), I'd do the following:

// val allDF = allData.toDF()
val allDF = allData.toDS

With the above you get a Dataset[CrimeReport], which is far more pleasant to work with than bare Rows.

Once the conversion is done, you can execute:

val rowsRDD = allDF.rdd.map { x => ... }

where x is of your type CrimeReport, and I'm sure you'll know what to do with it.
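For example, a sketch (the field selection is mine, and the mllib Vectors import assumes you stay on the RDD-based KMeans): with the Dataset[CrimeReport] above, fields are accessed by name with their declared types:

import org.apache.spark.mllib.linalg.Vectors

// x is a CrimeReport, so the numeric fields come back as Int, not Any
val vectors = allDF.rdd.map { x =>
  Vectors.dense(x.Year.toDouble, x.Month.toDouble, x.Victim_Age.toDouble)
}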


To answer your question directly, the reason for the error

error: value getInt is not a member of Any

is that x(5) (and the other x(i) calls) is of type Any, so you have to either cast it to your type or simply replace x(5) with x.getInt(5).

See the scaladoc for Row.
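Applied to the snippet from the question (with allData.toDF(), so x is a Row), the map compiles once every access goes through a typed getter:

val rowsRDD = allDF.rdd.map(x =>
  (x.getString(0), x.getString(1), x.getString(2), x.getString(3),
   x.getInt(4), x.getInt(5), x.getString(6), x.getString(7), x.getString(8),
   x.getInt(9), x.getString(10), x.getString(11), x.getString(12),
   x.getString(13), x.getString(14), x.getString(15)))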

You should define the attributes you want to pass to Vectors.dense as Int/Double.
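Concretely, that means changing the case class from the question so the clustering fields are numeric; a sketch matching the mapping below:

case class CrimeReport(Record_ID: String, Agency_Name: String,
  City: String, State: String, Year: Int, Month: Int, Crime_Type: String,
  Crime_Solved: Int, Victim_Sex: Int, Victim_Age: Int, Victim_Race: Int,
  Perpetrator_Sex: String, Perpetrator_Age: String, Perpetrator_Race: String,
  Relationship: String, Victim_Count: String)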

Then, when you map the data from the file onto the case class, call the functions you defined earlier, as you can see here:

val data_split = data.map(line => line.split(","))
val allData = data_split.map(p =>
  CrimeReport(p(0).toString, p(1).toString, p(2).toString, p(3).toString,
    parseInt(p(4)), parseMonthToNumber(p(5)), p(6).toString, parseSolved(p(7)),
    parseSex(p(8)), parseInt(p(9)), parseRaceToNumber(p(10)), p(11).toString,
    p(12).toString, p(13).toString, p(14).toString, p(15).toString))

The functions are:

// Filter and clean data => Crime Solved
def parseSolved(solved: String): Int = solved match {
  case "Yes" => 1
  case _     => 0
}

Or:

// Parse Victim_Race: String ===> Int
def parseRaceToNumber(crType: String): Int = crType.split("/")(0) match {
  case "White"           => 1
  case "Black"           => 2
  case "Asian"           => 3
  case "Native American" => 4
  case _                 => 0
}

How can we use KMeans when the case class holds these values as Strings rather than Doubles? The code I have below won't work, because the vectors require Doubles.

// Passing in Crime_Type, Crime_Solved, Perpetrator_Race to KMeans as
// the attributes we want to use to assign the instance to a cluster.
val vectors = allDF.rdd.map(r => Vectors.dense(r.Crime_Type, r.Crime_Solved, r.Perpetrator_Race))
// KMeans model with 2 clusters and 10 iterations
val kMeansModel = KMeans.train(vectors, 2, 10)
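A sketch of how this can work once the fields are parsed to Int as shown above (it assumes the revised case class, the RDD-based spark.mllib API, and allData as the RDD[CrimeReport] built earlier; Crime_Type stays a String and would need its own numeric encoding first):

import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

// Int fields convert cleanly to the Doubles that Vectors.dense requires
val vectors = allData.map(r =>
  Vectors.dense(r.Crime_Solved.toDouble, r.Victim_Race.toDouble))
vectors.cache()

// KMeans model with 2 clusters and 10 iterations
val kMeansModel = KMeans.train(vectors, 2, 10)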
