I am using the Spark MLlib PrefixSpan algorithm. I have code that worked in Spark 1.6, but we recently moved to Spark 2.2.
I have a dataframe like this:
viewsPurchasesGrouped: org.apache.spark.sql.DataFrame = [session_id: decimal(29,0), view_product_ids: array<bigint> ... 1 more field]
root
|-- session_id: decimal(29,0) (nullable = true)
|-- view_product_ids: array (nullable = true)
| |-- element: long (containsNull = true)
|-- purchase_product_ids: array (nullable = true)
| |-- element: long (containsNull = true)
In Spark 1.6, I used this code to transform it into the form MLlib expects:
import scala.collection.mutable.WrappedArray
val viewsPurchasesRddString = viewsPurchasesGrouped.map( row =>
  Array(
    Array(row.getAs[WrappedArray[String]](1).toArray),
    Array(row.getAs[WrappedArray[String]](2).toArray)
  )
)
Since the switch, this no longer works.
I tried this:
val viewsPurchasesRddString2 = viewsPurchasesGrouped.select("view_product_ids", "purchase_product_ids").rdd.map( row =>
  Array(
    row.getSeq[Long](0).toArray,
    row.getSeq[Long](1).toArray
  )
)
and got this confusing error message, which suggests it took session_id and purchase_product_ids from the original dataframe rather than view_product_ids and purchase_product_ids:
Job aborted due to stage failure: [...] scala.MatchError: [14545234113341303814564569524,WrappedArray(123, 234, 456, 678, 789)]
I also tried this:
val viewsPurchasesRddString = viewsPurchasesGrouped.map {
  case Row(session_id: Long, view_product_ids: Array[Long], purchase_product_ids: Array[Long]) =>
    (view_product_ids, purchase_product_ids)
}
This fails with:
viewsPurchasesRddString: org.apache.spark.sql.Dataset[(Array[Long], Array[Long])] = [_1: array<bigint>, _2: array<bigint>]
prefixSpan: org.apache.spark.mllib.fpm.PrefixSpan = org.apache.spark.mllib.fpm.PrefixSpan@10d69876
<console>:67: error: overloaded method value run with alternatives:
[Item, Itemset <: Iterable[Item], Sequence <: Iterable[Itemset]](data: org.apache.spark.api.java.JavaRDD[Sequence])org.apache.spark.mllib.fpm.PrefixSpanModel[Item] <and>
[Item](data: org.apache.spark.rdd.RDD[Array[Array[Item]]])(implicit evidence$1:
scala.reflect.ClassTag[Item])org.apache.spark.mllib.fpm.PrefixSpanModel[Item] cannot be applied to (org.apache.spark.sql.Dataset[(Array[Long], Array[Long])])
val model = prefixSpan.run(viewsPurchasesRddString)
^
How do I port my code correctly?
Your working Spark 1.6 code reads these columns as WrappedArray[String], so you shouldn't switch to accessing them with Seq[Long]. The real difference is that in Spark 1.6, map on a DataFrame automatically dropped down to the RDD API, whereas in Spark 2 you have to call rdd.map explicitly to get the same behavior. So I'd suggest this should work:
val viewsPurchasesRddString = viewsPurchasesGrouped.rdd.map( row =>
  Array(
    Array(row.getAs[WrappedArray[String]](1).toArray),
    Array(row.getAs[WrappedArray[String]](2).toArray)
  )
)
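For completeness, here is a minimal sketch of feeding that RDD into PrefixSpan; the minSupport and maxPatternLength values are illustrative placeholders, not recommendations for your data:

import org.apache.spark.mllib.fpm.PrefixSpan

// illustrative parameters -- tune minSupport / maxPatternLength for your data
val prefixSpan = new PrefixSpan()
  .setMinSupport(0.1)
  .setMaxPatternLength(5)

val model = prefixSpan.run(viewsPurchasesRddString)

// print a few frequent sequential patterns together with their counts
model.freqSequences.take(10).foreach { fs =>
  println(s"${fs.sequence.deep} -> ${fs.freq}")
}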