Converting a vector column to an Array[Double] column in Scala Spark



I have a dataframe doubleSeq with the following structure

res274: org.apache.spark.sql.DataFrame = [finalFeatures: vector]

The first record of that column looks like this

res281: org.apache.spark.sql.Row = [[3.0,6.0,-0.7876947819954485,-0.21757635218517163,0.9731844373162398,-0.6641741696340383,-0.6860072219935377,-0.2990737363481845,-0.7075863760365155,0.8188108975549018,-0.8468559840943759,-0.04349947247406488,-0.45236764452589984,1.0333959313820456,0.6097566070878347,-0.7106619551471779,-0.7750330808435969,-0.08097610412658443,-0.45338437108038904,-0.2952869863393396,-0.30959772365257004,0.6988768123463287,0.17049117199049213,3.2674649019757385,-0.8333373234944124,1.8462942520757128,-0.49441222531240125,-0.44187299748074166,-0.300810826687287]]

I want to extract the array of doubles

[3.0,6.0,-0.7876947819954485,-0.21757635218517163,0.9731844373162398,-0.6641741696340383,-0.6860072219935377,-0.2990737363481845,-0.7075863760365155,0.8188108975549018,-0.8468559840943759,-0.04349947247406488,-0.45236764452589984,1.0333959313820456,0.6097566070878347,-0.7106619551471779,-0.7750330808435969,-0.08097610412658443,-0.45338437108038904,-0.2952869863393396,-0.30959772365257004,0.6988768123463287,0.17049117199049213,3.2674649019757385,-0.8333373234944124,1.8462942520757128,-0.49441222531240125,-0.44187299748074166,-0.300810826687287]

from this. So far I have tried

doubleSeq.head(1)(0)(0)

Any = [3.0,6.0,-0.7876947819954485,-0.21757635218517163,0.9731844373162398,-0.6641741696340383,-0.6860072219935377,-0.2990737363481845,-0.7075863760365155,0.8188108975549018,-0.8468559840943759,-0.04349947247406488,-0.45236764452589984,1.0333959313820456,0.6097566070878347,-0.7106619551471779,-0.7750330808435969,-0.08097610412658443,-0.45338437108038904,-0.2952869863393396,-0.30959772365257004,0.6988768123463287,0.17049117199049213,3.2674649019757385,-0.8333373234944124,1.8462942520757128,-0.49441222531240125,-0.44187299748074166,-0.300810826687287]

which does not solve my problem, since the result is of type Any.

Scala Spark - split vector column into separate columns in a Spark DataFrame

does not solve my problem either, but it is a pointer in the right direction.

So you want to extract a Vector from a Row and turn it into an Array[Double].

The problem with your code is that the get method (and the implicit apply method you are using) returns an object of type Any. Indeed, Row is a generic, unparameterized container, so there is no way to know at compile time which types it holds. It is a bit like lists in Java 1.4 and earlier. To solve this in Spark, you can use the getAs method, which you can parameterize with a type of your choice.

In your case, you appear to have a dataframe whose column contains vectors (org.apache.spark.ml.linalg.Vector).

import org.apache.spark.ml.linalg._
val firstRow = df.head(1)(0) // or simply df.head
val vect : Vector = firstRow.getAs[Vector](0)
// or all in one: df.head.getAs[Vector](0)
// to transform into a regular array
val array : Array[Double] = vect.toArray

Also note that you can access the column by name, like this:

val vect : Vector = firstRow.getAs[Vector]("finalFeatures")
