I am using this code to pull data from Hive into Spark:
val hc = new org.apache.spark.sql.hive.HiveContext(sc)
val MyTab = hc.sql("select * from svm_file")
This gives me a DataFrame:
scala> MyTab.show()
+--------------------+
| line|
+--------------------+
|0 2072:1 8594:1 7...|
|0 8609:3 101617:1...|
| 0 7745:2|
|0 6696:2 9568:21 ...|
|0 200076:1 200065...|
|0 400026:20 6936:...|
|0 7793:2 9221:7 1...|
|0 4831:1 400026:1...|
|0 400011:1 400026...|
|0 200072:1 6936:1...|
|0 200065:29 4831:...|
|1 400026:20 3632:...|
|0 400026:19 6936:...|
|0 190004:1 9041:2...|
|0 190005:1 100120...|
|0 400026:21 6936:...|
|0 190004:1 3116:3...|
|0 1590:12 8594:56...|
|0 3632:2 9240:1 4...|
|1 400011:1 400026...|
+--------------------+
only showing top 20 rows
How can I convert this DataFrame to libSVM format so I can run logistic regression, as in this example: https://altiscale.zendesk.com/hc/en-us/articles/202627136-Spark-Shell-Examples?
I would say: first of all, don't load the data into a DataFrame at all; just use MLUtils.loadLibSVMFile directly. But if for some reason that is not an option, you can convert the DataFrame to an RDD[String] and apply the same mapping logic that loadLibSVMFile uses:
import org.apache.spark.sql.Row
import org.apache.spark.mllib.regression.LabeledPoint

MyTab
  .map { case Row(line: String) => line }                   // pull the single string column out of each Row
  .map(_.trim)                                              // strip surrounding whitespace
  .filter(line => !(line.isEmpty || line.startsWith("#")))  // drop blank lines and comments
  .map { line => ??? }
Simply copy and paste the relevant parts of the loadLibSVMFile method in place of ???.
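As a sketch, the filled-in pipeline can look like this, with the line-parsing logic adapted from MLUtils.loadLibSVMFile (this assumes Spark 1.x, where DataFrame.map returns an RDD; on Spark 2.x you would call MyTab.rdd first):

```scala
import org.apache.spark.sql.Row
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint

// Each line looks like: "<label> <index1>:<value1> <index2>:<value2> ..."
val parsed = MyTab
  .map { case Row(line: String) => line }
  .map(_.trim)
  .filter(line => !(line.isEmpty || line.startsWith("#")))
  .map { line =>
    val items = line.split(' ')
    val label = items.head.toDouble
    val (indices, values) = items.tail.filter(_.nonEmpty).map { item =>
      val indexAndValue = item.split(':')
      // libSVM indices are 1-based; MLlib sparse vectors are 0-based
      (indexAndValue(0).toInt - 1, indexAndValue(1).toDouble)
    }.unzip
    (label, indices.toArray, values.toArray)
  }

// One extra pass to determine the feature dimension, as loadLibSVMFile does
val numFeatures = parsed.map { case (_, indices, _) =>
  if (indices.isEmpty) 0 else indices.max + 1
}.reduce(math.max)

val data = parsed.map { case (label, indices, values) =>
  LabeledPoint(label, Vectors.sparse(numFeatures, indices, values))
}
```

The resulting data: RDD[LabeledPoint] can then be fed to MLlib's logistic regression, e.g. new LogisticRegressionWithLBFGS().run(data).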
The sklearn.datasets.dump_svmlight_file function has a parameter called zero_based. It is True by default, which means indices start at 0. Spark expects them to start at 1, so set it to False.