I keep getting the error: value toDF is not a member of org.apache.spark.rdd.RDD

I wrote "import sqlContext.implicits._", but it still doesn't work. It works fine in the spark-shell, so why not in this case? I have seen many other ways to convert an RDD to a DataFrame, but most of my code is already written with toDF(). How can I make toDF work? The code producing the error:

import org.apache.spark.ml.evaluation.RegressionEvaluator
import org.apache.spark.ml.recommendation.ALS
import org.apache.spark.ml.tuning.{ParamGridBuilder, CrossValidator}
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.DoubleType
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._
import sys.process._
"rm -f ./ml-1m.zip".!
"wget http://files.grouplens.org/datasets/movielens/ml-1m.zip".!
"ls ./ml-1m.zip".!
"rm -r ./ml-1m".!
"unzip ml-1m.zip".!
"ls ./ml-1m".!
val ratings_raw = sc.textFile("./ml-1m/ratings.dat")
ratings_raw.takeSample(false,10, seed=0).foreach(println)
case class Rating(userId: Int, movieId: Int, rating: Float)
val ratings = ratings_raw.map(x => x.split("::")).map(r => Rating(r(0).toInt, r(1).toInt, r(2).toFloat)).toDF().na.drop()

If you are in the spark-shell, you do not need to create a new SQLContext with

val sqlContext = new org.apache.spark.sql.SQLContext(sc)

You can use the shell's built-in spark session directly:

scala> import spark.implicits._
scala> val ratings_raw = sc.textFile("./ml-1m/ratings.dat")
ratings_raw: org.apache.spark.rdd.RDD[String] = ./ml-1m/ratings.dat MapPartitionsRDD[1] at textFile at <console>:38
scala> case class Rating(userId: Int, movieId: Int, rating: Float)
defined class Rating
scala> val ratings = ratings_raw.map(x => x.split("::")).map(r => Rating(r(0).toInt, r(1).toInt, r(2).toFloat)).toDF().na.drop()
ratings: org.apache.spark.sql.DataFrame = [userId: int, movieId: int ... 1 more field]
scala> ratings
res3: org.apache.spark.sql.DataFrame = [userId: int, movieId: int ... 1 more field]
scala> ratings.printSchema
root
 |-- userId: integer (nullable = false)
 |-- movieId: integer (nullable = false)
 |-- rating: float (nullable = false)
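
As background: toDF is not a method on RDD itself. The implicits import brings an implicit conversion into scope (rddToDatasetHolder in SQLImplicits) that wraps the RDD and exposes toDF, and that conversion needs an Encoder for the element type, which Spark derives for case classes. Here is a minimal sketch of the same conversion spelled out explicitly, assuming the shell's built-in spark and sc:

import org.apache.spark.sql.{DataFrame, Encoders}

case class Rating(userId: Int, movieId: Int, rating: Float)

val ratingsRdd = sc.textFile("./ml-1m/ratings.dat")
  .map(_.split("::"))
  .map(r => Rating(r(0).toInt, r(1).toInt, r(2).toFloat))

// Equivalent to ratingsRdd.toDF(): build the Dataset through the session
// with an explicitly derived Encoder, then widen it to a DataFrame.
val ratings: DataFrame = spark.createDataset(ratingsRdd)(Encoders.product[Rating]).toDF()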

I tried your code and it works fine!

However, I used a SparkSession as shown below

val spark = SparkSession.builder
            .master("local")
            .appName("test1")
            .getOrCreate()

instead of the deprecated

val sqlContext = new org.apache.spark.sql.SQLContext(sc)
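
For completeness, here is a minimal self-contained sketch of the same pipeline as a compiled application rather than a shell session (the object name RatingsApp is made up for illustration). Note that the case class is defined outside main; if it were defined inside the method, Spark could not derive the implicit Encoder that toDF needs:

import org.apache.spark.sql.SparkSession

// Defined at top level so the Encoder can be derived for toDF.
case class Rating(userId: Int, movieId: Int, rating: Float)

object RatingsApp {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .master("local")
      .appName("test1")
      .getOrCreate()

    // The implicits live on the SparkSession instance, so the import
    // must come after the session val is created.
    import spark.implicits._

    val ratings = spark.sparkContext.textFile("./ml-1m/ratings.dat")
      .map(_.split("::"))
      .map(r => Rating(r(0).toInt, r(1).toInt, r(2).toFloat))
      .toDF()
      .na.drop()

    ratings.printSchema()
    spark.stop()
  }
}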
