Scala - How to filter an RDD org.apache.spark.rdd.RDD[String]



I have an RDD that I need to filter by price. This is the RDD:

id   category_id   product_name                                    price
1    2             Quest Q64 10 FT. x 10 FT. Slant Leg Instant U   59.98
2    2             Under Armour Men's Highlight MC Football Clea   129.99
3    2             Under Armour Men's Renegade D Mid Football Cl   89.99
4    2             Under Armour Men's Renegade D Mid Football Cl   89.99
5    2             Riddell Youth Revolution Speed Custom Footbal   199.99
6    2             Jordan Men's VI Retro TD Football Cleat         134.99
7    2             Schutt Youth Recruit Hybrid Custom Football H   99.99
8    2             Nike Men's Vapor Carbon Elite TD Football Cle   129.99
9    2             Nike Adult Vapor Jet 3.0 Receiver Gloves        50.0

I am getting the following error:

scala> val rdd2 = rdd1.map(_.split("\t")).map(c => c(3) < 100)
<console>:44: error: type mismatch;
 found   : Int(100)
 required: String
       val rdd2 = rdd1.map(_.split("\t")).map(c => c(3) < 100)
The schema of the corresponding dataframe is:

df.printSchema()

root
 |-- id: integer (nullable = true)
 |-- category_id: integer (nullable = true)
 |-- product_name: string (nullable = true)
 |-- price: double (nullable = true)
 |-- image: string (nullable = true)

You can simply read the file with sparkContext.textFile and then do the following computation:

val rdd1 = sparkSession.sparkContext.textFile("text file location")
val rdd2 = rdd1.map(_.split("\t"))
  .filter(c => !"price".equalsIgnoreCase(c(3).trim))  // skip the header row
  .filter(c => c(3).toDouble < 100)                   // compare as Double, not as String
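For reference, here is a minimal, self-contained sketch of the same approach that uses parallelize on a few of the sample rows instead of a file, so the filter can be verified locally (the session name spark and the local master are assumptions of this sketch):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("rdd-filter-sketch").getOrCreate()

val sample = Seq(
  "id\tcategory_id\tproduct_name\tprice",  // header line, as it appears in the file
  "1\t2\tQuest Q64 10 FT. x 10 FT. Slant Leg Instant U\t59.98",
  "2\t2\tUnder Armour Men's Highlight MC Football Clea\t129.99"
)
val rdd1 = spark.sparkContext.parallelize(sample)

val rdd2 = rdd1.map(_.split("\t"))
  .filter(c => !"price".equalsIgnoreCase(c(3).trim))  // drop the header row
  .filter(c => c(3).toDouble < 100)                   // numeric comparison

rdd2.map(_.mkString(" | ")).collect().foreach(println)
// prints only the 59.98 row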

If you already have a dataframe, there is no need to convert it back to an RDD for the computation. You can use filter on the dataframe itself:

val finaldf = df.filter($"price" =!= "price").filter($"price".cast(DoubleType) < 100)
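Note that the $ column syntax and the =!= operator come from the session implicits, and DoubleType needs its own import, so the line above assumes something like the following is in scope (the name spark for the SparkSession is an assumption):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.DoubleType

val spark = SparkSession.builder().getOrCreate()
import spark.implicits._  // enables $"price" and =!=

The first filter only matters when the header line was read in as an ordinary row; if the dataframe was loaded with a real schema (as in your printSchema, where price is already double), both the header guard and the cast can be dropped.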

Given your df.printSchema(), you can filter the table with a condition on the price column:

df.filter(df.col("price") < 100).show
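With the sample data above, this keeps ids 1, 3, 4, 7 and 9. If you prefer SQL-style conditions, the same filter can also be written with the expression-string overload of where, which is equivalent:

df.where("price < 100").show()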
