Problem adding a column to a DataFrame



The following code fails with an AnalysisException (sc.version returns String = 1.6.0):

import sqlContext.implicits._ // required for .toDF outside the spark-shell
case class Person(name: String, age: Long)
val caseClassDF = Seq(Person("Andy", 32)).toDF()
caseClassDF.count()
val seq = Seq(1)
val rdd = sqlContext.sparkContext.parallelize(seq)
val df2 = rdd.toDF("Counts")
df2.count()
// Fails here: df2("Counts") is a column of a different DataFrame
val withCounts = caseClassDF.withColumn("duration", df2("Counts"))

For some reason, it works with a UDF:

import org.apache.spark.sql.functions.udf
import sqlContext.implicits._ // provides the $"col" syntax outside the spark-shell
case class Person(name: String, age: Long, day: Int)
val caseClassDF = Seq(Person("Andy", 32, 1), Person("Raman", 22, 1), Person("Rajan", 40, 1), Person("Andy", 42, 2), Person("Raman", 42, 2), Person("Rajan", 50, 2)).toDF()
val calculateCounts = udf((x: Long, y: Int) => x + y)
val df1 = caseClassDF.withColumn("Counts", calculateCounts($"age", $"day"))
df1.show
+-----+---+---+------+
| name|age|day|Counts|
+-----+---+---+------+
| Andy| 32|  1|    33|
|Raman| 22|  1|    23|
|Rajan| 40|  1|    41|
| Andy| 42|  2|    44|
|Raman| 42|  2|    44|
|Rajan| 50|  2|    52|
+-----+---+---+------+
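
Note that the UDF is incidental here: what matters is that both input columns ($"age" and $"day") come from caseClassDF itself. A minimal sketch of the same computation without a UDF (df1b is a name introduced here for illustration):

// Both columns belong to caseClassDF, so withColumn accepts the expression directly.
val df1b = caseClassDF.withColumn("Counts", $"age" + $"day")
df1b.show()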

Answer: In caseClassDF.withColumn("duration", df2("Counts")), the column argument must come from the DataFrame the method is called on (in your case, caseClassDF). Spark does not allow withColumn to reference a column of a different DataFrame.

PS: I am on Spark 1.6; I am not sure whether Spark 2.x behaves the same way.
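
If the goal is to append df2's Counts to caseClassDF, one workaround that works on Spark 1.6 is to attach an explicit row index to both DataFrames and join on it. A minimal sketch, assuming both DataFrames have the same number of rows and that row order is what should line them up (addIndex and row_idx are names introduced here for illustration):

import org.apache.spark.sql.{DataFrame, Row}
import org.apache.spark.sql.types.{LongType, StructField, StructType}

// Hypothetical helper: append a stable row index column to a DataFrame.
def addIndex(df: DataFrame): DataFrame = {
  val schema = StructType(df.schema.fields :+ StructField("row_idx", LongType, nullable = false))
  val indexed = df.rdd.zipWithIndex.map { case (row, idx) => Row.fromSeq(row.toSeq :+ idx) }
  df.sqlContext.createDataFrame(indexed, schema)
}

// Join positionally on the index, rename the borrowed column, drop the index.
val withCounts = addIndex(caseClassDF)
  .join(addIndex(df2), "row_idx")
  .withColumnRenamed("Counts", "duration")
  .drop("row_idx")
withCounts.show()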
