How do I convert a simple DataFrame to a Dataset with a case class in Spark Scala?



I am trying to convert a simple DataFrame to a Dataset, following the example in the Spark SQL programming guide: https://spark.apache.org/docs/latest/sql-programming-guide.html

case class Person(name: String, age: Int)
import spark.implicits._
val path = "examples/src/main/resources/people.json"
val peopleDS = spark.read.json(path).as[Person]
peopleDS.show()

But I get the following exception:

Exception in thread "main" org.apache.spark.sql.AnalysisException: Cannot up cast `age` from bigint to int as it may truncate
The type path of the target object is:
- field (class: "scala.Int", name: "age")
- root class: ....

Can anyone help?

EDIT: I noticed that it works with Long instead of Int! Why is that?

Also:

val primitiveDS = Seq(1,2,3).toDS()
val augmentedDS = primitiveDS.map(i => ("var_" + i.toString, (i + 1).toLong))
augmentedDS.show()
augmentedDS.as[Person].show()

This prints:

+-----+---+
|   _1| _2|
+-----+---+
|var_1|  2|
|var_2|  3|
|var_3|  4|
+-----+---+
Exception in thread "main"
org.apache.spark.sql.AnalysisException: cannot resolve '`name`' given input columns: [_1, _2];

Can anyone help me understand what is going on here?

If you change Int to Long (or BigInt), it works fine:

case class Person(name: String, age: Long)
import spark.implicits._
val path = "examples/src/main/resources/people.json"
val peopleDS = spark.read.json(path).as[Person]
peopleDS.show()

Output:

+----+-------+
| age|   name|
+----+-------+
|null|Michael|
|  30|   Andy|
|  19| Justin|
+----+-------+

EDIT: By default, spark.read.json parses numeric fields as Long, which is the safer choice. You can change the column type afterwards with a cast or a UDF, as sketched below.
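For example, here is a minimal sketch of the cast approach. PersonInt is a hypothetical variant of the case class that keeps age as Int; it assumes spark, path, and import spark.implicits._ from the code above are in scope:

import org.apache.spark.sql.functions.col

// Hypothetical variant of Person that keeps age as Int.
case class PersonInt(name: String, age: Int)

// Downcast the inferred bigint column to int before applying the encoder.
val peopleIntDS = spark.read.json(path)
  .withColumn("age", col("age").cast("int"))
  .as[PersonInt]
peopleIntDS.show()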

EDIT 2:

To answer your second question: you need to name the columns correctly before you can convert to Person:

val primitiveDS = Seq(1, 2, 3).toDS()
val augmentedDS = primitiveDS
  .map(i => ("var_" + i.toString, (i + 1).toLong))
  .withColumnRenamed("_1", "name")
  .withColumnRenamed("_2", "age")
augmentedDS.as[Person].show()

Output:

+-----+---+
| name|age|
+-----+---+
|var_1|  2|
|var_2|  3|
|var_3|  4|
+-----+---+
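Equivalently, you can rename both tuple columns in one step with toDF before applying the encoder; a minimal variation of the code above (the val name augmentedDS2 is just for illustration):

val augmentedDS2 = primitiveDS
  .map(i => ("var_" + i.toString, (i + 1).toLong))
  .toDF("name", "age")
  .as[Person]
augmentedDS2.show()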

This is how to create a Dataset from a case class:

case class Person(name: String, age: Long) 

Keep the case class outside of (i.e., at the top level, not nested inside) the class that holds the following code:

val primitiveDS = Seq(1,2,3).toDS()
val augmentedDS = primitiveDS.map(i => Person("var_" + i.toString, (i + 1).toLong))
augmentedDS.show()
augmentedDS.as[Person].show()
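Note that after the map, augmentedDS is already a Dataset[Person], so the final .as[Person] is a no-op. For context, here is a minimal sketch of the file layout this answer refers to, with a hypothetical Demo object: the case class is defined at the top level of the file, not inside the object, so that spark.implicits._ can derive an Encoder[Person] for it.

import org.apache.spark.sql.SparkSession

// Top level: defining Person here (not nested inside Demo) lets
// Spark derive an Encoder[Person] via spark.implicits._
case class Person(name: String, age: Long)

object Demo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("dataset-demo")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val augmentedDS = Seq(1, 2, 3).toDS()
      .map(i => Person("var_" + i.toString, (i + 1).toLong))
    augmentedDS.show()

    spark.stop()
  }
}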

Hope this helps.
