Why is "Unable to find encoder for type stored in a Dataset" when creating a dataset of a custom case class?



Spark 2.0 (final) with Scala 2.11.8. The following super simple code yields the compilation error Error:(17, 45) Unable to find encoder for type stored in a Dataset. Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._ Support for serializing other types will be added in future releases.

import org.apache.spark.sql.SparkSession
case class SimpleTuple(id: Int, desc: String)
object DatasetTest {
  val dataList = List(
    SimpleTuple(5, "abc"),
    SimpleTuple(6, "bcd")
  )
  def main(args: Array[String]): Unit = {
    val sparkSession = SparkSession.builder
      .master("local")
      .appName("example")
      .getOrCreate()
    val dataset = sparkSession.createDataset(dataList)
  }
}

Spark Datasets require Encoders for the data type that is about to be stored. For common types (atomics, product types) there is a number of predefined encoders available, but you have to import these from SparkSession.implicits first to make it work:

val sparkSession: SparkSession = ???
import sparkSession.implicits._
val dataset = sparkSession.createDataset(dataList)

Alternatively you can provide an explicit

import org.apache.spark.sql.{Encoder, Encoders}
val dataset = sparkSession.createDataset(dataList)(Encoders.product[SimpleTuple])

or an implicit

implicit val enc: Encoder[SimpleTuple] = Encoders.product[SimpleTuple]
val dataset = sparkSession.createDataset(dataList)

Encoder for the stored type.

Note that Encoders also provides a number of predefined Encoders for atomic types, and Encoders for complex ones can be derived with ExpressionEncoder.

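For example, a minimal sketch of deriving the encoder for SimpleTuple with ExpressionEncoder (assuming sparkSession and dataList as defined above):

import org.apache.spark.sql.Encoder
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder

// derives Encoder[SimpleTuple] from the case class structure via reflection
implicit val simpleTupleEncoder: Encoder[SimpleTuple] = ExpressionEncoder()
val dataset = sparkSession.createDataset(dataList)
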
Further reading:

  • For custom objects which are not covered by the built-in encoders, see How to store custom objects in Dataset?
  • For Row objects you have to provide the Encoder explicitly when trying to map dataframe rows to updated rows, as shown in Encoder error while trying to map dataframe row to updated row; a sketch follows this list
  • For debug cases, the case class has to be defined outside of the Main: https://stackoverflow.com/a/34715827/3535853
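
Regarding the Row case, a minimal sketch (assuming an existing DataFrame df): the Encoder[Row] is built from the schema with RowEncoder and passed explicitly to map:

import org.apache.spark.sql.{DataFrame, Row}
import org.apache.spark.sql.catalyst.encoders.RowEncoder

// df is an existing DataFrame (assumption); map(Row => Row) has no implicit
// encoder, so one is built from the target schema and passed explicitly
val df: DataFrame = ???
val updated = df.map(row => Row.fromSeq(row.toSeq))(RowEncoder(df.schema))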

For other users (your answer is correct), note that it's also important that the case class is defined outside of the object scope. So:

Fails:

import org.apache.spark.sql.SparkSession

object DatasetTest {
  case class SimpleTuple(id: Int, desc: String)
  val dataList = List(
    SimpleTuple(5, "abc"),
    SimpleTuple(6, "bcd")
  )
  def main(args: Array[String]): Unit = {
    val sparkSession = SparkSession.builder
      .master("local")
      .appName("example")
      .getOrCreate()
    val dataset = sparkSession.createDataset(dataList)
  }
}

Adding the implicits, still fails with the same error:

import org.apache.spark.sql.SparkSession

object DatasetTest {
  case class SimpleTuple(id: Int, desc: String)
  val dataList = List(
    SimpleTuple(5, "abc"),
    SimpleTuple(6, "bcd")
  )
  def main(args: Array[String]): Unit = {
    val sparkSession = SparkSession.builder
      .master("local")
      .appName("example")
      .getOrCreate()
    import sparkSession.implicits._
    val dataset = sparkSession.createDataset(dataList)
  }
}

Works:

import org.apache.spark.sql.SparkSession

case class SimpleTuple(id: Int, desc: String)

object DatasetTest {
  val dataList = List(
    SimpleTuple(5, "abc"),
    SimpleTuple(6, "bcd")
  )
  def main(args: Array[String]): Unit = {
    val sparkSession = SparkSession.builder
      .master("local")
      .appName("example")
      .getOrCreate()
    import sparkSession.implicits._
    val dataset = sparkSession.createDataset(dataList)
  }
}

Here's the relevant bug: https://issues.apache.org/jira/browse/SPARK-13540, so hopefully it will be fixed in the next release of Spark 2.

(Edit: it looks like that bugfix is actually in Spark 2.0.0... so I'm not sure why it still fails).

I'd like to clarify with an answer to my own question: if the goal is to define a simple literal Spark DataFrame, rather than use Scala tuples and implicit conversion, the simpler route is to use the Spark API directly like this:

  import org.apache.spark.sql._
  import org.apache.spark.sql.types._
  import scala.collection.JavaConverters._
  val simpleSchema = StructType(
    StructField("a", StringType) ::
    StructField("b", IntegerType) ::
    StructField("c", IntegerType) ::
    StructField("d", IntegerType) ::
    StructField("e", IntegerType) :: Nil)
  val data = List(
    Row("001", 1, 0, 3, 4),
    Row("001", 3, 4, 1, 7),
    Row("001", null, 0, 6, 4),
    Row("003", 1, 4, 5, 7),
    Row("003", 5, 4, null, 2),
    Row("003", 4, null, 9, 2),
    Row("003", 2, 3, 0, 1)
  )
  val df = spark.createDataFrame(data.asJava, simpleSchema)
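
For comparison, a minimal sketch of the tuple-plus-implicits route mentioned above (assuming a SparkSession named spark); note there is no direct tuple equivalent for the null cells in the Int columns:

import spark.implicits._

// tuples of supported primitive types pick up the predefined encoders
val df2 = List(
  ("001", 1, 0, 3, 4),
  ("003", 1, 4, 5, 7)
).toDF("a", "b", "c", "d", "e")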
