java.lang.UnsupportedOperationException: fieldIndex on a Row without schema is undefined



The following code throws an exception caused by java.lang.UnsupportedOperationException: fieldIndex on a Row without schema is undefined. It happens when groupByKey and flatMapGroups are invoked, with an ExpressionEncoder, on the DataFrame that was itself returned from a previous groupByKey and flatMapGroups call.

Logical flow: originalDf -> groupByKey -> flatMapGroups -> groupByKey -> flatMapGroups -> show

import org.apache.spark.sql.catalyst.encoders.RowEncoder
import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.types.{IntegerType, StructField, StructType}
import scala.collection.mutable.ListBuffer

object Test {
  def main(args: Array[String]): Unit = {
    val values = List(List("1", "One"), List("1", "Two"), List("2", "Three"), List("2", "4")).map(x => (x(0), x(1)))
    val session = SparkSession.builder.config("spark.master", "local").getOrCreate
    import session.implicits._
    val dataFrame = values.toDF

    dataFrame.show()
    dataFrame.printSchema()

    val newSchema = StructType(dataFrame.schema.fields
      ++ Array(
        StructField("Count", IntegerType, false)
      )
    )
    val expr = RowEncoder.apply(newSchema)

    // First grouping: works, because the rows coming from the DataFrame carry a schema
    val tranform = dataFrame.groupByKey(row => row.getAs[String]("_1")).flatMapGroups((key, inputItr) => {
      val inputSeq = inputItr.toSeq
      val length = inputSeq.size
      val listBuff = new ListBuffer[Row]()
      var counter: Int = 0
      for (i <- 0 until length) {
        counter += 1
      }
      for (i <- 0 until length) {
        val x = inputSeq(i)
        listBuff += Row.fromSeq(x.toSeq ++ Array[Int](counter))
      }
      listBuff.iterator
    })(expr)
    tranform.show

    val newSchema1 = StructType(tranform.schema.fields
      ++ Array(
        StructField("Count1", IntegerType, false)
      )
    )
    val expr1 = RowEncoder.apply(newSchema1)

    // Second grouping: the getAs[String]("_1") below throws
    // java.lang.UnsupportedOperationException: fieldIndex on a Row without schema is undefined
    val tranform2 = tranform.groupByKey(row => row.getAs[String]("_1")).flatMapGroups((key, inputItr) => {
      val inputSeq = inputItr.toSeq
      val length = inputSeq.size
      val listBuff = new ListBuffer[Row]()
      var counter: Int = 0
      for (i <- 0 until length) {
        counter += 1
      }
      for (i <- 0 until length) {
        val x = inputSeq(i)
        listBuff += Row.fromSeq(x.toSeq ++ Array[Int](counter))
      }
      listBuff.iterator
    })(expr1)
    tranform2.show
  }
}

Here is the stack trace:

18/11/21 19:39:03 WARN TaskSetManager: Lost task 144.0 in stage 11.0 (TID 400, localhost, executor driver): java.lang.UnsupportedOperationException: fieldIndex on a Row without schema is undefined.
at org.apache.spark.sql.Row$class.fieldIndex(Row.scala:342)
at org.apache.spark.sql.catalyst.expressions.GenericRow.fieldIndex(rows.scala:166)
at org.apache.spark.sql.Row$class.getAs(Row.scala:333)
at org.apache.spark.sql.catalyst.expressions.GenericRow.getAs(rows.scala:166)
at com.quantuting.sparkutils.main.Test$$anonfun$4.apply(Test.scala:59)
at com.quantuting.sparkutils.main.Test$$anonfun$4.apply(Test.scala:59)
at org.apache.spark.sql.execution.AppendColumnsWithObjectExec$$anonfun$9$$anonfun$apply$3.apply(objects.scala:300)
at org.apache.spark.sql.execution.AppendColumnsWithObjectExec$$anonfun$9$$anonfun$apply$3.apply(objects.scala:298)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:149)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

How can this code be fixed?

The reported problem can be avoided by replacing the field-name version of the getAs[T] method (in the function passed to groupByKey):

groupByKey(row => row.getAs[String]("_1"))

with the field-position version:

groupByKey(row => row.getAs[String](fieldIndexMap("_1")))

where fieldIndexMap maps field names to their corresponding field indices:

val fieldIndexMap = tranform.schema.fieldNames.zipWithIndex.toMap
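
The positional overload avoids the failure because Row.getAs[T](i: Int) indexes straight into the row's values, whereas Row.getAs[T](fieldName: String) first calls fieldIndex(fieldName), which is exactly the method that is undefined on a schema-less row. A minimal standalone illustration:

import org.apache.spark.sql.Row

val r = Row.fromSeq(Seq("1", "One", 2)) // a GenericRow: values only, no schema
r.getAs[String](0)                      // fine: positional access never consults the schema
// r.getAs[String]("_1")                // throws: fieldIndex on a Row without schema is undefined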

As a side note, your flatMapGroups function can be simplified to the following (the counting loop just recomputes length, so it can be dropped):

val tranform2 = tranform.groupByKey(_.getAs[String](fieldIndexMap("_1")))
  .flatMapGroups((key, inputItr) => {
    val inputSeq = inputItr.toSeq
    val length = inputSeq.size
    inputSeq.map(r => Row.fromSeq(r.toSeq :+ length))
  })(expr1)

The inconsistent behavior between applying the original groupByKey/flatMapGroups approach to dataFrame versus to tranform is apparently related to how these methods handle a DataFrame versus a Dataset[Row]. Judging from the AppendColumnsWithObjectExec frames in the stack trace, the second groupByKey applies its key function directly to the Row objects produced by your flatMapGroups function, so those rows never pass back through the encoder that would attach a schema.
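
To see the asymmetry concretely, the same field-name lookup that succeeds against dataFrame reproduces the failure against tranform. A sketch using the variables from the question (assuming the plan keeps the object-producing flatMapGroups chained directly into the second grouping, as the stack trace indicates):

// Succeeds: rows deserialized from a plain DataFrame come back with a schema attached
dataFrame.groupByKey(_.getAs[String]("_1")).count().show()

// Throws the exception above: the key function receives the schema-less GenericRow
// objects that the previous flatMapGroups created via Row.fromSeq
tranform.groupByKey(_.getAs[String]("_1")).count().show()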

Resolution received from the Spark project's JIRA: https://issues.apache.org/jira/browse/SPARK-26436

This issue is caused by how you create the Row:

listBuff += Row.fromSeq(x.toSeq ++ Array[Int](counter))

Row.fromSeq creates a GenericRow, and GenericRow's fieldIndex is not implemented because a GenericRow has no schema.

Changing the line to create a GenericRowWithSchema solves it:

listBuff += new GenericRowWithSchema((x.toSeq ++ Array[Int](counter)).toArray, newSchema)
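
Note that GenericRowWithSchema lives in an internal Catalyst package and needs its own import. A sketch of the adjusted loop in the first flatMapGroups, with everything else from the question unchanged (the analogous change with newSchema1 applies to the second flatMapGroups):

import org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema

for (i <- 0 until length) {
  val x = inputSeq(i)
  // attaching newSchema makes fieldIndex, and therefore getAs by name, work downstream
  listBuff += new GenericRowWithSchema((x.toSeq ++ Array[Int](counter)).toArray, newSchema)
}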
