Spark 2.4.1 fails to read Avro files from HDFS



I have a simple block of code that writes a DataFrame in Avro format and then reads it back, relying on the Avro library that is already built into Spark 2.4.x.

The Avro write succeeds and the files are created in HDFS. However, when I read them back, an AbstractMethodError exception is thrown. Can anyone shed some light on this?

I am using the Spark built-in library by adding the package org.apache.spark:spark-avro_2.11:2.4.1 to the Spark interpreter of my Zeppelin notebook.
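For reference, the rough equivalent of that interpreter setting when building a session outside Zeppelin is pulling the artifact in via spark.jars.packages; this is only a sketch (the app name is arbitrary), and the coordinate shown is simply the one I configured, which has to match the Spark and Scala versions that are actually running:

from pyspark.sql import SparkSession

# Roughly equivalent to adding the artifact in the Zeppelin interpreter config.
# The coordinate must match the Spark/Scala build of the cluster actually running.
spark = (SparkSession.builder
         .appName("avro-hdfs-test")
         .config("spark.jars.packages", "org.apache.spark:spark-avro_2.11:2.4.1")
         .getOrCreate())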

My simple code block:

%pyspark
from pyspark.sql import Row

test_rows = [Row(file_name="test-guangzhou1", topic='camera1', timestamp=1, msg="Test1"),
             Row(file_name="test-guangzhou1", topic='camera1', timestamp=2, msg="Test2"),
             Row(file_name="test-guangzhou3", topic='camera3', timestamp=3, msg="Test3"),
             Row(file_name="test-guangzhou1", topic='camera1', timestamp=4, msg="Test4")]
test_df = spark.createDataFrame(test_rows)

# The write succeeds and the files appear in HDFS; the read below is what fails.
test_df.write.format("avro") \
    .mode('overwrite').save("hdfs:///tmp/bag_parser279181359_3")
loaded_df = spark.read.format("avro").load('hdfs:///tmp/bag_parser279181359_3')
loaded_df.show()

The error message I see:

Py4JJavaError: An error occurred while calling o701.collectToPython.
: java.lang.AbstractMethodError
    at org.apache.spark.sql.execution.FileSourceScanExec.inputRDD$lzycompute(DataSourceScanExec.scala:337)
    at org.apache.spark.sql.execution.FileSourceScanExec.inputRDD(DataSourceScanExec.scala:331)
    at org.apache.spark.sql.execution.FileSourceScanExec.inputRDDs(DataSourceScanExec.scala:357)
    at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:627)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:137)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:133)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:161)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:158)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:133)
    at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:289)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:381)
    at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
    at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:3259)
    at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:3256)
    at org.apache.spark.sql.Dataset$$anonfun$53.apply(Dataset.scala:3373)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:79)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:144)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:74)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3367)
    at org.apache.spark.sql.Dataset.collectToPython(Dataset.scala:3256)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
(<class 'py4j.protocol.Py4JJavaError'>, Py4JJavaError(u'An error occurred while calling o701.collectToPython.n', JavaObject id=o702), <traceback object at 0x7fc031b5c878>)

AbstractMethodError:

Thrown when an application tries to call an abstract method. Normally, this error is caught by the compiler; it can only occur at run time if the definition of some class has incompatibly changed since the currently executing method was last compiled.

AFAIK, you have to investigate which versions were used at compile time and which are actually on the classpath at run time.
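As a first, rough check (just a sketch, not a definitive diagnosis), you can print what the running Spark actually reports and compare it with the spark-avro coordinate you pulled in; on vendor or distribution builds the reported version can differ from the one you expect:

%pyspark
# Compare the runtime Spark version with the spark-avro artifact version.
# spark.jars.packages may be unset if the dependency was added another way.
print(spark.version)
print(spark.sparkContext.getConf().get("spark.jars.packages", "<not set>"))
# If this prints e.g. 2.4.0, then org.apache.spark:spark-avro_2.11:2.4.0 is the
# coordinate to use instead of ...:2.4.1.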

A similar but different question was asked here about using spark-avro on emr-5.28.0. Its cause differs from the one discussed in this question (this question was asked well before emr-5.28.0 was available), but it is similar enough that I wanted to link my answer in case someone lands here because of a comparable stack trace and a comparable problem.
