Error performing an inner join on Spark 2.0.1 DataFrames

Has anyone else run into this problem, and does anyone have an idea of how to fix it?

I have been trying to update my code to use Spark 2.0.1 and Scala 2.11. Everything worked fine on Spark 1.6.0 with Scala 2.10. I have a straightforward DataFrame-to-DataFrame inner join that now returns an error. The data comes from AWS RDS Aurora. Note that the foo DataFrame below actually has 92 columns, not the two I am showing; the problem persists even with just the two columns.
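For context, here is a minimal sketch of how a DataFrame like foo is read from Aurora over JDBC; the endpoint, credentials, and subquery here are placeholders rather than my actual code (the real subquery selects all 92 columns):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("JoinRepro").getOrCreate()

// Placeholder connection details; the physical plan below shows the real
// relation is also a subquery of the form "(SELECT ...) as x".
val foo = spark.read
  .format("jdbc")
  .option("url", "jdbc:mysql://my-aurora-host:3306/mydb")
  .option("dbtable", "(SELECT `Transaction ID`, `BIN` FROM transactions) as x")
  .option("user", "username")
  .option("password", "password")
  .load()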

Relevant information:

DataFrame 1 with schema

foo.show()
+--------------------+------+
|      Transaction ID|   BIN|
+--------------------+------+
|               bbBW0|134769|
|               CyX50|173622|
+--------------------+------+
foo.printSchema()
root
|-- Transaction ID: string (nullable = true)
|-- BIN: string (nullable = true)

DataFrame 2 with schema

bar.show()
+--------------------+-----------------+-------------------+
|              TranId|       Amount_USD|     Currency_Alpha|
+--------------------+-----------------+-------------------+
|               bbBW0|            10.99|                USD|
|               CyX50|           438.53|                USD|
+--------------------+-----------------+-------------------+
bar.printSchema()
root
|-- TranId: string (nullable = true)
|-- Amount_USD: string (nullable = true)
|-- Currency_Alpha: string (nullable = true)

Joining the DataFrames, with explain

val asdf = foo.join(bar, foo("Transaction ID") === bar("TranId"))
asdf.explain()
== Physical Plan ==
*BroadcastHashJoin [Transaction ID#0], [TranId#202], Inner, BuildRight
:- *Scan JDBCRelation((SELECT
        ...
        I REMOVED A BUNCH OF LINES FROM THIS PRINT OUT
        ...
      ) as x) [Transaction ID#0,BIN#8] PushedFilters: [IsNotNull(Transaction ID)], ReadSchema: struct<Transaction ID:string,BIN:string>
+- BroadcastExchange HashedRelationBroadcastMode(List(input[0, string, false]))
   +- *Filter isnotnull(TranId#202)
      +- InMemoryTableScan [TranId#202, Amount_USD#203, Currency_Alpha#204], [isnotnull(TranId#202)]
         :  +- InMemoryRelation [TranId#202, Amount_USD#203, Currency_Alpha#204], true, 10000, StorageLevel(disk, memory, deserialized, 1 replicas)
         :     :  +- Scan ExistingRDD[TranId#202,Amount_USD#203,Currency_Alpha#204]

The error I am now getting is:

16/10/18 11:36:50 ERROR Executor: Exception in task 0.0 in stage 6.0 (TID 6)
java.sql.SQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'ID IS NOT NULL)' at line 54

The full stack trace can be seen here: http://pastebin.com/C9bg2HFt

Nowhere in my code, nor in the JDBC query that pulls the data from the database, do I have ID IS NOT NULL). I spent a lot of time googling and found a Spark commit that adds null filters to the query plan for joins: https://git1-us-west.apache.org/repos/asf?p=spark.git;a=commit;h=ef770031
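My best guess at the connection (an assumption based on the plan above, not something I have confirmed in the Spark source): that commit makes the planner add an IsNotNull filter on each join key, the JDBC source pushes the filter down to MySQL, and in 2.0.1 the pushed filter is rendered without backticks. Since my join key is named Transaction ID, with a space, the generated WHERE clause would be WHERE (Transaction ID IS NOT NULL), and MySQL would fail exactly at 'ID IS NOT NULL)'. If that is right, aliasing the column inside the JDBC subquery, so the relation never exposes a name containing a space, should sidestep the pushdown problem. Connection details below are again placeholders:

// Hypothetical workaround: alias the spaced column in the subquery itself, so
// any filter Spark pushes down references TransactionId, which is valid
// unquoted MySQL.
val fooAliased = spark.read
  .format("jdbc")
  .option("url", "jdbc:mysql://my-aurora-host:3306/mydb")
  .option("dbtable",
    "(SELECT `Transaction ID` AS TransactionId, `BIN` FROM transactions) as x")
  .option("user", "username")
  .option("password", "password")
  .load()

val joined = fooAliased.join(bar, fooAliased("TransactionId") === bar("TranId"))
joined.explain() // PushedFilters should now show IsNotNull(TransactionId)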

Wondering if you have tried the following:

val dfRenamed = bar.withColumnRenamed("TranId", "Transaction ID")
val newDF = foo.join(dfRenamed, Seq("Transaction ID"), "inner")
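One caveat with this, as an untested observation: the rename only touches bar's side, and Catalyst may still rewrite the pushed-down IsNotNull through the projection back to foo's underlying Transaction ID column. A variant worth trying is renaming in the other direction, so the join key on the JDBC side no longer contains a space; checking explain() shows whether the spaced name still appears in PushedFilters:

// Variant of the rename idea, going the other direction (untested sketch).
val fooRenamed = foo.withColumnRenamed("Transaction ID", "TranId")
val newDF2 = fooRenamed.join(bar, Seq("TranId"), "inner")
newDF2.explain() // verify whether PushedFilters still references Transaction ID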
