Does Spark 2.x break SQL join syntax?

When I submit a complex SQL join query, I usually give one or both operands a short alias to make my intent clearer. For example, the following two queries:

SELECT *
FROM transactions
JOIN accounts ON transactions.cardnumber=accounts.cardnumber

SELECT *
FROM transactions AS left
JOIN accounts ON left.cardnumber=accounts.cardnumber

should have the same effect.
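
For reference, the setup can be sketched roughly as follows with the Spark 2.x API; the Transaction and Account classes here are assumptions, with field names inferred from the analyzed plan in the error further down.

import org.apache.spark.sql.SparkSession

// Hypothetical bean-style classes; field names inferred from the
// analyzed plan in the error below.
case class Transaction(cardnumber: String, name: String, amount: Double,
                       purchase_date: String, sin: String, sku: String)
case class Account(accountnumber: String, cardnumber: String, name: String,
                   amount: Double, purchase_date: String, sin: String, sku: String)

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

Seq.empty[Transaction].toDS.createOrReplaceTempView("transactions")
Seq.empty[Account].toDS.createOrReplaceTempView("accounts")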

I tested both queries on Spark 1.6.3 and both work. However, after migrating to Spark 2.2.1, the second query throws the following error:

org.apache.spark.sql.AnalysisException: cannot resolve '`left.cardnumber`' given input columns: [name, sku, sin, accountnumber, purchase_date, sin, cardnumber, purchase_date, cardnumber, amount, sku, name, amount]; line 4 pos 17;
'Project [*]
+- 'Join LeftOuter, ('left.cardnumber = cardnumber#77)
   :- SubqueryAlias AS
   :  +- SubqueryAlias transactions
   :     +- SerializeFromObject [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Transaction, true])).cardnumber, true) AS cardnumber#53, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Transaction, true])).name, true) AS name#54, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Transaction, true])).amount AS amount#55, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Transaction, true])).purchase_date, true) AS purchase_date#56, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Transaction, true])).sin, true) AS sin#57, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Transaction, true])).sku, true) AS sku#58]
   :        +- ExternalRDD [obj#52]
   +- SubqueryAlias accounts
      +- SerializeFromObject [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Account, true])).accountnumber, true) AS accountnumber#76, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Account, true])).cardnumber, true) AS cardnumber#77, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Account, true])).name, true) AS name#78, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Account, true])).amount AS amount#79, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Account, true])).purchase_date, true) AS purchase_date#80, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Account, true])).sin, true) AS sin#81, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Account, true])).sku, true) AS sku#82]
         +- ExternalRDD [obj#75]
    at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:88)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:85)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:288)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:286)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$transformExpressionsUp$1.apply(QueryPlan.scala:268)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$transformExpressionsUp$1.apply(QueryPlan.scala:268)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpression$1(QueryPlan.scala:279)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1(QueryPlan.scala:289)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1(QueryPlan.scala:290)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$6.apply(QueryPlan.scala:298)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.mapExpressions(QueryPlan.scala:298)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsUp(QueryPlan.scala:268)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:85)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:78)
    at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:127)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:126)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:126)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:126)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:78)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:91)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:52)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:67)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:632)
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:691)

What is the cause of this failure, and how can I fix it?

The problem is that you use a reserved keyword (LEFT) as an alias, and as a result the query is interpreted as:

SELECT * 
FROM transactions AS ``
LEFT JOIN  accounts ON left.cardnumber = accounts.cardnumber

i.e. with an empty alias. In fact, the following query:

SELECT * 
FROM transactions AS ``
LEFT JOIN  accounts ON ``.cardnumber = accounts.cardnumber 

while not exactly equivalent, works just fine. This is standard SQL parsing behavior, not a bug.
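
As an aside, the same join written against the DataFrame API sidesteps the SQL parser entirely, so there is no keyword to collide with. A minimal sketch, assuming the transactions and accounts views already exist and spark.implicits._ is in scope:

// The alias here is a plain string and never passes through the SQL
// parser, so the name "left" cannot collide with the LEFT JOIN keyword.
spark.table("transactions").as("left")
  .join(spark.table("accounts").as("right"),
        $"left.cardnumber" === $"right.cardnumber")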

Pick a different name and everything works:

Seq[Int]().toDF("cardnumber").createOrReplaceTempView("accounts")
Seq[Int]().toDF("cardnumber").createOrReplaceTempView("transactions")
spark.sql("""SELECT *
             FROM transactions AS l
             JOIN accounts AS r
             ON l.cardnumber = r.cardnumber""")

Quoting the alias also works:

spark.sql("""SELECT *
             FROM transactions AS `left`
             JOIN accounts AS r
             ON left.cardnumber = r.cardnumber""")
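
If the alias comes from user input or generated SQL, a defensive option is to backtick-quote every identifier before splicing it into the query string. A small helper sketch; the escaping rule assumed here doubles any embedded backtick, per Spark's quoted-identifier syntax:

// Backtick-quote an identifier for Spark SQL; an embedded backtick is
// escaped by doubling it.
def quoteIdentifier(name: String): String =
  "`" + name.replace("`", "``") + "`"

val alias = "left" // unquoted, this would be swallowed by the LEFT JOIN keyword
spark.sql(s"""SELECT *
             |FROM transactions AS ${quoteIdentifier(alias)}
             |JOIN accounts AS r
             |ON ${quoteIdentifier(alias)}.cardnumber = r.cardnumber""".stripMargin)

Parsed this way, the query no longer depends on which alias the caller picks.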
