Removing duplicate columns in Spark with a SQL expression



I think this question is similar to existing ones, but it does not seem to have been asked before.

In Spark, how can we run a SQL query so that the duplicate join column is removed?

For example, take a SQL query run on Spark:

select a.*, b.*
from a
left outer join b
on a.id = b.id

In this case, how can the duplicate column b.id be removed?

I know we can add extra steps in Spark, such as providing an alias or renaming the column, but is there a faster way to remove the duplicate column just by writing the SQL query?
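One candidate is SQL's USING clause, which joins on a same-named column and emits it only once. A minimal sketch, assuming a Spark 2.x spark-shell session (where spark and the implicits behind toDF are predefined) and that Spark's SQL dialect accepts USING:

// Register two small tables as temporary views, then join with USING.
val a = Seq((1, "x"), (2, "y")).toDF("id", "name")
val b = Seq((1, 24), (2, 23)).toDF("id", "age")
a.createOrReplaceTempView("a")
b.createOrReplaceTempView("b")
// USING (id) keeps a single id column in the result.
spark.sql("SELECT * FROM a LEFT OUTER JOIN b USING (id)").show()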

I have two DataFrames, df1 and df2, and will perform a join based on the id column.

scala> val df1  = Seq((1,"mahesh"), (2,"shivangi"),(3,"manoj")).toDF("id", "name")
df1: org.apache.spark.sql.DataFrame = [id: int, name: string]
scala> df1.show
+---+--------+
| id|    name|
+---+--------+
|  1|  mahesh|
|  2|shivangi|
|  3|   manoj|
+---+--------+
scala> val df2  = Seq((1,24), (2,23),(3,24)).toDF("id", "age")
df2: org.apache.spark.sql.DataFrame = [id: int, age: int]
scala> df2.show
+---+---+
| id|age|
+---+---+
|  1| 24|
|  2| 23|
|  3| 24|
+---+---+

Here is an incorrect solution, in which the join column is defined as a predicate:

df1("id") === df2("id")

The incorrect result is that the id column is duplicated in the joined DataFrame:

scala> df1.join(df2, df1("id") === df2("id"), "left").show
+---+--------+---+---+
| id|    name| id|age|
+---+--------+---+---+
|  1|  mahesh|  1| 24|
|  2|shivangi|  2| 23|
|  3|   manoj|  3| 24|
+---+--------+---+---+
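The duplication is more than cosmetic: an unqualified reference to id can no longer be resolved against the joined DataFrame. A short sketch of the assumed behavior (Spark rejects ambiguous column references with an AnalysisException):

val joined = df1.join(df2, df1("id") === df2("id"), "left")
// joined.select("id")           // fails: Reference 'id' is ambiguous
joined.select(df1("id")).show()  // disambiguating via the source DataFrame works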

The correct solution is to define the join columns as a sequence of strings, Seq("id"), rather than as an expression. The joined DataFrame then has no duplicate columns:

scala> df1.join(df2, Seq("id"),"left").show
+---+--------+---+
| id|    name|age|
+---+--------+---+
|  1|  mahesh| 24|
|  2|shivangi| 23|
|  3|   manoj| 24|
+---+--------+---+

For more information, you can refer to the Spark API documentation quoted below.

Since Spark 1.4.0, join can be used in two ways. With the first way, the join columns appear only once in the output.

/**
 * Inner equi-join with another [[DataFrame]] using the given columns.
 *
 * Different from other join functions, the join columns will only appear once in the output,
 * i.e. similar to SQL's `JOIN USING` syntax.
 *
 * {{{
 *   // Joining df1 and df2 using the columns "user_id" and "user_name"
 *   df1.join(df2, Seq("user_id", "user_name"))
 * }}}
 *
 * Note that if you perform a self-join using this function without aliasing the input
 * [[DataFrame]]s, you will NOT be able to reference any columns after the join, since
 * there is no way to disambiguate which side of the join you would like to reference.
 *
 * @param right Right side of the join operation.
 * @param usingColumns Names of the columns to join on. These columns must exist on both sides.
 * @group dfops
 * @since 1.4.0
 */
def join(right: DataFrame, usingColumns: Seq[String]): DataFrame = {
  join(right, usingColumns, "inner")
}
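The second way is the expression form shown earlier. When that form is unavoidable (for example, when the key columns have different names on each side), the duplicate can still be removed after the join; a sketch, assuming the drop(col: Column) overload available in Spark 2.x:

// Expression join, then drop the right-hand copy of the key column.
df1.join(df2, df1("id") === df2("id"), "left")
  .drop(df2("id"))
  .show()

Passing the Column reference (rather than the name "id") pins down which side's copy is removed.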
