Including null values in an Apache Spark join



I would like to include null values in an Apache Spark join. By default, Spark does not include rows whose join keys are null.

Here is the default Spark behavior:

val numbersDf = Seq(
  ("123"),
  ("456"),
  (null),
  ("")
).toDF("numbers")
val lettersDf = Seq(
  ("123", "abc"),
  ("456", "def"),
  (null, "zzz"),
  ("", "hhh")
).toDF("numbers", "letters")
val joinedDf = numbersDf.join(lettersDf, Seq("numbers"))

Here is the output of joinedDf.show():

+-------+-------+
|numbers|letters|
+-------+-------+
|    123|    abc|
|    456|    def|
|       |    hhh|
+-------+-------+

Here is the output I would like:

+-------+-------+
|numbers|letters|
+-------+-------+
|    123|    abc|
|    456|    def|
|       |    hhh|
|   null|    zzz|
+-------+-------+

In standard SQL, null = null evaluates to null rather than true, so an ordinary equality join silently drops rows with null keys. Spark provides a special null-safe equality operator, <=>, which treats two nulls as equal:

numbersDf
  .join(lettersDf, numbersDf("numbers") <=> lettersDf("numbers"))
  .drop(lettersDf("numbers"))
+-------+-------+
|numbers|letters|
+-------+-------+
|    123|    abc|
|    456|    def|
|   null|    zzz|
|       |    hhh|
+-------+-------+

Be careful not to use this with Spark 1.5 or earlier. Prior to Spark 1.6, it required a Cartesian product (SPARK-11111 - Fast null-safe join).
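
If you want to verify which strategy your Spark version chooses, you can inspect the physical plan; on 1.6+ a null-safe equi-join should show up as a SortMergeJoin or BroadcastHashJoin rather than a CartesianProduct (a quick check, reusing the DataFrames defined above):

// Print the physical plan instead of executing the join
numbersDf
  .join(lettersDf, numbersDf("numbers") <=> lettersDf("numbers"))
  .explain()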

In Spark 2.3.0 or later, you can use Column.eqNullSafe in PySpark:

numbers_df = sc.parallelize([
    ("123", ), ("456", ), (None, ), ("", )
]).toDF(["numbers"])
letters_df = sc.parallelize([
    ("123", "abc"), ("456", "def"), (None, "zzz"), ("", "hhh")
]).toDF(["numbers", "letters"])
numbers_df.join(letters_df, numbers_df.numbers.eqNullSafe(letters_df.numbers))
+-------+-------+-------+
|numbers|numbers|letters|
+-------+-------+-------+
|    456|    456|    def|
|   null|   null|    zzz|
|       |       |    hhh|
|    123|    123|    abc|
+-------+-------+-------+

and %<=>% in SparkR:

numbers_df <- createDataFrame(data.frame(numbers = c("123", "456", NA, "")))
letters_df <- createDataFrame(data.frame(
  numbers = c("123", "456", NA, ""),
  letters = c("abc", "def", "zzz", "hhh")
))
head(join(numbers_df, letters_df, numbers_df$numbers %<=>% letters_df$numbers))
  numbers numbers letters
1     456     456     def
2    <NA>    <NA>     zzz
3                     hhh
4     123     123     abc

With SQL (Spark 2.2.0+), you can use IS NOT DISTINCT FROM:

SELECT * FROM numbers JOIN letters 
ON numbers.numbers IS NOT DISTINCT FROM letters.numbers
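
For the SQL version to run, both DataFrames first need to be registered as temporary views (a minimal sketch in Scala, assuming a Spark 2.x SparkSession named spark):

// Make the DataFrames visible to Spark SQL under the names used in the query
numbersDf.createOrReplaceTempView("numbers")
lettersDf.createOrReplaceTempView("letters")
spark.sql(
  """SELECT * FROM numbers JOIN letters
     ON numbers.numbers IS NOT DISTINCT FROM letters.numbers""").show()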

This can be used with the DataFrame API as well:

numbersDf.alias("numbers")
  .join(lettersDf.alias("letters"))
  .where("numbers.numbers IS NOT DISTINCT FROM letters.numbers")

Another option is to rename the join columns so they can be disambiguated, and to spell out the null-safe condition manually:

val numbers2 = numbersDf.withColumnRenamed("numbers", "num1") // rename columns so that we can disambiguate them in the join
val letters2 = lettersDf.withColumnRenamed("numbers", "num2")
val joinedDf = numbers2.join(letters2, $"num1" === $"num2" || ($"num1".isNull && $"num2".isNull), "outer")
joinedDf.select("num1", "letters").withColumnRenamed("num1", "numbers").show() // rename the columns back to the original names

Building on K L's idea, you can use foldLeft to generate the join column expression:

def nullSafeJoin(rightDF: DataFrame, columns: Seq[String], joinType: String)(leftDF: DataFrame): DataFrame = {
  val colExpr: Column = leftDF(columns.head) <=> rightDF(columns.head)
  val fullExpr = columns.tail.foldLeft(colExpr) {
    (colExpr, p) => colExpr && leftDF(p) <=> rightDF(p)
  }
  leftDF.join(rightDF, fullExpr, joinType)
}

You can then call this function like so:

aDF.transform(nullSafeJoin(bDF, columns, joinType))
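
As a design note, the same condition can also be built by mapping over all columns and reducing with &&. A sketch that should behave identically (nullSafeJoinReduced is a hypothetical name; like the original, it assumes columns is non-empty):

import org.apache.spark.sql.{Column, DataFrame}

def nullSafeJoinReduced(rightDF: DataFrame, columns: Seq[String], joinType: String)(leftDF: DataFrame): DataFrame = {
  // AND together one null-safe equality per join column
  val fullExpr: Column = columns
    .map(c => leftDF(c) <=> rightDF(c))
    .reduce(_ && _)
  leftDF.join(rightDF, fullExpr, joinType)
}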

To complement the other answers: for PySpark < 2.3.0, you have neither Column.eqNullSafe nor IS NOT DISTINCT FROM.

You can still build the <=> operator with a SQL expression and include it in the join, as long as you define aliases for the joined DataFrames:

from pyspark.sql.types import StringType
import pyspark.sql.functions as F

numbers_df = spark.createDataFrame(["123", "456", None, ""], StringType()).toDF("numbers")
letters_df = spark.createDataFrame(
    [("123", "abc"), ("456", "def"), (None, "zzz"), ("", "hhh")]
).toDF("numbers", "letters")

joined_df = numbers_df.alias("numbers").join(
    letters_df.alias("letters"),
    F.expr("numbers.numbers <=> letters.numbers")
).select("letters.*")
joined_df.show()
+-------+-------+
|numbers|letters|
+-------+-------+
|    456|    def|
|   null|    zzz|
|       |    hhh|
|    123|    abc|
+-------+-------+

Building on Timothyzhang's idea, you can improve it further by dropping the duplicate columns:

def dropDuplicateColumns(df: DataFrame, rightDf: DataFrame, cols: Seq[String]): DataFrame =
  cols.foldLeft(df)((df, c) => df.drop(rightDf(c)))

def joinTablesWithSafeNulls(rightDF: DataFrame, leftDF: DataFrame, columns: Seq[String], joinType: String): DataFrame = {
  val colExpr: Column = leftDF(columns.head) <=> rightDF(columns.head)
  val fullExpr = columns.tail.foldLeft(colExpr) {
    (colExpr, p) => colExpr && leftDF(p) <=> rightDF(p)
  }
  val finalDF = leftDF.join(rightDF, fullExpr, joinType)
  dropDuplicateColumns(finalDF, rightDF, columns)
}
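
A hypothetical call with the DataFrames from the top of this post (note the argument order: the right DataFrame comes first):

// Null-safe inner join on the "numbers" column, duplicates dropped
val joined = joinTablesWithSafeNulls(lettersDf, numbersDf, Seq("numbers"), "inner")
joined.show()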

Try the following method to include null rows in the result of the JOIN operator:

def nullSafeJoin(leftDF: DataFrame, rightDF: DataFrame, columns: Seq[String], joinType: String): DataFrame = {
    // Combine a null-safe equality check for every join column
    var columnsExpr: Column = leftDF(columns.head) <=> rightDF(columns.head)
    columns.drop(1).foreach(column => {
        columnsExpr = columnsExpr && (leftDF(column) <=> rightDF(column))
    })
    var joinedDF: DataFrame = leftDF.join(rightDF, columnsExpr, joinType)
    // Drop the left-hand copies of the join columns to avoid duplicates
    columns.foreach(column => {
        joinedDF = joinedDF.drop(leftDF(column))
    })
    joinedDF
}
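
For example, calling it with the original DataFrames and a full outer join (a sketch; "full_outer" is one of the join type strings Spark accepts):

nullSafeJoin(numbersDf, lettersDf, Seq("numbers"), "full_outer").show()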
