Spark SQL query to select values from both columns if the value in the second column exists in the first column



Input -

col_a   col_b
 A        B
 D        B
 B        E
 C        A

I am trying to get the output shown below using Spark SQL, but I have not been able to produce it with NOT EXISTS or a left outer join. Please help me get the following output.

col_a    col_b
 A         B
 D         B
 C         A

I want to select the values from both columns whenever the value in col_b also exists somewhere in col_a.
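
For reference, one plain-SQL formulation that produces this output is an IN subquery (a minimal sketch, assuming the sample rows are in a DataFrame df registered as a temporary view named t; uncorrelated IN subqueries require Spark 2.0+):

df.createOrReplaceTempView("t")
// keep rows whose col_b value appears anywhere in col_a
spark.sql("SELECT col_a, col_b FROM t WHERE col_b IN (SELECT col_a FROM t)").show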

Assuming your column is not too large, I would do something like this:

scala> val df = Seq(("A", "B"), ("D", "B"), ("B", "E"), ("C", "A")).toDF("col_a", "col_b")
df: org.apache.spark.sql.DataFrame = [col_a: string, col_b: string]
scala> df.show
+-----+-----+
|col_a|col_b|
+-----+-----+
|    A|    B|
|    D|    B|
|    B|    E|
|    C|    A|
+-----+-----+
scala> import org.apache.spark.sql.Row
import org.apache.spark.sql.Row
scala> import scala.collection.mutable.HashSet
import scala.collection.mutable.HashSet
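scala> // collect every col_a value to the driver (safe only while col_a is small)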
scala> val col_a_vals = df.rdd.map{case Row(a: String, b: String) => a}.collect.toSeq
col_a_vals: Seq[String] = WrappedArray(A, D, B, C)
scala> val col_a_set = HashSet(col_a_vals :_*)
col_a_set: scala.collection.mutable.HashSet[String] = Set(B, C, D, A)
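scala> // broadcast the set so each executor gets a single read-only copy for lookups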
scala> val broad_set = sc.broadcast(col_a_set)
broad_set: org.apache.spark.broadcast.Broadcast[scala.collection.mutable.HashSet[String]] = Broadcast(56)
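scala> // UDF that tests membership of a value in the broadcast set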
scala> val contains_col_a = udf((value: String) => broad_set.value.contains(value))
contains_col_a: org.apache.spark.sql.UserDefinedFunction = UserDefinedFunction(<function1>,BooleanType,List(StringType))
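scala> // keep only the rows whose col_b value appears in col_a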
scala> df.filter(contains_col_a($"col_b")).show
+-----+-----+
|col_a|col_b|
+-----+-----+
|    A|    B|
|    D|    B|
|    C|    A|
+-----+-----+
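
As a side note, since col_a_vals is already collected on the driver, the same filter can be written without the UDF or the explicit broadcast by using Column.isin (a sketch; the values are inlined as literals in the query plan, which is fine for small lists):

scala> df.filter($"col_b".isin(col_a_vals: _*)).show

This prints the same three rows as above.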
