How to cross join 2 DataFrames

I am struggling to get the cross join of two DataFrames. I am using Spark 2.0. How can I implement a cross join of two DataFrames?

Edit:

val df = df.join(df_t1, df("Col1") === df_t1("col")).join(df2, joinType == "cross join").where(df("col2") === df2("col2"))

If you do not need to specify any condition, use crossJoin.

Here is an excerpt from working code:

people.crossJoin(area).show()
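For context, a minimal self-contained sketch of that call (assuming Spark 2.1 or later, where Dataset.crossJoin is available; the session and sample data here are illustrative, mirroring the example tables further down):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("crossJoinExample").getOrCreate()
import spark.implicits._

val people = Seq((1, "Jack"), (2, "Valery")).toDF("id", "name")
val area = Seq((1, "Amministration"), (2, "Public")).toDF("idArea", "areaName")

// crossJoin takes no join condition: every row of people is paired with every row of area
people.crossJoin(area).show()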

Upgrade to the latest version of spark-sql_2.11 (version 2.1.0) and use the Dataset API.

Call join on the other DataFrame without a join condition.

Take a look at the following example. Given a first DataFrame of people:

+---+------+-------+------+
| id|  name|   mail|idArea|
+---+------+-------+------+
|  1|  Jack|j@j.com|     1|
|  2|Valery|x@v.com|     1|
|  3|  Karl|k@k.com|     2|
|  4|  Nick|n@n.com|     2|
|  5|  Luke|l@f.com|     3|
|  6| Marek|a@b.com|     3|
+---+------+-------+------+

and a second DataFrame of areas:

+------+--------------+
|idArea|      areaName|
+------+--------------+
|     1|Amministration|
|     2|        Public|
|     3|         Store|
+------+--------------+

the cross join is given simply by:

val cross = people.join(area)
cross.show()

+---+------+-------+------+------+--------------+
| id|  name|   mail|idArea|idArea|      areaName|
+---+------+-------+------+------+--------------+
|  1|  Jack|j@j.com|     1|     1|Amministration|
|  1|  Jack|j@j.com|     1|     3|         Store|
|  1|  Jack|j@j.com|     1|     2|        Public|
|  2|Valery|x@v.com|     1|     1|Amministration|
|  2|Valery|x@v.com|     1|     3|         Store|
|  2|Valery|x@v.com|     1|     2|        Public|
|  3|  Karl|k@k.com|     2|     1|Amministration|
|  3|  Karl|k@k.com|     2|     2|        Public|
|  3|  Karl|k@k.com|     2|     3|         Store|
|  4|  Nick|n@n.com|     2|     3|         Store|
|  4|  Nick|n@n.com|     2|     2|        Public|
|  4|  Nick|n@n.com|     2|     1|Amministration|
|  5|  Luke|l@f.com|     3|     2|        Public|
|  5|  Luke|l@f.com|     3|     3|         Store|
|  5|  Luke|l@f.com|     3|     1|Amministration|
|  6| Marek|a@b.com|     3|     1|Amministration|
|  6| Marek|a@b.com|     3|     2|        Public|
|  6| Marek|a@b.com|     3|     3|         Store|
+---+------+-------+------+------+--------------+
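
Note that the unqualified join keeps both idArea columns, which becomes ambiguous the moment you select that name. Two ways to sidestep this, as a sketch on top of the example above (not part of the original answer):

// Drop the duplicate by referencing it through the DataFrame it came from
val crossDeduped = people.join(area).drop(area("idArea"))

// ...or rename the right-hand column before joining so both values survive
val crossRenamed = people.join(area.withColumnRenamed("idArea", "areaId"))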

You may need to enable cross joins in your Spark configuration. Example:

from pyspark.sql import SparkSession

spark = (SparkSession
    .builder
    .appName("distance_matrix")
    .config("spark.sql.crossJoin.enabled", True)
    .getOrCreate())

and then use something like this:

df1.join(df2, <condition>)
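
The same flag can also be set on an existing session from Scala; a minimal sketch:

spark.conf.set("spark.sql.crossJoin.enabled", "true")

As of Spark 3.0 this flag defaults to true, so enabling it should only be necessary on 2.x.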

If the area data is small, you can do this with explode instead and avoid a shuffle entirely; the small table becomes an array of literals and each row of df1 is expanded in place, so no data has to move between partitions:

import org.apache.spark.sql.functions._
import spark.implicits._

val df1 = Seq(
    (1,"Jack","j@j.com",1),
    (2,"Valery","x@v.com",1),
    (3,"Karl","k@k.com",2),
    (4,"Nick","n@n.com",2),
    (5,"Luke","l@f.com",3),
    (6,"Marek","a@b.com",3)
).toDF("id","name","mail","idArea")

// Turn the small area table into a single literal array of structs
val arr = array(
    Seq(
        (1,"Amministration"),
        (2,"Public"),
        (3,"Store")
    ).map(r => struct(lit(r._1).as("idArea"), lit(r._2).as("areaName"))):_*
)

// explode duplicates each row of df1 once per array element, in place,
// then the struct fields are unpacked into ordinary columns
val cross = df1
    .withColumn("d", explode(arr))
    .withColumn("idArea", $"d.idArea")     // overwrites df1's original idArea
    .withColumn("areaName", $"d.areaName")
    .drop("d")

df1.show
cross.show

Output:

+---+------+-------+------+
| id|  name|   mail|idArea|
+---+------+-------+------+
|  1|  Jack|j@j.com|     1|
|  2|Valery|x@v.com|     1|
|  3|  Karl|k@k.com|     2|
|  4|  Nick|n@n.com|     2|
|  5|  Luke|l@f.com|     3|
|  6| Marek|a@b.com|     3|
+---+------+-------+------+
+---+------+-------+------+--------------+
| id|  name|   mail|idArea|      areaName|
+---+------+-------+------+--------------+
|  1|  Jack|j@j.com|     1|Amministration|
|  1|  Jack|j@j.com|     2|        Public|
|  1|  Jack|j@j.com|     3|         Store|
|  2|Valery|x@v.com|     1|Amministration|
|  2|Valery|x@v.com|     2|        Public|
|  2|Valery|x@v.com|     3|         Store|
|  3|  Karl|k@k.com|     1|Amministration|
|  3|  Karl|k@k.com|     2|        Public|
|  3|  Karl|k@k.com|     3|         Store|
|  4|  Nick|n@n.com|     1|Amministration|
|  4|  Nick|n@n.com|     2|        Public|
|  4|  Nick|n@n.com|     3|         Store|
|  5|  Luke|l@f.com|     1|Amministration|
|  5|  Luke|l@f.com|     2|        Public|
|  5|  Luke|l@f.com|     3|         Store|
|  6| Marek|a@b.com|     1|Amministration|
|  6| Marek|a@b.com|     2|        Public|
|  6| Marek|a@b.com|     3|         Store|
+---+------+-------+------+--------------+
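
The trade-off of the explode approach is that the small table is inlined as column literals, so it only pays off when the area list is tiny. A comparable shuffle-free alternative, sketched here with a broadcast hint rather than the explode from the answer above (requires Spark 2.1+ for crossJoin):

import org.apache.spark.sql.functions.broadcast

val area = Seq(
    (1,"Amministration"),
    (2,"Public"),
    (3,"Store")
).toDF("idArea","areaName")

// Broadcasting ships the small side to every executor, so the cross join
// executes as a broadcast nested-loop join without shuffling df1
val crossBroadcast = df1.crossJoin(broadcast(area))
crossBroadcast.show()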
