Full outer join in Spark Scala RDD



I have the two files below:
File 1

0000003 杉山______ 26 F
0000005 崎村______ 50 F
0000007 梶川______ 42 F

File 2

0000005 82 79 16 21 80
0000001 46 39 8 5 21
0000004 58 71 20 10 6
0000009 60 89 33 18 6
0000003 30 50 71 36 30
0000007 50 2 33 15 62

Now, I want to join the rows that have the same value in field 1.
I want something like this:

0000005 崎村______ 50 F 82 79 16 21 80
0000003 杉山______ 26 F 30 50 71 36 30
0000007 梶川______ 42 F 50 2 33 15 62

You can use the DataFrame join concept instead of an RDD join; that is easier. You can refer to my sample code below, and I hope it helps. I am assuming your data is in exactly the format you showed above. If it is CSV or any other delimited format, you can skip Step 2 and update Step 1 according to that format (see the sketch after the code below). If you need the output as an RDD, use Step 5; otherwise you can ignore it, as noted in the comments in the code snippet.
I changed the name data to A______, B______, C______ just for readability.

// Imports needed for $, split, col and concat_ws used below
import org.apache.spark.sql.functions._
import spark.implicits._
//Step1: Loading file1 and file2 into corresponding DataFrames in text format
val df1 = spark.read.format("text").load("<path of file1>")
val df2 = spark.read.format("text").load("<path of file2>")
//Step2: Splitting the single column "value" into multiple columns, including the join key
val file1 = df1.withColumn("col1", split($"value", " ")(0))
               .withColumn("col2", split($"value", " ")(1))
               .withColumn("col3", split($"value", " ")(2))
               .withColumn("col4", split($"value", " ")(3))
               .select("col1", "col2", "col3", "col4")
/* 
+-------+-------+----+----+                                                     
|col1   |col2   |col3|col4|
+-------+-------+----+----+
|0000003|A______|26  |F   |
|0000005|B______|50  |F   |
|0000007|C______|42  |F   |
+-------+-------+----+----+
*/
val file2 = df2.withColumn("col1", split($"value", " ")(0))
               .withColumn("col2", split($"value", " ")(1))
               .withColumn("col3", split($"value", " ")(2))
               .withColumn("col4", split($"value", " ")(3))
               .withColumn("col5", split($"value", " ")(4))
               .withColumn("col6", split($"value", " ")(5))
               .select("col1", "col2", "col3", "col4", "col5", "col6")
/*
+-------+----+----+----+----+----+
|col1   |col2|col3|col4|col5|col6|
+-------+----+----+----+----+----+
|0000005|82  |79  |16  |21  |80  |
|0000001|46  |39  |8   |5   |21  |
|0000004|58  |71  |20  |10  |6   |
|0000009|60  |89  |33  |18  |6   |
|0000003|30  |50  |71  |36  |30  |
|0000007|50  |2   |33  |15  |62  |
+-------+----+----+----+----+----+
*/
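// Hypothetical sketch (not part of the original answer): the repeated withColumn calls
// in Step2 could also be generated with a foldLeft over the column indexes; this assumes
// the same space-delimited "value" column and yields the same columns as above.
def splitValue(df: org.apache.spark.sql.DataFrame, numCols: Int) =
  (0 until numCols).foldLeft(df) { (acc, i) =>
    acc.withColumn(s"col${i + 1}", split($"value", " ")(i))
  }.select((1 to numCols).map(i => col(s"col$i")): _*)
// e.g. val file1 = splitValue(df1, 4) and val file2 = splitValue(df2, 6)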
//Step3: alias each DataFrame so columns can be referred to by alias, for readability
val file01 = file1.as("f1")
val file02 = file2.as("f2")
//Step4: Joining files on Key
file01.join(file02,col("f1.col1") === col("f2.col1"))
/*
+-------+-------+----+----+-------+----+----+----+----+----+                    
|col1   |col2   |col3|col4|col1   |col2|col3|col4|col5|col6|
+-------+-------+----+----+-------+----+----+----+----+----+
|0000005|B______|50  |F   |0000005|82  |79  |16  |21  |80  |
|0000003|A______|26  |F   |0000003|30  |50  |71  |36  |30  |
|0000007|C______|42  |F   |0000007|50  |2   |33  |15  |62  |
+-------+-------+----+----+-------+----+----+----+----+----+
*/
// Step5: if you want the joined data in RDD format, you can use the command below
file01.join(file02,col("f1.col1") === col("f2.col1")).rdd.collect
/* 
Array[org.apache.spark.sql.Row] = Array([0000005,B______,50,F,0000005,82,79,16,21,80], [0000003,A______,26,F,0000003,30,50,71,36,30], [0000007,C______,42,F,0000007,50,2,33,15,62])
*/
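As noted above, if the files are already in a delimited (CSV-like) format, Step 2 can be skipped by letting the reader split the fields. Below is a minimal sketch, assuming space-separated files at the same placeholder paths and hypothetical column names, that also flattens the join result back into the single-line format asked for in the question:

// Sketch only: read the space-delimited files directly, so no manual split is needed.
// The column names here are assumptions chosen for readability.
val f1 = spark.read.option("sep", " ").csv("<path of file1>").toDF("key", "name", "age", "gender")
val f2 = spark.read.option("sep", " ").csv("<path of file2>").toDF("key", "s1", "s2", "s3", "s4", "s5")
// Joining with Seq("key") keeps a single key column instead of two.
val joined = f1.join(f2, Seq("key"))
// concat_ws rebuilds one space-separated line per record, matching the expected output.
joined.select(concat_ws(" ", joined.columns.map(col): _*).as("line")).show(false)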

I found a solution; here is my code:

// logData1 and logData2 are the RDD[String] read from file1 and file2 (e.g. via sc.textFile)
// Build a pair RDD: key = field 0, value = the remaining fields joined with spaces
val rddPair1 = logData1.map { x =>
  val data = x.split(" ")
  val index = 0
  val key = data(index)
  var value = ""
  for (i <- 0 to data.length - 1) {
    if (i != index) {
      value += data(i) + " "
    }
  }
  new Tuple2(key, value.trim)
}

val rddPair2 = logData2.map { x =>
  val data = x.split(" ")
  val index = 0
  val key = data(index)
  var value = ""
  for (i <- 0 to data.length - 1) {
    if (i != index) {
      value += data(i) + " "
    }
  }
  new Tuple2(key, value.trim)
}

// Inner join on the key and print one line per matching record
rddPair1.join(rddPair2).collect().foreach(f => {
  println(f._1 + " " + f._2._1 + " " + f._2._2)
})

Result:

0000003 杉山______ 26 F 30 50 71 36 30
0000005 崎村______ 50 F 82 79 16 21 80
0000007 梶川______ 42 F 50 2 33 15 62
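Since the title asks about a full outer join: the pair-RDD API also provides fullOuterJoin, which keeps keys that exist in only one of the files. A minimal sketch reusing rddPair1 and rddPair2 from the code above:

// Unmatched keys come back as None on the missing side; getOrElse("") prints them as blanks.
rddPair1.fullOuterJoin(rddPair2).collect().foreach { case (key, (left, right)) =>
  println(key + " " + left.getOrElse("") + " " + right.getOrElse(""))
}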
