Combining data from multiple RDDs using Java



I have 3 CSV files, shown below, and I'm trying to create RDDs and combine them into a final output that I can apply filters to. I'm not sure where to start with this. Any suggestions?

JavaRDD<String> file1 = sc.textFile("D:\\tmp\\file1.csv"); // backslashes must be escaped in Java string literals
JavaRDD<String> file2 = sc.textFile("D:\\tmp\\file2.csv");
JavaRDD<String> file3 = sc.textFile("D:\\tmp\\file3.csv");
JavaRDD<String> combRDD = file1.union(file2).union(file3); // doesn't give expected output

CSV file1

"user","source_ip","action","type"
"abc","10.0.0.1","login","ONE"
"xyz","10.0.1.1","login","ONE"
"abc","10.0.0.1","playing","ONE"
"def","10.1.0.1","login","ONE"

CSV file2

"user","url","type"
"abc","/test","TWO"
"xyz","/wonder","TWO"

CSV file3

"user","total_time","type","status"
"abc","5min","THREE","true"
"xyz","2min","THREE","fail"

Final expected output

"user","source_ip","action","type","url","total_time","status"
"abc","10.0.0.1","login","ONE","","",""
"xyz","10.0.1.1","login","ONE","","",""
"abc","10.0.0.1","playing","ONE","","",""
"def","10.1.0.1","login","ONE","","",""
"abc","","","TWO","/test","",""
"xyz","","","TWO","/wonder","",""
"abc","","","THREE","","5min","true"
"xyz","","","THREE","","2min","fail"

Each CSV file is generated daily in the same format, so I want to read them from a specific folder with *.csv to build the RDD.
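`sc.textFile` (like `spark.read().csv`) accepts glob patterns, so `sc.textFile("D:\\tmp\\*.csv")` reads every CSV in the folder. For illustration only, the same `*.csv` matching can be sketched with the JDK alone (the class and helper are hypothetical; a temp folder stands in for `D:\tmp`):

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;

public class CsvGlob {
    // List every *.csv file name in a folder (sorted for stable order).
    static List<String> listCsv(Path dir) throws IOException {
        List<String> names = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir, "*.csv")) {
            for (Path p : stream) names.add(p.getFileName().toString());
        }
        Collections.sort(names);
        return names;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("csvdemo"); // stand-in for D:\tmp
        Files.writeString(dir.resolve("file1.csv"), "\"user\",\"type\"\n");
        Files.writeString(dir.resolve("file2.csv"), "\"user\",\"type\"\n");
        Files.writeString(dir.resolve("notes.txt"), "ignored\n");
        System.out.println(listCsv(dir)); // prints [file1.csv, file2.csv]
    }
}
```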

If you have a SparkSession object as spark, you can read the files as DataFrames and join them:

spark.read.option("header", "true").csv("file1.csv").join(
spark.read.option("header", "true").csv("file2.csv"), "user"
).join(
spark.read.option("header", "true").csv("file3.csv"), "user"
).write.csv("some_output");
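Note that `join(..., "user")` is an inner join: it keeps only users that appear in every file and places their columns side by side, rather than stacking rows with blanks as in the expected output (so a user like "def", present only in file1, is dropped). A tiny plain-Java sketch of that pairing behavior (class, method, and the one-row-per-user simplification are all hypothetical):

```java
import java.util.*;

public class JoinSketch {
    // Inner-join two "tables" keyed by user: keep a user only if it exists in
    // both maps, concatenating the left and right column values.
    static Map<String, String> joinOnUser(Map<String, String> left, Map<String, String> right) {
        Map<String, String> out = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : left.entrySet()) {
            String match = right.get(e.getKey());
            if (match != null) out.put(e.getKey(), e.getValue() + "," + match);
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> f1 = Map.of("abc", "10.0.0.1,login", "def", "10.1.0.1,login");
        Map<String, String> f2 = Map.of("abc", "/test", "xyz", "/wonder");
        System.out.println(joinOnUser(f1, f2)); // only "abc" appears in both files
        // prints {abc=10.0.0.1,login,/test}
    }
}
```

If the stacked layout from the question is what you need, aligning each DataFrame to the full column set and unioning them would match the expected output more closely than a join.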
