How can I use a join to query a nested array type with Spark Datasets?
Currently I explode the array type and then join on the resulting dataset to find the records whose matching data needs to be removed. Is there a way to query it directly, without exploding?
{
  "id": 525,
  "arrayRecords": [
    {
      "field1": 525,
      "field2": 0
    },
    {
      "field1": 537,
      "field2": 1
    }
  ]
}
Code
import org.apache.spark.sql.functions._

val df = spark.read.json("jsonfile")
val someDF = Seq(("1"), ("525"), ("3")).toDF("FIELDIDS")
// Explode the nested array so every arrayRecords element becomes its own row
val withSRCRec = df.select($"*", explode($"arrayRecords") as "exploded_arrayRecords")
// Collect the ids of the records that have a matching field1
val fieldIdMatchedDF = withSRCRec.as("table1")
  .join(someDF.as("table2"), $"table1.exploded_arrayRecords.field1" === $"table2.FIELDIDS")
  .select($"table1.id")
// Remove those records from the original dataset
val finalDf = df.as("table1")
  .join(fieldIdMatchedDF.as("table2"), $"table1.id" === $"table2.id", "leftanti")
Records whose ids have matching field IDs need to be removed.
You could use array_except instead:

array_except(col1: Column, col2: Column): Column
Returns an array of the elements in the first array but not in the second array, without duplicates. The order of the elements in the result is not determined.
A solution could look as follows:
val input = spark.read.option("multiLine", true).json("input.json")
scala> input.show(false)
+--------------------+---+
|arrayRecords |id |
+--------------------+---+
|[[525, 0], [537, 1]]|525|
+--------------------+---+
// Since field1 is of type int, let's convert the ids to ints
// You could do this in Scala directly or in Spark SQL's select
val fieldIds = Seq("1", "525", "3").toDF("FIELDIDS").select($"FIELDIDS" cast "int")
// Collect the ids for array_except
val ids = fieldIds.select(collect_set("FIELDIDS") as "ids")
// The trick is to crossJoin (it is cheap given 1-row ids dataset)
val solution = input
.crossJoin(ids)
.select(array_except($"arrayRecords.field1", $"ids") as "unmatched")
scala> solution.show
+---------+
|unmatched|
+---------+
| [537]|
+---------+
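If the goal is to drop an entire record as soon as any of its field1 values matches (the leftanti join from the question), the same crossJoin trick works with a filter instead of a projection. A minimal sketch, assuming Spark 2.4+ and the input and ids datasets built above; arrays_overlap returns true when the two arrays share at least one element:

val kept = input
  .crossJoin(ids)
  // arrays_overlap is true when arrayRecords.field1 and ids share an element,
  // so negating it keeps only the records with no matching field IDs
  .filter(!arrays_overlap($"arrayRecords.field1", $"ids"))
  .drop("ids")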
You can also register a temporary view over the Dataset and query it with SQL. It would be something like this:
someDs.createOrReplaceTempView("sometable")
// Pull the nested field1 values out of the array column
spark.sql("SELECT arrayRecords.field1 FROM sometable")