Splitting an array and extracting keys with PySpark



I have a DataFrame with a column that holds an array of key-value pair strings, and I only want to extract the keys. The number of key-value pairs is dynamic per row, and the naming conventions also differ.

Sample Input
+------+----+--------------------------------+
|ID    |data|value                           |
+------+----+--------------------------------+
|e1    |D1  |["K1":"V1","K2":"V2","K3":"V3"] |
|e2    |D2  |["K1":"V1","K3":"V3"]           |
|e3    |D1  |["K1":"V1","K2":"V2"]           |
|e4    |D3  |["K2":"V2","K1":"V1","K3":"V3"] |
+------+----+--------------------------------+

Expected Result:
+------+----+-----------+
|ID    |data|value      |
+------+----+-----------+
|e1    |D1  |[K1|K2|K3] |
|e2    |D2  |[K1|K3]    |
|e3    |D1  |[K1|K2]    |
|e4    |D3  |[K2|K1|K3] |
+------+----+-----------+

For Spark 2.4+, use the transform function.

For each element of the array, take the key with substring_index and strip the leading and trailing quotes with trim.

df.show(truncate=False)
#+---+----+------------------------------------+
#|ID |data|value                               |
#+---+----+------------------------------------+
#|e1 |D1  |["K1":"V1", "K2": "V2", "K3": "V3"] |
#|e2 |D2  |["K1": "V1", "K3": "V3"]            |
#|e3 |D1  |["K1": "V1", "K2": "V2"]            |
#|e4 |D3  |["K2": "V2", "K1": "V1", "K3": "V3"]|
#+---+----+------------------------------------+    
from pyspark.sql.functions import expr

new_value = """transform(value, x -> trim(BOTH '"' FROM substring_index(x, ':', 1)))"""
df.withColumn("value", expr(new_value)).show()
#+---+----+------------+
#|ID |data|value       |
#+---+----+------------+
#|e1 |D1  |[K1, K2, K3]|
#|e2 |D2  |[K1, K3]    |
#|e3 |D1  |[K1, K2]    |
#|e4 |D3  |[K2, K1, K3]|
#+---+----+------------+

If you want the result as a string delimited by |, you can use array_join like this:

from pyspark.sql.functions import array_join

df.withColumn("value", array_join(expr(new_value), "|")).show()
#+---+----+--------+
#|ID |data|value   |
#+---+----+--------+
#|e1 |D1  |K1|K2|K3|
#|e2 |D2  |K1|K3   |
#|e3 |D1  |K1|K2   |
#|e4 |D3  |K2|K1|K3|
#+---+----+--------+

Alternatively, you can split each element into its key and value parts and keep only the keys:

# Split each element on ':' and strip the surrounding quotes from the key
# (trim alone only removes whitespace, so trim(BOTH '"' FROM ...) is needed).
df.withColumn("keys", expr("""transform(value, keyValue -> trim(BOTH '"' FROM split(keyValue, ':')[0]))""")).drop("value")
