Extracting all keys from a JSON object in a Hadoop table with PySpark



I have a Hadoop table named table_with_json_string.

For example:

+---------------+--------------------------------+
| creation_date | json_string_colum              |
+---------------+--------------------------------+
| 2020-01-29    | {"keys": {"1": "a", "2": "b"}} |
+---------------+--------------------------------+

Desired output:

+---------------+--------------------------------+------+
| creation_date | json_string_colum              | keys |
+---------------+--------------------------------+------+
| 2020-01-29    | {"keys": {"1": "a", "2": "b"}} |   1  |
| 2020-01-29    | {"keys": {"1": "a", "2": "b"}} |   2  |
+---------------+--------------------------------+------+
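For anyone who wants to experiment without the Hadoop table, here is a minimal sketch that builds an equivalent DataFrame (the inline data is a stand-in, assuming an active SparkSession named spark):

# Hypothetical stand-in for spark.table('table_with_json_string'),
# useful for reproducing the example locally
df = spark.createDataFrame(
    [("2020-01-29", '{"keys": {"1": "a", "2": "b"}}')],
    ["creation_date", "json_string_colum"],
)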

What I tried:

from pyspark.sql.functions import from_json, col, explode
from pyspark.sql.types import StructType, StructField, StringType, MapType

# The JSON holds a single top-level field "keys" that maps string keys to string values
schema = StructType([StructField("keys", MapType(StringType(), StringType()), True)])

df = spark.table('table_with_json_string').select(col("creation_date"), col("json_string_colum"))

# Parse the JSON string into a struct column
df = df.withColumn("map_json_column", from_json("json_string_colum", schema))
df.show(1, False)
+-------------+------------------------------+------------------+
|creation_date|json_string_colum             |map_json_column   |
+-------------+------------------------------+------------------+
|2020-01-29   |{"keys": {"1": "a", "2": "b"}}|[[1 -> a, 2 -> b]]|
+-------------+------------------------------+------------------+
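Inspecting the schema confirms the nested structure produced by from_json (output sketched from the schema defined above):

df.printSchema()
# root
#  |-- creation_date: string (nullable = true)
#  |-- json_string_colum: string (nullable = true)
#  |-- map_json_column: struct (nullable = true)
#  |    |-- keys: map (nullable = true)
#  |    |    |-- key: string
#  |    |    |-- value: string (valueContainsNull = true)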

1 - How can I get the keys from this MapType object? I know I need to use the explode function to reach the table format I want, but I still don't know how to extract the keys of the JSON object into an array.

I'm open to other approaches if they make my goal easier to reach.

Building on what you have done so far, you can get the keys as follows:

from pyspark.sql import functions as f

df = (df
      .withColumn("map_json_column", f.from_json("json_string_colum", schema))  # parse the JSON string
      .withColumn("keys", f.map_keys("map_json_column.keys"))                   # map keys as an array column
      .drop("map_json_column")                                                  # intermediate struct no longer needed
      .withColumn("keys", f.explode("keys"))                                    # one row per key
)

Result:

+-------------+--------------------+----+
|creation_date|   json_string_colum|keys|
+-------------+--------------------+----+
|   2020-01-29|{"keys" : {"1" : ...|   1|
|   2020-01-29|{"keys" : {"1" : ...|   2|
+-------------+--------------------+----+
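As a side note, the intermediate column isn't strictly necessary; the same result can be produced in a single select (a sketch using the same schema as above; df_keys is just an illustrative name):

df_keys = df.select(
    "creation_date",
    "json_string_colum",
    f.explode(f.map_keys(f.from_json("json_string_colum", schema)["keys"])).alias("keys"),
)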

Here are the detailed steps that led to the answer above:

>>> from pyspark.sql import functions as f
>>> df.show()
+-------------+--------------------+
|creation_date|   json_string_colum|
+-------------+--------------------+
|   2020-01-29|{"keys" : {"1" : ...|
+-------------+--------------------+
>>> df.withColumn("map_json_column", f.from_json("json_string_colum",schema)).show()
+-------------+--------------------+------------------+
|creation_date|   json_string_colum|   map_json_column|
+-------------+--------------------+------------------+
|   2020-01-29|{"keys" : {"1" : ...|[[1 -> a, 2 -> b]]|
+-------------+--------------------+------------------+
>>> df.withColumn("map_json_column", f.from_json("json_string_colum",schema)).withColumn("keys", f.map_keys("map_json_column.keys")).show()
+-------------+--------------------+------------------+------+
|creation_date|   json_string_colum|   map_json_column|  keys|
+-------------+--------------------+------------------+------+
|   2020-01-29|{"keys" : {"1" : ...|[[1 -> a, 2 -> b]]|[1, 2]|
+-------------+--------------------+------------------+------+
>>> df.withColumn("map_json_column", f.from_json("json_string_colum",schema)).withColumn("keys", f.map_keys("map_json_column.keys")).drop("map_json_column").show()
+-------------+--------------------+------+
|creation_date|   json_string_colum|  keys|
+-------------+--------------------+------+
|   2020-01-29|{"keys" : {"1" : ...|[1, 2]|
+-------------+--------------------+------+
>>> df.withColumn("map_json_column", f.from_json("json_string_colum",schema)).withColumn("keys", f.map_keys("map_json_column.keys")).drop("map_json_column").withColumn("keys", f.explode("keys")).show()
+-------------+--------------------+----+
|creation_date|   json_string_colum|keys|
+-------------+--------------------+----+
|   2020-01-29|{"keys" : {"1" : ...|   1|
|   2020-01-29|{"keys" : {"1" : ...|   2|
+-------------+--------------------+----+

To be clear, the map_keys function I used above is only available in PySpark 2.3 and later.
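If you are stuck on a version older than 2.3, one possible workaround (a sketch, not tested on older releases) is to explode the map itself, since exploding a MapType column yields key and value columns:

df_old = (df
          .withColumn("map_json_column", f.from_json("json_string_colum", schema))
          .select("creation_date", "json_string_colum",
                  f.explode("map_json_column.keys"))   # produces `key` and `value` columns
          .withColumnRenamed("key", "keys")
          .drop("value")
)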
