Consider this JSON input (shown here in multi-line form for readability; the actual input document is one record per line, newline-separated):
{
  "common": { "type": "A", "date": "2020-01-01T12:00:00" },
  "data": {
    "name": "Dave",
    "pets": [ "dog", "cat" ]
  }
}
{
  "common": { "type": "B", "date": "2020-01-01T12:00:00" },
  "data": {
    "whatever": { "X": { "foo": 3 }, "Y": "bar" },
    "favoriteInts": [ 0, 1, 7 ]
  }
}
I'm familiar with json-schema and the way I could describe that the data substructure can be either name, pets or whatever, favoriteInts. We use the common.type field to identify the type at runtime.
Is this possible in a Spark schema definition? An initial experiment along these lines:
schema = StructType([
    StructField("common", StructType(common_schema)),  # .. because the type is consistent
    StructField("data", StructType())                  # attempting to declare a "generic" struct
])
df = spark.read.option("multiline", "true").json(source, schema)
does not work; on reading, instead of a data struct containing the two fields of this particular example, we get:
+--------------------+----+
| common|data|
+--------------------+----+
|{2020-01-01T12:00...| {}|
+--------------------+----+
and attempting to extract any named field fails. Leaving the "generic struct" out of the schema definition entirely produces a dataframe without any field named data at all, let alone the fields within it.
Beyond that, I eventually want to do something like this:
df = spark.read.json(source)

def processA(frame):
    frame.select(frame.data.name)  # we KNOW name exists for type A
    ...

def processB(frame):
    frame.select(frame.data.favoriteInts)  # we KNOW favoriteInts exists for type B
    ...

processA(df.filter(df.common.type == "A"))
processB(df.filter(df.common.type == "B"))
You can accommodate the uncertainty by using nested and nullable types in the struct (by specifying True for nullable).
from pyspark.sql.types import StructType, StringType, ArrayType, StructField, IntegerType

data_schema = StructType([
    # Type A related attributes
    StructField("name", StringType(), True),  # True implies nullable
    StructField("pets", ArrayType(StringType()), True),
    # Type B related attributes
    StructField("whatever", StructType([
        StructField("X", StructType([
            StructField("foo", IntegerType(), True)
        ]), True),
        StructField("Y", StringType(), True)
    ]), True),  # True implies nullable
    StructField("favoriteInts", ArrayType(IntegerType()), True),
])
schema = StructType([
    StructField("common", StructType(common_schema)),  # .. because the type is consistent
    StructField("data", data_schema)
])