How to explode each row into columns from a DataFrame containing lists of lists - pyspark



> I have a DataFrame as follows:

+--------------------+
|                pas1|
+--------------------+
|[[[[H, 5, 16, 201...|
|[, 1956-09-22, AD...|
|[, 1961-03-19, AD...|
|[, 1962-02-09, AD...|
+--------------------+

From the 4 rows above, I want to extract a few columns from each row and create a DataFrame like the one below. The column names should come from the schema, not be hard-coded like column1 and column2.

+--------+-----------+
| gender | givenName |
+--------+-----------+
|      a |         b |
|      a |         b |
|      a |         b |
|      a |         b |
+--------+-----------+
pas1 - schema
root
|-- pas1: struct (nullable = true)
|    |-- contactList: struct (nullable = true)
|    |    |-- contact: array (nullable = true)
|    |    |    |-- element: struct (containsNull = true)
|    |    |    |    |-- contactTypeCode: string (nullable = true)
|    |    |    |    |-- contactMediumTypeCode: string (nullable = true)
|    |    |    |    |-- contactTypeID: string (nullable = true)
|    |    |    |    |-- lastUpdateTimestamp: string (nullable = true)
|    |    |    |    |-- contactInformation: string (nullable = true)
|    |-- dateOfBirth: string (nullable = true)
|    |-- farePassengerTypeCode: string (nullable = true)
|    |-- gender: string (nullable = true)
|    |-- givenName: string (nullable = true)
|    |-- groupDepositIndicator: string (nullable = true)
|    |-- infantIndicator: string (nullable = true)
|    |-- lastUpdateTimestamp: string (nullable = true)
|    |-- passengerFOPList: struct (nullable = true)
|    |    |-- passengerFOP: struct (nullable = true)
|    |    |    |-- fopID: string (nullable = true)
|    |    |    |-- lastUpdateTimestamp: string (nullable = true)
|    |    |    |-- fopFreeText: string (nullable = true)
|    |    |    |-- fopSupplementaryInfoList: struct (nullable = true)
|    |    |    |    |-- fopSupplementaryInfo: array (nullable = true)
|    |    |    |    |    |-- element: struct (containsNull = true)
|    |    |    |    |    |    |-- type: string (nullable = true)
|    |    |    |    |    |    |-- value: string (nullable = true)

Thanks for your help.

If you just want to extract a few columns from a DataFrame containing structs, simply do:

from pyspark.sql import SparkSession, Row

spark = SparkSession.builder.appName('Test').getOrCreate()
# Build a one-column DataFrame whose 'pas1' column is a struct
df = spark.sparkContext.parallelize([Row(pas1=Row(gender='a', givenName='b'))]).toDF()
# Dotted paths select individual fields out of a struct column
df.select('pas1.gender', 'pas1.givenName').show()

If instead you want to flatten the DataFrame, this question should help you: How to unpack nested struct column into multiple columns?
