Create a Hive partitioned table from a Spark DataFrame in PySpark, using the last two columns as partition columns



I have a DataFrame in PySpark (2.3) from which I need to generate a partitioned CREATE TABLE statement, to be run via spark.sql() so that the table is Hive-compatible.

Sample DataFrame:
 final.printSchema()
root
 |-- name: string (nullable = true)
 |-- age: string (nullable = true)
 |-- value: long (nullable = true)
 |-- date: string (nullable = true)
 |-- subid: string (nullable = true)

The script should read the DataFrame and generate the CREATE TABLE statement below, treating the last two columns as the partition columns.

`create table schema.final( name string ,age string  ,value long ) 
     partitioned by (date string , subid string) stored as parquet;`

Any help with a PySpark solution for the above would be greatly appreciated.

Here is one approach: iterate over the schema and generate the Hive SQL:
from pyspark.sql.types import StructType, StructField, StringType, LongType
schema = StructType([
  StructField('name', StringType()),
  StructField('age', StringType()),
  StructField('value', LongType()),
  StructField('date', StringType()),
  StructField('subid', StringType())
])
hiveCols = ""
hivePartitionCols = ""
for idx, c in enumerate(schema):
  # populate hive schema
  if(idx < len(schema[:-2])):
    hiveCols += "{0} {1}".format(c.name, c.dataType.simpleString())
    if(idx < len(schema[:-2]) - 1):
      hiveCols += ","

  # populate hive partition
  if(idx >= len(schema) - 2):
    hivePartitionCols += "{0} {1}".format(c.name, c.dataType.simpleString())
    if(idx < len(schema) - 1):
      hivePartitionCols += ","
hiveCreateSql = "create table schema.final({0}) partitioned by ({1}) stored as parquet".format(hiveCols, hivePartitionCols)
# create table schema.final(name string,age string,value bigint) partitioned by (date string,subid string) stored as parquet
spark.sql(hiveCreateSql)
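
If the DataFrame itself is at hand, the same DDL can also be derived directly from its schema instead of re-declaring the StructType. A minimal sketch, assuming the DataFrame is named final (as in the question) and an active SparkSession spark:

# Split the DataFrame's own schema: all but the last two fields are data
# columns, the last two fields are the partition columns.
data_fields = final.schema.fields[:-2]
partition_fields = final.schema.fields[-2:]

cols = ", ".join("{0} {1}".format(f.name, f.dataType.simpleString()) for f in data_fields)
partition_cols = ", ".join("{0} {1}".format(f.name, f.dataType.simpleString()) for f in partition_fields)

ddl = "create table schema.final ({0}) partitioned by ({1}) stored as parquet".format(cols, partition_cols)
spark.sql(ddl)

Note that simpleString() renders LongType as bigint, which is the type name Hive expects (rather than long).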
