Splitting a dictionary column into multiple columns in PySpark


Column Names:
Production_uint_id, batch_id, items_produced, items_discarded

Data:
P188    gv962   {'scissor': 141, 'paper': 274, 'rock': 218}    {'scissor': 14, 'paper': 135, 'rock': 24}
P258    mr005   {'scissor': 151, 'paper': 143, 'rock': 225}    {'scissor': 24, 'paper': 60, 'rock': 17}

Code:

from pyspark.sql.types import *
from pyspark.sql.functions import explode

sc = spark.sparkContext
production_rdd = sc.textFile("/Production_logs.tsv")
# Split each tab-separated line into its four fields
production_parts = production_rdd.map(lambda l: l.split("\t"))
production = production_parts.map(lambda p: (p[0], p[1], p[2], p[3].strip()))
schemaStringProduction = "production_unit_id batch_id items_produced items_discarded"
fieldsProduction = [StructField(field_name, StringType(), True) for field_name in schemaStringProduction.split()]
schemaProduction = StructType(fieldsProduction)
schemaProductionDF = spark.createDataFrame(production, schemaProduction)
I am trying to explode:
exploding = schemaProductionDF.select("production_unit_id",  explode("items_produced").alias("item_p", "item_p_count"), "items_discarded")

I get this error:

pyspark.sql.utils.AnalysisException: u"cannot resolve 'explode(`items_produced`)' due to data type mismatch: 
input to function explode should be array or map type, not string;

Please help.

explode is a UDTF (table-generating function): it returns a new Row for each element of an array, or for each key/value pair of a map. For more on explode, see: Explode in PySpark
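
For illustration, here is a minimal, self-contained sketch of explode on a genuine MapType column (the DataFrame and its contents are made up for the example):

from pyspark.sql import SparkSession
from pyspark.sql.functions import explode

spark = SparkSession.builder.getOrCreate()

# Spark infers map<string,bigint> from the Python dict
df = spark.createDataFrame(
    [("P188", {"scissor": 141, "paper": 274, "rock": 218})],
    ["production_unit_id", "items_produced"])

# One output row per key/value pair of the map
df.select("production_unit_id",
          explode("items_produced").alias("item_p", "item_p_count")).show()

This works because items_produced really is a map; in your DataFrame it is a plain string, which is exactly what the AnalysisException complains about.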

For your problem, try the following code:

from pyspark import SparkContext
from pyspark.sql import SparkSession

sc = SparkContext.getOrCreate()
spark = SparkSession.builder.getOrCreate()  # needed for toDF() below

rdd1 = sc.textFile("D:/MOCK_DATA/*_dict.txt")
# The file is tab-separated: id, batch, produced-dict, discarded-dict
lineRDD = rdd1.map(lambda line: line.split("\t"))
header = "Production_uint_id,batch_id,items_produced,items_discarded"
col_name = header.split(',')
# eval turns the dict-literal strings into real dicts; .values() keeps only
# the counts, relying on the keys appearing in scissor, paper, rock order.
# (eval is unsafe on untrusted input -- acceptable for a local demo.)
flatRDD = lineRDD.map(lambda a: (a[0], a[1],
                                 list(eval(a[2]).values()),
                                 list(eval(a[3].strip()).values())))
DF1 = flatRDD.toDF(col_name)
DF1.printSchema()

DF2 = DF1
lst = 'scissor,paper,rock'
col_lst = 'items_produced,items_discarded'
# Pull each positional element of the two array columns into its own column.
# Note: names containing a dot must be back-quoted when selected later.
for col_ele in col_lst.split(","):
    count = 0
    for i in lst.split(','):
        DF2 = DF2.withColumn(col_ele + '.' + i, DF2[col_ele][count])
        count = count + 1
DF2.show()
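
Alternatively, you can stay entirely in the DataFrame API and fix the original error at its root: parse the string column into a real MapType, then explode it. A minimal sketch, assuming the dict strings use single quotes as in your sample data (regexp_replace rewrites them into valid JSON before from_json parses them):

from pyspark.sql import functions as F
from pyspark.sql.types import MapType, StringType, IntegerType

# Turn the Python-dict string into JSON, then parse it as map<string,int>
parsed = schemaProductionDF.withColumn(
    "items_produced_map",
    F.from_json(F.regexp_replace("items_produced", "'", '"'),
                MapType(StringType(), IntegerType())))

# explode now receives a map, so the AnalysisException goes away
parsed.select("production_unit_id",
              F.explode("items_produced_map").alias("item_p", "item_p_count")).show()

from_json is available from Spark 2.1 onward; on older versions you will need the RDD route above.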
