PySpark: merging WrappedArrays within a DataFrame



My current PySpark DataFrame has this structure (col2 is a list of WrappedArrays):

+---+---------------------------------------------------------------------+  
|id |col2                                                                 |   
+---+---------------------------------------------------------------------+  
|a  |[WrappedArray(code2), WrappedArray(code1, code3)]                    |  
+---+---------------------------------------------------------------------+  
|b  |[WrappedArray(code5), WrappedArray(code6, code8)]                    |  
+---+---------------------------------------------------------------------+  

This is the structure I would like to have (col2 as a single flattened list):

+---+---------------------------------------------------------------------+  
|id |col2                                                                 |   
+---+---------------------------------------------------------------------+  
|a  |[code2, code1, code3]                                                |  
+---+---------------------------------------------------------------------+  
|b  |[code5, code6, code8]                                                |  
+---+---------------------------------------------------------------------+  

but I am not sure how to do this transformation. I tried a flatMap, but that did not seem to work. Any suggestions?

You can do this in two ways, with the RDD API or with a UDF. Here is an example:

df = sqlContext.createDataFrame([
    ['a',  [['code2'],['code1', 'code3']]],  
    ['b',  [['code5','code6'], ['code8']]]
], ["id", "col2"])  
df.show(truncate = False)
+---+-------------------------------------------------+
|id |col2                                             |
+---+-------------------------------------------------+
|a  |[WrappedArray(code2), WrappedArray(code1, code3)]|
|b  |[WrappedArray(code5, code6), WrappedArray(code8)]|
+---+-------------------------------------------------+

RDD:

from functools import reduce
df.rdd.map(lambda row: (row[0], reduce(lambda x, y: x + y, row[1]))).toDF().show(truncate=False)
+---+---------------------+
|_1 |_2                   |
+---+---------------------+
|a  |[code2, code1, code3]|
|b  |[code5, code6, code8]|
+---+---------------------+
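
Note that toDF() without arguments falls back to the default column names _1 and _2. If you want to keep the original names, you can pass them explicitly, for example:

df.rdd.map(lambda row: (row[0], reduce(lambda x, y: x + y, row[1]))).toDF(["id", "col2"]).show(truncate=False)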

UDF:

from functools import reduce
from pyspark.sql import functions as F
import pyspark.sql.types as T

def fudf(val):
    # concatenate the inner lists into one flat list
    return reduce(lambda x, y: x + y, val)

flattenUdf = F.udf(fudf, T.ArrayType(T.StringType()))
df.select("id", flattenUdf("col2").alias("col2")).show(truncate=False)
+---+---------------------+
|id |col2                 |
+---+---------------------+
|a  |[code2, code1, code3]|
|b  |[code5, code6, code8]|
+---+---------------------+
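
If you are on Spark 2.4 or newer, you can also skip the UDF entirely and use the built-in flatten function, which flattens an array of arrays (this assumes col2 has the array<array<string>> type shown above):

from pyspark.sql import functions as F
df.select("id", F.flatten("col2").alias("col2")).show(truncate=False)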
