I have a Python dictionary:
dic = {
    (u'aaa', u'bbb', u'ccc'): ((0.3, 1.2, 1.3, 1.5), 1.4, 1),
    (u'kkk', u'ggg', u'ccc', u'sss'): ((0.6, 1.2, 1.7, 1.5), 1.4, 2)
}
I want to convert this dictionary into a Spark DataFrame with the columns:
['key', 'val_1', 'val_2', 'val_3', 'val_4', 'val_5', 'val_6']
Example row (1):
key                  | val_1 | val_2 | val_3 | val_4 | val_5 | val_6
u'aaa',u'bbb',u'ccc' |   0.3 |   1.2 |   1.3 |   1.5 |   1.4 |     1
Thanks in advance.
Extract the items, convert the key to a list, and combine everything into a single tuple:
df = sc.parallelize([
    (list(k), ) +  # key tuple -> list, wrapped in a 1-tuple
    v[0] +         # unpack the inner tuple (val_1 .. val_4)
    v[1:]          # remaining values (val_5, val_6)
    for k, v in dic.items()
]).toDF(['key', 'val_1', 'val_2', 'val_3', 'val_4', 'val_5', 'val_6'])
df.show()
## +--------------------+-----+-----+-----+-----+-----+-----+
## | key|val_1|val_2|val_3|val_4|val_5|val_6|
## +--------------------+-----+-----+-----+-----+-----+-----+
## | [aaa, bbb, ccc]| 0.3| 1.2| 1.3| 1.5| 1.4| 1|
## |[kkk, ggg, ccc, sss]| 0.6| 1.2| 1.7| 1.5| 1.4| 2|
## +--------------------+-----+-----+-----+-----+-----+-----+
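If you are on Spark 2.x or later and have a SparkSession (commonly named spark), the same flattened rows can also be passed to createDataFrame with an explicit schema instead of going through sc.parallelize(...).toDF(...). This is only a sketch under that assumption; the column types (array of strings for the key, doubles for val_1 through val_5, long for val_6) are inferred from the sample data, not stated in the original question.

from pyspark.sql.types import (
    StructType, StructField, ArrayType, StringType, DoubleType, LongType
)

# Assumed schema, derived from the sample values above.
schema = StructType(
    [StructField('key', ArrayType(StringType()), False)]
    + [StructField('val_%d' % i, DoubleType(), False) for i in range(1, 6)]
    + [StructField('val_6', LongType(), False)]
)

# Same flattening as in the answer: key tuple -> list, inner tuple unpacked.
rows = [(list(k),) + v[0] + v[1:] for k, v in dic.items()]

df = spark.createDataFrame(rows, schema)
df.show()

Providing the schema up front avoids relying on type inference and makes the intended column types visible in one place.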