I have the following data frame:
field_A | field_B | field_C | field_D
cat | 12 | black | 11
dog | 128 | white | 19
dog | 35 | yellow | 20
dog | 21 | brown | 4
bird | 10 | blue | 7
cow | 99 | brown | 34
Is it possible to filter out the rows where field_A has duplicate values? In other words, I want the final data frame to be:
field_A | field_B | field_C | field_D
cat | 12 | black | 11
bird | 10 | blue | 7
cow | 99 | brown | 34
Is this possible in PySpark? Thank you!
Create the data:
rdd = sc.parallelize([(0,1), (0,10), (0,20), (1,2), (2,1), (3,5), (3,18), (4,15), (5,18)])
t = sqlContext.createDataFrame(rdd, ["id", "score"])
t.collect()
[Row(id=0, score=1), Row(id=0, score=10), Row(id=0, score=20), Row(id=1, score=2), Row(id=2, score=1), Row(id=3, score=5), Row(id=3, score=18), Row(id=4, score=15), Row(id=5, score=18)]
Count the number of rows for each id:
idCounts = t.groupBy('id').count()
Join idCounts back to the original data frame, and keep only the ids that appear exactly once:
out = t.join(idCounts,'id','left_outer').filter('count = 1').select(['id', 'score'])
out.collect()
[Row(id=1, score=2), Row(id=2, score=1), Row(id=4, score=15), Row(id=5, score=18)]
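The logic above boils down to: count how often each key occurs, then keep only the rows whose key occurs exactly once. As a sanity check, here is the same idea applied to the field_A data from the question, sketched in plain Python with `collections.Counter` (no Spark required):

```python
from collections import Counter

# Rows from the question: (field_A, field_B, field_C, field_D)
rows = [
    ("cat", 12, "black", 11),
    ("dog", 128, "white", 19),
    ("dog", 35, "yellow", 20),
    ("dog", 21, "brown", 4),
    ("bird", 10, "blue", 7),
    ("cow", 99, "brown", 34),
]

# Count occurrences of each field_A value (mirrors t.groupBy('id').count())
counts = Counter(r[0] for r in rows)

# Keep rows whose field_A value occurs exactly once (mirrors filter('count = 1'))
unique_rows = [r for r in rows if counts[r[0]] == 1]
print(unique_rows)
# → [('cat', 12, 'black', 11), ('bird', 10, 'blue', 7), ('cow', 99, 'brown', 34)]
```

This matches the expected output: all three "dog" rows are dropped because the key appears more than once, not just the later duplicates (which is what `dropDuplicates` would do instead).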