How to drop duplicate column names in a PySpark DF after joining HDFS tables?
Hi, I am trying to join multiple datasets that end up with 200 final columns. Because the column count is so high, I cannot select specific columns while joining. Is there a way to drop the duplicate columns after the join? I know there is a way to do this through the .join method when both sides are Spark DFs, but the base tables I am joining are not Spark DFs, and I am trying to avoid converting them to Spark DFs.
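For reference, this is the .join behaviour I mean (a minimal sketch with hypothetical Spark DFs t1 and t2; passing the key by name keeps a single copy of it):
# Hypothetical Spark DFs t1 and t2 that share an acct_id column.
joined = t1.join(t2, on='acct_id', how='left')    # one acct_id column in the result
# joined = t1.join(t2, t1.acct_id == t2.acct_id)  # would keep both acct_id columns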
Original PySpark join query used to create the Spark DF:
cust_base = sqlc.sql('''
Select distinct *
FROM db.tbl1 as t1
LEFT JOIN db.tbl2 as t2 ON (t1.acct_id = t2.acct_id)
LEFT JOIN db.tbl3 as t3 ON (t1.cust_id = t3.cust_id)
WHERE t1.acct_subfam_mn IN ('PIA', 'PIM', 'IAA')
AND t1.active_acct_ct <> 0
AND t1.efectv_dt = '2018-10-31'
AND (t2.last_change_dt <= '2018-10-31' AND (t2.to_dt is null OR t2.to_dt > '2018-10-31'))
AND (t3.last_change_dt <= '2018-10-31' AND (t3.to_dt is null OR t3.to_dt > '2018-10-31'))
''')
cust_base.registerTempTable("df1")
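One possible way to avoid duplicating the join keys at the source, assuming the Spark version in use supports SQL's USING clause, would be a sketch like the one below (not a drop-in rewrite: the WHERE clauses are omitted, and non-key columns that appear in more than one table would still be duplicated):
# USING keeps a single copy of each join key in the output,
# so cust_id would no longer be ambiguous downstream.
cust_base = sqlc.sql('''
SELECT DISTINCT *
FROM db.tbl1 AS t1
LEFT JOIN db.tbl2 AS t2 USING (acct_id)
LEFT JOIN db.tbl3 AS t3 USING (cust_id)
''')
cust_base.registerTempTable("df1")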
When checking the distinct count of cust_id, I get an error:
a = sqlc.sql('''
Select
count(distinct a.cust_id) as CT_ID
From df1 as a
''')
AnalysisException: "Reference 'cust_id' is ambiguous, could be: cust_id#7L,
cust_id#171L.; line 3 pos 15"
This is because the 'cust_id' field is present more than once as a result of the join.
I want to drop the duplicate columns from the resulting joined DF. Thanks in advance.
I can help with a function that finds the duplicate columns in a given dataframe.
Say the following is a dataframe with duplicate columns:
+------+----------------+----------+------+----------------+----------+
|emp_id|emp_joining_date|emp_salary|emp_id|emp_joining_date|emp_salary|
+------+----------------+----------+------+----------------+----------+
| 3| 2018-12-06| 92000| 3| 2018-12-06| 92000|
+------+----------------+----------+------+----------------+----------+
from collections import Counter

def finddups(cols):
    # Return the column names that appear more than once in the given list.
    return [name for name, count in Counter(cols).items() if count > 1]
>>> duplicatecols = finddups(df.columns)
>>> print(duplicatecols)
['emp_id', 'emp_joining_date', 'emp_salary']
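Finding the names is only half the job, because with duplicate names a call like df.select('emp_id') hits the same ambiguity error. One possible way to actually drop the extra copies (a sketch; the helper drop_dup_cols and its positional-suffix scheme are my own, not a Spark API) is to rename every column uniquely first, then keep only the first occurrence of each original name:
def drop_dup_cols(df):
    # Rename every column with a positional suffix so each can be addressed
    # unambiguously, then keep the first occurrence of each original name,
    # aliased back to its original name.
    cols = df.columns
    renamed = df.toDF(*['{}_{}'.format(c, i) for i, c in enumerate(cols)])
    seen = set()
    keep = []
    for i, c in enumerate(cols):
        if c not in seen:
            seen.add(c)
            keep.append(renamed['{}_{}'.format(c, i)].alias(c))
    return renamed.select(keep)

>>> deduped = drop_dup_cols(df)
>>> deduped.columns
['emp_id', 'emp_joining_date', 'emp_salary']
Renaming first is the key design point: duplicate names cannot be selected or dropped by name, but a positionally suffixed name is unique and can be aliased back afterwards.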