In PySpark, we are trying to join two dataframes: one dataframe (itself produced by an earlier join) with another dataframe.
result3 = result2.join(df3, df3.mktg_id == result2."max(acct_mkt_id)") \
    .select('result2.trans_dt', 'result2.channel', 'result2.loc_id', 'result2.meta_reason',
            'max(acct_mkt_id)', 'max(campaign_name)') \
    .groupBy('result2.trans_dt', 'result2.channel', 'result2.loc_id', 'result2.meta_reason',
             'max(acct_mkt_id)', 'max(campaign_name)') \
    .agg(sum("result2.bskt_sum"))
Error message:
SyntaxError: invalid syntax
However, I get a syntax error. The schema of result2 does contain a field named max(acct_mkt_id). If I drop the quotes and reference it directly instead:
result3 = result2.join(df3, df3.mktg_id == result2.max(acct_mkt_id)) \
    .select('result2.trans_dt', 'result2.channel', 'result2.loc_id', 'result2.meta_reason',
            'max(acct_mkt_id)', 'max(campaign_name)') \
    .groupBy('result2.trans_dt', 'result2.channel', 'result2.loc_id', 'result2.meta_reason',
             'max(acct_mkt_id)', 'max(campaign_name)') \
    .agg(sum("result2.bskt_sum"))
Error message -
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/hdp/2.5.3.0-37/spark/python/pyspark/sql/dataframe.py", line 842, in __getattr__
"'%s' object has no attribute '%s'" % (self.__class__.__name__, name))
AttributeError: 'DataFrame' object has no attribute 'max'
Could you tell me what I am missing, or how to work around this error?
Neither attempt can work: result2."max(acct_mkt_id)" is not valid Python syntax, and attribute access (result2.<name>) only resolves column names that are valid Python identifiers, which max(acct_mkt_id) is not. In a case like this, use col to reference the column. In Scala:
import org.apache.spark.sql.functions.col

col("max(acct_mkt_id)")

or, equivalently, via the Dataset method, which needs no import:

result2.col("max(acct_mkt_id)")
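The PySpark counterpart is the col function from pyspark.sql.functions, which builds a Column from any name, including ones that are not valid Python identifiers:

from pyspark.sql.functions import col

col("max(acct_mkt_id)")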
Alternatively, in PySpark you can simply reference the column with bracket notation:
result2["max(acct_mkt_id)"]