Cannot join two RDDs in pyspark



I have two dataframes, df1 and df2, but when I try to join them it does not work. Let me lay out the schema of each dataframe and give sample output for each.

df1
Out[160]: DataFrame[BibNum: string, CallNumber: string, CheckoutDateTime: string, ItemBarcode: string, ItemCollection: string, ItemType: string]
[Row(BibNum=u'BibNum', CallNumber=u'CallNumber', CheckoutDateTime=u'CheckoutDateTime', ItemBarcode=u'ItemBarcode', ItemCollection=u'ItemCollection', ItemType=u'ItemType'),
 Row(BibNum=u'1842225', CallNumber=u'MYSTERY ELKINS1999', CheckoutDateTime=u'05/23/2005 03:20:00 PM', ItemBarcode=u'10035249209', ItemCollection=u'namys', ItemType=u'acbk')]

df2    
DataFrame[Author: string, BibNum: string, FloatingItem: string, ISBN: string, ItemCollection: string, ItemCount: string, ItemLocation: string, ItemType: string, PublicationDate: string, Publisher: string, ReportDate: string, Subjects: string, Title: string]
[Row(Author=u'Author', BibNum=u'BibNum', FloatingItem=u'FloatingItem', ISBN=u'ISBN', ItemCollection=u'ItemCollection', ItemCount=u'ItemCount', ItemLocation=u'ItemLocation', ItemType=u'ItemType', PublicationDate=u'PublicationYear', Publisher=u'Publisher', ReportDate=u'ReportDate', Subjects=u'Subjects', Title=u'Title'),
 Row(Author=u"O'Ryan| Ellie", BibNum=u'3011076', FloatingItem=u'Floating', ISBN=u'1481425730| 1481425749| 9781481425735| 9781481425742', ItemCollection=u'ncrdr', ItemCount=u'1', ItemLocation=u'qna', ItemType=u'jcbk', PublicationDate=u'2014', Publisher=u'Simon Spotlight|', ReportDate=u'09/01/2017', Subjects=u'Musicians Fiction| Bullfighters Fiction| Best friends Fiction| Friendship Fiction| Adventure and adventurers Fiction', Title=u"A tale of two friends / adapted by Ellie O'Ryan ; illustrated by Tom Caulfield| Frederick Gardner| Megan Petasky| and Allen Tam.")]

When I try to join the two with the following command:

df3 = df1.join(df2, df1.BibNum == df2.BibNum)

There is no error, but the resulting dataframe looks like this, with overlapping columns:

DataFrame[BibNum: string, CallNumber: string, CheckoutDateTime: string, ItemBarcode: string, ItemCollection: string, ItemType: string, Author: string, BibNum: string, FloatingItem: string, ISBN: string, ItemCollection: string, ItemCount: string, ItemLocation: string, ItemType: string, PublicationDate: string, Publisher: string, ReportDate: string, Subjects: string, Title: string]

Finally, after I get df3 (the joined dataframe), when I try df3.take(2) the error list index out of range occurs. Ultimately, I want to work out which ItemLocations hold the books that are checked out the most (by CheckoutDateTime).

You need to join the dataframes on their common columns; otherwise the join produces two conflicting columns of the same name, one from each dataframe.

common_cols = [x for x in df1.columns if x in df2.columns]
df3 = df1.join(df2, on=common_cols, how='outer')
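
For illustration, here is a minimal, self-contained reproduction of both behaviours (a sketch: the SparkSession setup and the df2 row values are invented, only the column names and the df1 sample row come from the question above):

from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").getOrCreate()

# Toy rows: column names match the question's schemas; df2's values are made up.
df1 = spark.createDataFrame(
    [("1842225", "MYSTERY ELKINS1999", "05/23/2005 03:20:00 PM", "namys", "acbk")],
    ["BibNum", "CallNumber", "CheckoutDateTime", "ItemCollection", "ItemType"],
)
df2 = spark.createDataFrame(
    [("1842225", "Elkins| Aaron", "qna", "namys", "acbk")],
    ["BibNum", "Author", "ItemLocation", "ItemCollection", "ItemType"],
)

# Joining on an expression keeps BOTH copies of every shared column:
dup = df1.join(df2, df1.BibNum == df2.BibNum)
print(dup.columns)  # BibNum, ItemCollection and ItemType each appear twice

# Joining on a list of column names keeps a single copy of each shared column:
common_cols = [x for x in df1.columns if x in df2.columns]
df3 = df1.join(df2, on=common_cols, how="inner")
print(df3.columns)  # each shared column appears exactly once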

You can use an outer join or a left join, depending on what you need. Please do not post multiple questions for the same problem; you already have an active answer at: While trying to join two tables, index error: list index out of range occurs in pyspark
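
As for the original goal (finding which ItemLocation holds the most-checked-out books), once the join succeeds a sketch along these lines should work; it assumes, as the question's data suggests, that each row of the checkout file contributes one CheckoutDateTime event:

from pyspark.sql import functions as F

# Count checkout events per location and rank the locations.
checkouts_by_location = (
    df3.groupBy("ItemLocation")
       .agg(F.count("CheckoutDateTime").alias("num_checkouts"))
       .orderBy(F.desc("num_checkouts"))
)
checkouts_by_location.show(5)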
