Conditional cumulative sum based on multiple pandas columns



> I have a dataframe with multiple "stacks" and their corresponding "lengths":

import pandas as pd

df = pd.DataFrame({'stack-1-material': ['rock', 'paper', 'paper', 'scissors', 'rock'],
                   'stack-2-material': ['rock', 'paper', 'rock', 'paper', 'scissors'],
                   'stack-1-length': [3, 1, 1, 2, 3],
                   'stack-2-length': [3, 1, 3, 1, 2]})
stack-1-material stack-2-material  stack-1-length  stack-2-length
0             rock             rock               3               3
1            paper            paper               1               1
2            paper             rock               1               3
3         scissors            paper               2               1
4             rock         scissors               3               2

I am trying to create a separate column for each material that tracks the cumulative sum of the lengths, regardless of which "stack" they come from. I tried using groupby, but I could only get the cumulative sum into a single column. Here is what I am looking for:

stack-1-material stack-2-material  stack-1-length  stack-2-length  rock_cumsum  paper_cumsum  scissors_cumsum
0             rock             rock               3               3            6             0                0
1            paper            paper               1               1            6             2                0
2            paper             rock               1               3            9             3                0
3         scissors            paper               2               1            9             4                2
4             rock         scissors               3               2           12             4                4 
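For reference, a minimal sketch of the kind of groupby attempt described above (the question does not show the attempted code, so the stacking step here is an assumption). It illustrates why a plain groupby ends up with the running total in a single long-format column rather than one column per material:

# assumption: stack the two (material, length) pairs into long form, then groupby
long = pd.concat([
    df[['stack-1-material', 'stack-1-length']].set_axis(['material', 'length'], axis=1),
    df[['stack-2-material', 'stack-2-length']].set_axis(['material', 'length'], axis=1),
]).sort_index(kind='stable')
# the cumulative sum lives in one column keyed by material, not one column per material
long['cumsum'] = long.groupby('material')['length'].cumsum()
print(long)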

> You can use the material columns as a mask on the length columns, then sum along the columns and take the cumulative sum, for each material.

import numpy as np

# separate the material and length columns
material = df.filter(like='material').to_numpy()
length = df.filter(like='length')
# get all unique materials
l_mat = np.unique(material)
# iterate over unique materials: mask the lengths, sum per row, then cumsum
for mat in l_mat:
    df[f'{mat}_cumsum'] = length.where(material == mat).sum(axis=1).cumsum()
print(df)
stack-1-material stack-2-material  stack-1-length  stack-2-length  
0             rock             rock               3               3   
1            paper            paper               1               1   
2            paper             rock               1               3   
3         scissors            paper               2               1   
4             rock         scissors               3               2   
rock_cumsum  paper_cumsum  scissors_cumsum  
0          6.0           0.0              0.0  
1          6.0           2.0              0.0  
2          9.0           3.0              0.0  
3          9.0           4.0              2.0  
4         12.0           4.0              4.0  
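The new columns come out as floats because where() leaves NaN in the masked-out cells. If integer columns are wanted, as in the expected output above, a final cast should do it (a small follow-up sketch, not part of the answer itself):

# cast the cumulative-sum columns back to int (after sum() they contain no NaN)
for mat in l_mat:
    df[f'{mat}_cumsum'] = df[f'{mat}_cumsum'].astype(int)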

First, reverse the column names so that we can use wide_to_long to reshape the DataFrame.

Then take the cumulative sum within each material and determine the per-row maximum for each material. We can then reshape this back to wide form, ffill, replace the remaining NaN with 0, and concatenate it back onto the original.

# reverse each column name: 'stack-1-material' -> 'material-stack-1'
df.columns = ['-'.join(x[::-1]) for x in df.columns.str.rsplit('-', n=1)]
# reshape to long form: one (material, length) pair per stack per row
res = (pd.wide_to_long(df.reset_index(), stubnames=['material', 'length'],
                       i='index', j='whatever', suffix='.*')
         .sort_index(level=0))
#                material  length
#index whatever                  
#0     -stack-1      rock       3
#      -stack-2      rock       3
#1     -stack-1     paper       1
#      -stack-2     paper       1
#2     -stack-1     paper       1
#      -stack-2      rock       3
#3     -stack-1  scissors       2
#      -stack-2     paper       1
#4     -stack-1      rock       3
#      -stack-2  scissors       2
# running total of length within each material
res['csum'] = res.groupby('material')['length'].cumsum()
# per original row, keep the largest running total for each material, unstack back
# to wide form, forward-fill materials not updated on a row, and treat unseen as 0
res = (res.groupby(['index', 'material'])['csum'].max()
          .unstack(-1).ffill().fillna(0, downcast='infer')
          .add_suffix('_cumsum'))
df = pd.concat([df, res], axis=1)
material-stack-1 material-stack-2  length-stack-1  length-stack-2  paper_cumsum  rock_cumsum  scissors_cumsum
0             rock             rock               3               3             0            6                0
1            paper            paper               1               1             2            6                0
2            paper             rock               1               3             3            9                0
3         scissors            paper               2               1             4            9                2
4             rock         scissors               3               2             4           12                4
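Since the column names were reversed in place at the start, they can be flipped back afterwards if the original 'stack-1-material' naming is preferred (a follow-up sketch, not part of the original answer):

# undo the earlier rename: 'material-stack-1' -> 'stack-1-material';
# the *_cumsum columns contain no '-' and are left untouched
df.columns = ['-'.join(x[::-1]) for x in df.columns.str.split('-', n=1)]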
