Fast, efficient pandas groupby sum/mean without aggregation



Grouping and aggregating in pandas is simple and fast. However, applying a simple function that pandas already implements in C (such as a group sum) without aggregating is much slower, at least the way I do it, because of the lambda function.

# Form data
>>> import numpy as np
>>> import pandas as pd
>>> df = pd.DataFrame(np.random.random((100,3)),columns=['a','b','c'])
>>> df['g'] = np.random.randint(0,3,100)
>>> df.head()
a         b         c  g
0  0.901610  0.643869  0.094082  1
1  0.536437  0.836622  0.763244  1
2  0.647989  0.150460  0.476552  0
3  0.206455  0.319881  0.690032  2
4  0.153557  0.765174  0.377879  1
# groupby and apply and aggregate
>>> df.groupby('g')['a'].sum()
g
0    17.177280
1    15.395264
2    17.668056
Name: a, dtype: float64
# groupby and apply without aggregation
>>> df.groupby('g')['a'].transform(lambda x: x.sum())
0     15.395264
1     15.395264
2     17.177280
3     17.668056
4     15.395264
         ...
95    15.395264
96    17.668056
97    15.395264
98    17.668056
99    17.177280
Name: a, Length: 100, dtype: float64

So the lambda gives me the functionality I need, but the performance is poor.

>>> %timeit df.groupby('g')['a'].sum()
1.11 ms ± 143 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
>>> %timeit df.groupby('g')['a'].transform(lambda x:x.sum())
4.01 ms ± 699 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

This becomes a problem on larger datasets. I assume there is a faster, more efficient way to get this functionality.

You are probably looking for

df.groupby('g')['a'].transform('sum')

which is indeed faster than the version with the lambda:

import numpy as np
import pandas as pd
import timeit

df = pd.DataFrame(np.random.random((100, 3)), columns=['a', 'b', 'c'])
df['g'] = np.random.randint(0, 3, 100)

def groupby():
    df.groupby('g')['a'].sum()

def transform_apply():
    df.groupby('g')['a'].transform(lambda x: x.sum())

def transform():
    df.groupby('g')['a'].transform('sum')

print('groupby', timeit.timeit(groupby, number=10))
print('lambda transform', timeit.timeit(transform_apply, number=10))
print('transform', timeit.timeit(transform, number=10))

Output:

groupby 0.010655807999999989
lambda transform 0.029328375000000073
transform 0.01493376600000007
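As a further option (my own sketch, not benchmarked above): you can aggregate once and broadcast the per-group result back onto the rows with `Series.map`, which produces the same values as `transform('sum')`:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.random((100, 3)), columns=['a', 'b', 'c'])
df['g'] = np.random.randint(0, 3, 100)

# Cython-backed groupby transform: per-group sum aligned to every row
sums = df.groupby('g')['a'].transform('sum')

# Equivalent: aggregate once, then map each row's group label to its sum
mapped = df['g'].map(df.groupby('g')['a'].sum())

# Both approaches give identical per-row values
assert np.allclose(sums.to_numpy(), mapped.to_numpy())
```

Which variant wins can depend on the number of groups and rows, so it is worth timing both on your real data.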
