Pandas GroupBy & Filter on Count



I want to capture the category values that occur more than a certain threshold number of times:

df:
ticket_id,  category,  amount --> some more columns
1020        cat1       1000
1022        cat1       55
1023        cat1       12291
1120        cat2       140
1121        cat3       1250 
(… way more rows: mostly cat5 (~1020) and cat1 (~98), some cat3 and cat4, and no further cat2)
>>> df.groupby('category')['amount'].count()
category
cat1       100
cat2       1
cat3       6
cat4       2
cat5       1020

I want to get, as a list, the categories whose count is > 20. Currently I am doing:

>>> t = df.groupby('category')['amount'].agg(['count', 'mean'])
>>> t[t['count'] > 10].index.values
array(['cat1', 'cat5'])

Now this works, but I am sure it can be shorter: just to be able to reference the count column I need at least 2 aggregation functions, and on top of that I need 1 variable and 2 lines. I would prefer something like:

>>> df.groupby('category')['amount'].apply(lambda x: x.count() > 10)

but that produces:

cat1     True
etc.
>>> df.groupby('category')['amount'].apply(lambda x: x[x.count() > 10])
fails completely.
>>> df.groupby('category')['amount'].count().nlargest(2)
gives the correct list, but only because I already happen to know there are 2 qualifying categories.

What is the most elegant / best-performing solution?

You can make this cleaner by working only with the result of the initial groupby and indexing into it:

Setup

import random

import pandas as pd

categories = ['cat1', 'cat2', 'cat3', 'cat4', 'cat5']
dct = {'categories': [random.choice(categories) for _ in range(75)], 'amount': [random.randint(1, 10) for _ in range(75)]}
df = pd.DataFrame(dct)

Groupby count

s = df.groupby('categories').amount.count()
# categories
# cat1    12
# cat2    10
# cat3    21
# cat4    17
# cat5    15
# Name: amount, dtype: int64

Nice clean indexing:

s[s > 20].index.tolist()
['cat3']
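If you prefer to avoid the intermediate variable `s`, the same count-then-filter logic can be collapsed into one chained expression by passing a callable to `.loc`. A minimal sketch on a tiny frame with known counts (the threshold of 2 is arbitrary here):

```python
import pandas as pd

# Small frame with known group sizes: cat1 appears 3 times, cat2 once.
df = pd.DataFrame({
    'categories': ['cat1', 'cat1', 'cat1', 'cat2'],
    'amount': [10, 20, 30, 40],
})

# One chained expression: count per category, keep counts above the
# threshold via a callable passed to .loc, then take the index as a list.
result = (
    df.groupby('categories').amount.count()
      .loc[lambda s: s > 2]
      .index.tolist()
)
# → ['cat1']
```

The callable receives the count Series itself, so no name is ever bound to it.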

You can use a lambda expression in the boolean indexing:

Borrowing @user2483203's setup:

import numpy as np
import pandas as pd

np.random.seed(123)
categories = ['cat1', 'cat2', 'cat3', 'cat4', 'cat5']
dct = {'categories': [np.random.choice(categories) for _ in range(100)], 'amount': [np.random.randint(1, 10) for _ in range(100)]}
df = pd.DataFrame(dct)

Using value_counts and boolean indexing:

df['categories'].value_counts()[lambda x: x>20].index.tolist()

Output:

['cat4', 'cat2']
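If what you actually need is the matching rows rather than just the category names, a `transform`-based boolean mask keeps it in one expression; `groupby(...).filter` is the equivalent group-wise spelling. A sketch on a small frame (the threshold of 2 is arbitrary):

```python
import pandas as pd

df = pd.DataFrame({
    'categories': ['cat1', 'cat1', 'cat1', 'cat2'],
    'amount': [10, 20, 30, 40],
})

# transform('count') broadcasts each group's count back onto its own rows,
# so the resulting mask filters the original frame directly.
frequent_rows = df[df.groupby('categories')['amount'].transform('count') > 2]

# Equivalent group-wise version: keep whole groups whose size exceeds 2.
same_rows = df.groupby('categories').filter(lambda g: len(g) > 2)
```

Both return the three `cat1` rows; `transform` is usually faster on many small groups because it avoids calling a Python function per group.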
