Count how many times a column's value changes to a new value that persists for n or more consecutive rows, with groupby and a condition, in pandas



I have a pandas DataFrame:

import pandas as pd
foo = pd.DataFrame({'id': ['a','a','a','a','a','b','b','b','b','b', 'c','c','c','c'], 
'week': [1,2,3,4,5,3,4,5,6,7,1,2,3,4],
'col': [1,1,2,2,1,4,3,3,3,4, 6,6,7,7],
'confidence': ['h','h','h','l','h','h','h','h','h','h', 'h','h','l','l']})

I want to count how many times (n_changes) the value of col changes, together with the previous value (from) and the new value (to), but only when the new value then appears n or more times consecutively and at least one of those n consecutive occurrences has confidence 'h'. I want this grouped by id.

For n=3, the output should look like this:

id from to n_changes
b  4    3  1

Because:

  • For b, 3 appears after 4 for 3 or more consecutive times, and at least one of those 3 or more consecutive occurrences has confidence h

For n=2, the output should look like this:

id from to n
a  1    2  1
b  4    3  1

Because:

  • For a, 2 appears after 1 for 2 or more consecutive times, and at least one of those 2 or more consecutive occurrences has confidence h
  • For b, 3 appears after 4 for 2 or more consecutive times, and at least one of those 2 or more consecutive occurrences has confidence h
  • c does not appear in the output because, even though 7 appears after 6 for 2 or more consecutive times, none of those 2 or more consecutive occurrences has confidence h

Is there a way to do this? Any ideas?
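
To make the rule concrete, here is a minimal brute-force sketch of how I read it (count_changes_bruteforce is a hypothetical helper name, and it treats "at least one 'h'" as anywhere in the new value's run, which matches the expected outputs above):

import pandas as pd
from itertools import groupby

def count_changes_bruteforce(df, n):
    # split each id's rows into runs of identical col values, then count a
    # change (from -> to) whenever the new run is at least n rows long and
    # contains at least one 'h' confidence
    records = []
    for id_, grp in df.groupby('id', sort=False):
        runs = [(col, [conf for _, conf in items])
                for col, items in groupby(zip(grp['col'], grp['confidence']),
                                          key=lambda t: t[0])]
        for (prev_col, _), (new_col, confs) in zip(runs, runs[1:]):
            if len(confs) >= n and 'h' in confs:
                records.append((id_, prev_col, new_col))
    return (pd.DataFrame(records, columns=['id', 'from', 'to'])
              .value_counts().rename('n_changes').reset_index())

count_changes_bruteforce(foo, n=2)   # foo as defined above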

Update

I have already tried this for n=2:

test = foo.copy()   # work on a copy of the frame defined above
test['next_col'] = test.groupby(['id'])['col'].transform('shift', periods=-1)
test['next_next_col'] = test.groupby(['id'])['col'].transform('shift', periods=-2)
test['next_confidence'] = test.groupby(['id'])['confidence'].transform('shift', periods=-1)
test['next_next_confidence'] = test.groupby(['id'])['confidence'].transform('shift', periods=-2)
test['n_h'] = (test['next_confidence'] == 'h').astype(int) + (test['next_next_confidence'] == 'h').astype(int)
final_test = test[test.eval('next_col == next_next_col and n_h >= 1 and col != next_col')]
final_test['helper'] = 1
final_test['n'] = final_test.groupby(['id', 'col', 'next_col'])['helper'].transform('sum')
final_test[['id', 'col', 'next_col', 'n']].rename(columns={'col': 'from', 'next_col': 'to'})

which produces this output:

   id  from   to  n
1   a     1  2.0  1
5   b     4  3.0  1

This is correct, but is there a more efficient way to do it?
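
For reference, the same shift-based idea could be generalized to any n along these lines (a sketch, count_changes_shift is a hypothetical name; it still needs n shifted helper columns per field, hence the question about efficiency):

import pandas as pd

def count_changes_shift(df, n):
    # generalization of the n=2 attempt above: look at the next n values of
    # col and confidence within each id
    g = df.groupby('id')
    nxt_col = pd.concat([g['col'].shift(-k) for k in range(1, n + 1)], axis=1)
    nxt_conf = pd.concat([g['confidence'].shift(-k) for k in range(1, n + 1)], axis=1)
    change = df['col'] != nxt_col.iloc[:, 0]                        # value changes at the next row
    long_enough = nxt_col.notna().all(axis=1) & nxt_col.nunique(axis=1).eq(1)
    has_h = nxt_conf.eq('h').any(axis=1)                            # at least one 'h' among the next n rows
    keep = df[change & long_enough & has_h].assign(to=nxt_col.iloc[:, 0])
    return (keep.groupby(['id', 'col', 'to']).size()
                .rename('n_changes').reset_index()
                .rename(columns={'col': 'from'}))

count_changes_shift(foo, n=2)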

Here is one approach. The key idea is to build a run_no value that identifies each run of consecutive col values (within a given id). Note that there is no groupby(...).apply(some_python_function), so this should remain fast even on a large df.

# first, let's establish a "run_no" which is distinct for each
# run of same 'col' for a given 'id'.
# we also set a 'is_h' for later .any() operation, plus a few useful columns:
cols = ['id', 'col']
z = df.assign(
    from_=df.groupby('id')['col'].shift(1, fill_value=-1),
    to=df['col'],
    run_no=(df[cols] != df[cols].shift(1)).any(axis=1).cumsum(),
    is_h=df['confidence'] == 'h')

# next, make a mask that selects the rows we are interested in
gb = z.groupby(['id', 'run_no'])
mask = (gb.size() >= n) & (gb['is_h'].any() & (gb.first()['from_'] != -1))

# finally, we select according to that mask, and add n_changes:
out = gb.first().loc[mask].reset_index()
out = out.assign(
    n_changes=out.groupby(['id', 'from_', 'to']).size().values
)[['id', 'from_', 'to', 'n_changes']]

The result, for n = 2:

>>> out
  id  from_  to  n_changes
0  a      1   2          1
1  b      4   3          1

n = 1:

>>> out
  id  from_  to  n_changes
0  a      1   2          1
1  a      2   1          1
2  b      4   3          1
3  b      3   4          1
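
For reuse, the steps above can be wrapped into a single function (my packaging of the exact same operations, not part of the original answer):

import pandas as pd

def count_changes(df, n):
    # the run_no approach above, packaged into one call
    cols = ['id', 'col']
    z = df.assign(
        from_=df.groupby('id')['col'].shift(1, fill_value=-1),
        to=df['col'],
        run_no=(df[cols] != df[cols].shift(1)).any(axis=1).cumsum(),
        is_h=df['confidence'] == 'h')
    gb = z.groupby(['id', 'run_no'])
    mask = (gb.size() >= n) & gb['is_h'].any() & (gb.first()['from_'] != -1)
    out = gb.first().loc[mask].reset_index()
    return out.assign(
        n_changes=out.groupby(['id', 'from_', 'to']).size().values
    )[['id', 'from_', 'to', 'n_changes']]

count_changes(foo, n=2)   # foo is the question's sample frame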

Note: if you are interested in the intermediate values, you can of course inspect z (which is independent of n) and mask (which depends on n). For example, for z:

>>> z
   id  week  col confidence  from_  to  run_no   is_h
0   a     1    1          h     -1   1       1   True
1   a     2    1          h      1   1       1   True
2   a     3    2          h      1   2       2   True
3   a     4    2          l      2   2       2  False
4   a     5    1          h      2   1       3   True
5   b     3    4          h     -1   4       4   True
6   b     4    3          h      4   3       5   True
7   b     5    3          h      3   3       5   True
8   b     6    3          h      3   3       5   True
9   b     7    4          h      3   4       6   True
10  c     1    6          h     -1   6       7   True
11  c     2    6          h      6   6       7   True
12  c     3    7          l      6   7       8  False
13  c     4    7          l      7   7       8  False
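
mask is not printed in the original run, but a quick (hypothetical) way to look at it for n = 2 is:

# mask is a boolean Series indexed by (id, run_no); True marks the kept runs
print(mask[mask])   # for n = 2 this flags a's run of col=2 and b's run of col=3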
