鉴于我有一个pandas系列,如果 all 均为nan,或者如果 all 值是零或nan。
例如,我想将以下系列中的NAN填充为零。
0 0
1 0
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
7 NaN
8 NaN
但是,我会不是想要填写以下系列:
0 0
1 0
2 2
3 0
4 NaN
5 NaN
6 NaN
7 NaN
8 NaN
我正在查看文档,似乎我可以使用pandas.series.value_counts确保值只有0和nan,然后简单地调用fillna(0(。set(s.unique((。astype(str(。
考虑到熊猫的强大程度,似乎有更好的方法可以做到这一点。有人有什么建议干净,有效地执行此操作吗?
潜在的解决方案,感谢CᴏʟᴅSᴘᴇᴇᴅ
if s.dropna().eq(0).all():
s = s.fillna(0)
如果仅0
和isna
进行比较,则只有NaN
S和0
,然后fillna
:
if ((s == 0) | (s.isna())).all():
s = pd.Series(0, index=s.index)
或比较唯一值:
if pd.Series(s.unique()).fillna(0).eq(0).all():
s = pd.Series(0, index=s.index)
@cᴏʟᴅsᴘᴇᴇᴅ解决方案,谢谢 - 比较NaN
s与dropna
的比较系列:
if s.dropna().eq(0).all():
s = pd.Series(0, index=s.index)
问题的解决方案 - 需要转换为string
s,因为与NaN
S相比的问题:
if set(s.unique().astype(str)).issubset(['0.0','nan']):
s = pd.Series(0, index=s.index)
时间:
s = pd.Series(np.random.choice([0,np.nan], size=10000))
In [68]: %timeit ((s == 0) | (s.isna())).all()
The slowest run took 4.85 times longer than the fastest. This could mean that an intermediate result is being cached.
1000 loops, best of 3: 574 µs per loop
In [69]: %timeit pd.Series(s.unique()).fillna(0).eq(0).all()
1000 loops, best of 3: 587 µs per loop
In [70]: %timeit s.dropna().eq(0).all()
The slowest run took 4.65 times longer than the fastest. This could mean that an intermediate result is being cached.
1000 loops, best of 3: 774 µs per loop
In [71]: %timeit set(s.unique().astype(str)).issubset(['0.0','nan'])
The slowest run took 5.78 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 157 µs per loop
为空值创建掩码。检查掩码的长度是否等于串联的长度(在这种情况下,串联为所有空值或空值(,或者非曲子的长度是否等于零。如果是这样,请使用系列中的原始索引创建一个新的零值系列。
nulls = s.isnull()
if len(nulls) == len(s) or s[~nulls].eq(0).all():
s = pd.Series(0, index=s.index)
时间
%%timeit s_ = pd.concat([s] * 100000)
nulls = s_.isnull()
if len(nulls) == len(s_) or s_[~nulls].eq(0).all():
s_ = pd.Series(0, index=s_.index)
# 100 loops, best of 3: 2.33 ms per loop
# OP's solution:
%%timeit s_ = pd.concat([s] * 100000)
if s_.dropna().eq(0).all():
s_ = s_.fillna(0)
# 10 loops, best of 3: 19.7 ms per loop
# @Jezrael's fastest solution:
%%timeit s_ = pd.concat([s] * 100000)
if set(s_.unique().astype(str)).issubset(['0.0','nan']):
s_ = pd.Series(0, index=s_.index)
# 1000 loops, best of 3: 4.58 ms per loop