How can I speed up multiple str.contains searches over millions of rows?

I have a dataframe of store names that I am trying to standardize. A small sample to test with:

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'store': ['McDonalds', 'Lidls', 'Lidl New York 123', 'KFC', 'Lidi Berlin',
              'Wallmart LA 90210', 'Aldi', 'London Lidl', 'Aldi627',
              'mcdonaldsabc123', 'Mcdonald_s', 'McDonalds12345', 'McDonalds5555',
              'McDonalds888', 'Aldi123', 'KFC-786', 'KFC-908', 'McDonalds511',
              'GerALDInes Shop'],
    'standard': np.nan,  # standard starts out as all NaN
})
store  standard
0           McDonalds       NaN
1               Lidls       NaN
2   Lidl New York 123       NaN
3                 KFC       NaN
4         Lidi Berlin       NaN
5   Wallmart LA 90210       NaN
6                Aldi       NaN
7         London Lidl       NaN
8             Aldi627       NaN
9     mcdonaldsabc123       NaN
10         Mcdonald_s       NaN
11     McDonalds12345       NaN
12      McDonalds5555       NaN
13       McDonalds888       NaN
14            Aldi123       NaN
15            KFC-786       NaN
16            KFC-908       NaN
17       McDonalds511       NaN
18    GerALDInes Shop       NaN

I set up a dictionary of regexes to search the strings and write the standardized version of each store name into the standard column. This works on the small dataframe:

import re

# set up the dictionary of standard name -> regex
regex_dict = {
    "McDonalds": r'(mcdonalds|mcdonald_s)',
    "Lidl": r'(lidl|lidi)',
    "Wallmart": r'wallmart',
    "KFC": r'KFC',
    "Aldi": r'(\baldi\b|\baldi\d+)',
}
# loop through the dictionary, using str.contains
for regname, regex_formula in regex_dict.items():
    df.loc[df['store'].str.contains(regex_formula, na=False, flags=re.I), 'standard'] = regname
print(df)
store   standard
0           McDonalds  McDonalds
1               Lidls       Lidl
2   Lidl New York 123       Lidl
3                 KFC        KFC
4         Lidi Berlin       Lidl
5   Wallmart LA 90210   Wallmart
6                Aldi       Aldi
7         London Lidl       Lidl
8             Aldi627       Aldi
9     mcdonaldsabc123  McDonalds
10         Mcdonald_s  McDonalds
11     McDonalds12345  McDonalds
12      McDonalds5555  McDonalds
13       McDonalds888  McDonalds
14            Aldi123       Aldi
15            KFC-786        KFC
16            KFC-908        KFC
17       McDonalds511  McDonalds
18    GerALDInes Shop        NaN

The problem is that I have around six million rows to standardize, and the regex dictionary is much larger than shown here (many different store names, some misspelled, and so on).

What I want to do is, on each pass of the loop, run str.contains only against the rows that have not yet been standardized and ignore the ones that have. The idea is to shrink the search space on every pass and so cut the overall processing time.

I have tested indexing on the standard column so that str.contains is only applied to rows where standard is NaN, but it does not give any real speedup. It still takes time to work out which rows are NaN before str.contains is applied.

Here is my attempt at cutting the processing time of each pass:

for regname, regex_formula in regex_dict.items():
    # only apply str.contains to rows where standard is still NaN
    df.loc[df['standard'].isnull() & df['store'].str.contains(regex_formula, na=False, flags=re.I), 'standard'] = regname

This works... but using it on my six million rows makes no real difference to the speed.
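(A sketch of why, as far as I can tell: df['store'].str.contains above still scans the whole column on every pass; the isnull() mask is only ANDed in afterwards. Slicing down to the unresolved rows before calling str.contains would actually shrink the regex scan. Untested at full scale:)

for regname, regex_formula in regex_dict.items():
    # slice first, so the regex only scans rows that are still NaN
    sub = df.loc[df['standard'].isnull(), 'store']
    hits = sub[sub.str.contains(regex_formula, na=False, flags=re.I)]
    df.loc[hits.index, 'standard'] = regname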

Is it possible to speed this up on a six-million-row dataframe?
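(For reference, a minimal sketch of one more direction: compile all the patterns into a single alternation of named groups, so each row is scanned once instead of once per dictionary entry. The standardize helper is made up for this sketch; the group names just reuse the dictionary keys, which happen to be valid identifiers.)

import re
import numpy as np

# one big alternation: (?P<McDonalds>...)|(?P<Lidl>...)|...
combined = re.compile(
    '|'.join(f'(?P<{name}>{pattern})' for name, pattern in regex_dict.items()),
    flags=re.I,
)

def standardize(store):
    m = combined.search(store)
    if m is None:
        return np.nan
    # exactly one named branch matched; its name is the standard name
    return next(name for name, value in m.groupdict().items() if value is not None)

df['standard'] = df['store'].map(standardize)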

Another approach is to extract the match first and then replace, as below, although your loop approach is still better.

We need to change regex_dict around slightly:

regex_dict = {
    r'mcdonalds|mcdonald_s': "McDonalds",
    r'lidl|lidi': "Lidl",
    r'wallmart': "Wallmart",
    r'kfc': "KFC",
    r'\baldi\b|\baldi\d+': "Aldi",
}

df['store'].str.lower().str.extract(
    r'(' + '|'.join(regex_dict.keys()) + r')', expand=False
).replace(regex_dict, regex=True)
0    McDonalds
1         Lidl
2         Lidl
3          KFC
4         Lidl
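This does one scan of the column for the combined pattern instead of one scan per dictionary entry. To keep the result (a sketch; the expression above only displays it), assign it back, with unmatched rows left as NaN:

df['standard'] = (
    df['store']
      .str.lower()  # lowercase once so the lowercase patterns match
      .str.extract(r'(' + '|'.join(regex_dict.keys()) + r')', expand=False)
      .replace(regex_dict, regex=True)  # map matched token to its standard name
)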

I managed to cut the time required by about 40% with this. It is the best I could do.

I created an empty dataframe called fixed_df to append the newly standardized rows to, then dropped those same rows from the original dataframe at the end of each pass. As each store gets standardized, the search space shrinks on every pass and fixed_df grows. At the end, fixed_df should hold all the original rows, now standardized, and the original df should be empty.

# create an empty df to collect the standardized rows
fixed_df = pd.DataFrame()
# loop through the dictionary
for regname, regex_formula in regex_dict.items():
    # search for the regex, write the standardized name into the standard column
    df.loc[df['store'].str.contains(regex_formula, na=False, flags=re.I), 'standard'] = regname
    # get the index of the rows that were just fixed
    ind = df[df['standard'] == regname].index
    # append the fixed rows to the new df (append returns a new frame, so reassign)
    fixed_df = fixed_df.append(df[df.index.isin(ind)].copy())
    # remove the processed rows from the original df
    df = df[~df.index.isin(ind)].copy()
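(One note on this: DataFrame.append returns a new frame each time, recopying everything accumulated so far on every pass, and it was removed entirely in pandas 2.0. A sketch of the same idea that collects the pieces in a list and concatenates once at the end; rows that never match any pattern remain in df:)

pieces = []
for regname, regex_formula in regex_dict.items():
    hits = df['store'].str.contains(regex_formula, na=False, flags=re.I)
    matched = df[hits].copy()
    matched['standard'] = regname   # standardize the matched rows
    pieces.append(matched)          # cheap: the list just holds references
    df = df[~hits]                  # shrink the search space for the next pass
fixed_df = pd.concat(pieces)        # single concatenation at the end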
