I'm new to optimization and need help improving the runtime of this code. It accomplishes my task, but it takes a very long time. Any suggestions for improvement?
Here is the code:
def probabilistic_word_weighting(df, lookup):
    # instantiate a new placeholder for the class weights of each text sequence in the df
    class_probabilities = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
    for index, row in lookup.iterrows():
        if row.word in df.words.split():
            class_proba_ = row.class_proba.strip('][').split(', ')
            class_proba_ = [float(i) for i in class_proba_]
            class_probabilities = [a + b for a, b in zip(class_probabilities, class_proba_)]
    return class_probabilities
The two input DataFrames look like this:
df
index    words
1 i havent been back
2 but its
3 they used to get more closer
4 no way
5 when we have some type of a thing for
6 and she had gone to the doctor
7 suze
8 the only time the parents can call is
9 i didnt want to go on a cruise
10 people come aint got
lookup
index word class_proba
6231 been [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 5.27899487]
8965 havent [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 5.27899487]
3270 derive [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 5.27899487]
7817 a [0.0, 0.0, 7.451379, 6.552, 0.0, 0.0, 0.0, 0.0]
3452 hello [0.0, 0.0, 0.0, 0.0, 0.000155327, 0.0, 0.0, 0.0]
5112 they [0.0, 0.0, 0.00032289312, 0.0, 0.0, 0.0, 0.0, 0.0]
1012 time [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 5.27899487]
7468 some [0.000193199, 0.0, 0.0, 0.000212947, 0.0, 0.0, 0.0, 0.0]
6428 people [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 5.27899487]
5537 scuba [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 5.27899487]
What it does, essentially, is iterate over every row in lookup, which contains a word and its relative class weights. If the word is found in any of the text sequences in df.words, the class_proba of that lookup row is added to the class_probabilities variable assigned to each sequence in df.words. So for every iteration over the rows of lookup, it walks through every row of df.
How can I make this faster?
IIUC, you call this function with df.apply, but you could do it like this instead. The idea is not to redo the operations on the rows of lookup every time a matching word is found, but to do them only once, and then reshape df so that vectorized operations can be used.
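For reference, I assume the current call looks something like this, run once per row of df:

# assumed current usage: probabilistic_word_weighting is applied per row,
# so the whole lookup is re-parsed for every single text sequence
df['class_proba'] = df.apply(lambda row: probabilistic_word_weighting(row, lookup), axis=1)

That is O(len(df) * len(lookup)) with string parsing repeated in the inner loop, which is why it is slow.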
1: Reshape the column words of df with str.split, stack and to_frame to get a new row for each word:
s_df = df['words'].str.split(expand=True).stack().to_frame(name='split_word')
print (s_df.head(8))
    split_word
0 0          i
  1     havent
  2       been
  3       back
1 0        but
  1        its
2 0       they
  1       used
2: Reshape lookup with set_index on the column word, then str.strip, str.split and astype, to get a DataFrame indexed by word with each value of class_proba in its own column:
split_lookup = lookup.set_index('word')['class_proba'].str.strip('][')\
                     .str.split(', ', expand=True).astype(float)
print (split_lookup.head())
0 1 2 3 4 5 6 7
word
been 0.0 0.0 0.000000 0.000 0.000000 0.0 0.0 5.278995
havent 0.0 0.0 0.000000 0.000 0.000000 0.0 0.0 5.278995
derive 0.0 0.0 0.000000 0.000 0.000000 0.0 0.0 5.278995
a 0.0 0.0 7.451379 6.552 0.000000 0.0 0.0 0.000000
hello 0.0 0.0 0.000000 0.000 0.000155 0.0 0.0 0.000000
3: Merge both, drop the no-longer-needed column, then groupby with level=0 (the original index of df) and sum:
df_proba = s_df.merge(split_lookup, how='left',
                      left_on='split_word', right_index=True)\
               .drop('split_word', axis=1)\
               .groupby(level=0).sum()
print (df_proba.head())
0 1 2 3 4 5 6 7
0 0.000000 0.0 0.000000 0.000000 0.0 0.0 0.0 10.55799
1 0.000000 0.0 0.000000 0.000000 0.0 0.0 0.0 0.00000
2 0.000000 0.0 0.000323 0.000000 0.0 0.0 0.0 0.00000
3 0.000000 0.0 0.000000 0.000000 0.0 0.0 0.0 0.00000
4 0.000193 0.0 7.451379 6.552213 0.0 0.0 0.0 0.00000
4: Finally, convert to lists with to_numpy and tolist, and assign back to the original df:
df['class_proba'] = df_proba.to_numpy().tolist()
print (df.head())
words
0 i havent been back
1 but its
2 they used to get more closer
3 no way
4 when we have some type of a thing for
class_proba
0 [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 10.55798974]
1 [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
2 [0.0, 0.0, 0.00032289312, 0.0, 0.0, 0.0, 0.0, ...
3 [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
4 [0.000193199, 0.0, 7.451379, 6.552212946999999...
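Putting the four steps together, here is a minimal end-to-end sketch of the vectorized version (the function name probabilistic_word_weighting_vec is just for illustration):

import pandas as pd

def probabilistic_word_weighting_vec(df, lookup):
    # step 1: one row per word, keeping the original sequence index as level 0
    s_df = df['words'].str.split(expand=True).stack().to_frame(name='split_word')
    # step 2: parse the stringified lists in lookup once, indexed by word
    split_lookup = lookup.set_index('word')['class_proba'].str.strip('][')\
                         .str.split(', ', expand=True).astype(float)
    # step 3: map each word to its weights and sum per original sequence;
    # words missing from lookup become NaN, which sum() treats as 0
    df_proba = s_df.merge(split_lookup, how='left',
                          left_on='split_word', right_index=True)\
                   .drop('split_word', axis=1)\
                   .groupby(level=0).sum()
    # step 4: back to one list of 8 class weights per text sequence
    return df_proba.to_numpy().tolist()

df['class_proba'] = probabilistic_word_weighting_vec(df, lookup)

Every step operates on whole columns at once, so lookup is parsed exactly once instead of once per row of df.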