How can I optimize the speed of my Python compression code?



I wrote a compression program and tested it on a 10 KB text file, which took no less than 3 minutes. Then I tested it with a 1 MB file, which is the assessment my teacher assigned, and it took over half an hour. Compared with my classmates, mine takes unusually long. It might be my computer or my code, but I have no idea. Does anyone know any tips or shortcuts to make my code run faster? My compression code is below; if there are any faster ways to do the loops etc., please give me an answer (:

(By the way, my code does work, so I'm not asking for corrections, just enhancements or tips, thanks!)

import re # regular expressions, used to find word patterns in the text file
import os # used for anything referring to directories/files
from collections import Counter # used to count how many times each word occurs
size1 = os.path.getsize('file.txt') # find the size (in bytes) of your file, INCLUDING SPACES
print('The size of your file is ', size1)
words = re.findall(r'\w+', open('file.txt').read())
wordcounts = Counter(words) # counts every word, capitals included
common100 = [x for x, it in wordcounts.most_common(100)] # identifies the 100 most common words
keyword = []
kcount = []
z = dict(wordcounts)
for key, value in z.items():
    keyword.append(key) # adds each word to the keyword list
    kcount.append(value) # adds each word's count to the kcount list
characters =['$','#','@','!','%','^','&','*','(',')','~','-','/','{','[', ']', '+','=','}','|', '?','cb',
         'dc','fd','gf','hg','kj','mk','nm','pn','qp','rq','sr','ts','vt','wv','xw','yx','zy','bc',
         'cd','df','fg','gh','jk','km','mn','np','pq','qr','rs','st','tv','vw','wx','xy','yz','cbc',
         'dcd','fdf','gfg','hgh','kjk','mkm','nmn','pnp','qpq','rqr','srs','tst','vtv','wvw','xwx',
         'yxy','zyz','ccb','ddc','ffd','ggf','hhg','kkj','mmk','nnm','ppn','qqp','rrq','ssr','tts','vvt',
         'wwv','xxw','yyx','zzy','cbb','dcc','fdd','gff','hgg','kjj','mkk','nmm','pnn','qpp','rqq','srr',
         'tss','vtt','wvv','xww','yxx','zyy','bcb','cdc','dfd','fgf','ghg','jkj','kmk','mnm','npn','pqp',
         'qrq','rsr','sts','tvt','vwv','wxw','xyx','yzy','QRQ','RSR','STS','TVT','VWV','WXW','XYX','YZY',
        'DC','FD','GF','HG','KJ','MK','NM','PN','QP','RQ','SR','TS','VT','WV','XW','YX','ZY','BC',
         'CD','DF','FG','GH','JK','KM','MN','NP','PQ','QR','RS','ST','TV','VW','WX','XY','YZ','CBC',
         'DCD','FDF','GFG','HGH','KJK','MKM','NMN','PNP','QPQ','RQR','SRS','TST','VTV','WVW','XWX',
         'YXY','ZYZ','CCB','DDC','FFD','GGF','HHG','KKJ','MMK','NNM','PPN','QQP','RRQ','SSR','TTS','VVT',
         'WWV','XXW','YYX','ZZY','CBB','DCC','FDD','GFF','HGG','KJJ','MKK','NMM','PNN','QPP','RQQ','SRR',
         'TSS','VTT','WVV','XWW','YXX','ZYY','BCB','CDC','DFD','FGF','GHG','JKJ','KMK','MNM','NPN','PQP',] #characters which I can use
symbols_words = []
char = 0
for i in common100:
    symbols_words.append(characters[char]) # pairs each common word with the next unused symbol
    char = char + 1
print("Compression has now started")
f = 0
g = 0
no = 0
while no < 100:
    for i in common100:
        for w in words:
            if i == w and len(i)>1: # if the values in common100 are ACTUALLY in words
                place = words.index(i) # find exactly where the most common words are in the text
                symbols = symbols_words[common100.index(i)] # assigns one symbol to one common word
                words[place] = symbols # replaces the word with the symbol
                g = g + 1
    no = no + 1

string = words
stringMade = ' '.join(map(str, string))#makes the list into a string so you can put it into a text file
file = open("compression.txt", "w")
file.write(stringMade) # writes everything in the variable 'words' into the new file
file.close()
size2 = os.path.getsize('compression.txt')
no1 = int(size1)
no2 = int(size2)
print('Compression has finished.')
print('Your original file size has been compressed by', 100 - ((100/no1) * no2), 'percent.'
      ' The size of your file now is ', size2)

Using something like

word_substitutes = dict(zip(common100, characters))

will give you a dict that maps the common words to their corresponding symbols.

Then you can simply iterate over the words:

# Iterate over all the words
# Use enumerate because we're going to modify the word in-place in the words list
for word_idx, word in enumerate(words):
    # If the current word is in the `word_substitutes` dict, then we know it's in the
    # 'common' words and can be replaced by its symbol
    if word in word_substitutes:
        # Replaces the word in-place
        replacement_symbol = word_substitutes[word]
        words[word_idx] = replacement_symbol

This will give you much better performance, because a dict lookup for the common-word-to-symbol mapping is an amortized constant-time (O(1)) hash-table operation rather than a linear scan. The overall complexity therefore becomes roughly O(N), instead of the O(N^3)-like behaviour you get from the two nested loops combined with the .index() calls, each of which is itself a linear scan.
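As an aside, the same replacement pass can be collapsed into a single list comprehension. This is a minimal sketch, assuming `words` and `word_substitutes` are defined as above; `dict.get` returns the symbol when the word has one and falls back to the word itself otherwise:

# One pass over the words; each lookup is an amortized O(1) dict operation.
# Words without a substitute are kept unchanged via the default argument.
words = [word_substitutes.get(w, w) for w in words]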

The first thing I see that is bad for performance is this:

for i in common100:
    for w in words:
        if i == w and len(i)>1:
            ...

What you are doing here is checking whether the word w is in your list of 100 common words. However, this check can be done in O(1) time by using a set, instead of looping over all 100 common words for every single word.

common_words = set(common100)
for w in words:
    if w in common_words:
        ...
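To see the difference concretely, here is a rough sketch using timeit; the word list is made up purely for illustration. List membership has to scan the elements one by one, while set membership is a single hash lookup:

import timeit

# Illustrative data: 100 dummy 'common words'.
common_list = ['word%d' % i for i in range(100)]
common_set = set(common_list)

# Looking up a word near the end of the list forces a near-full scan;
# the set lookup takes roughly constant time regardless of position.
print(timeit.timeit("'word99' in common_list", globals=globals(), number=100000))
print(timeit.timeit("'word99' in common_set", globals=globals(), number=100000))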

In general you would proceed as follows:

  • Measure how much time each "part" of your program takes. You can use a profiler (for example cProfile from the standard library, as shown in the sketch after this list), or simply add a few times.append(time.time()) calls to your code and compute the differences. Then you know which part of your code is slow.
  • See whether you can improve the algorithm of the slow parts. The answer above shows one possibility for speeding it up. The while no < 100 loop looks suspicious and can probably be improved. This step requires understanding the algorithm you use; take care to choose the best data structure for your use case.
  • If you cannot use a better algorithm (because you already compute things the best way possible), you need to speed up the computation itself. numpy helps with numeric work, with Cython you can essentially compile Python code to C, and numba uses LLVM to compile it just in time.
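A minimal profiling sketch for the first point, assuming your compression code has been wrapped in a hypothetical function called compress(); it runs the function under the profiler and prints the ten functions with the largest cumulative time:

import cProfile
import pstats

# Run the (hypothetical) compress() function under the profiler
# and dump the raw statistics to a file.
cProfile.run('compress()', 'profile_stats')

# Load the statistics and print the 10 most expensive functions
# by cumulative time.
pstats.Stats('profile_stats').sort_stats('cumulative').print_stats(10)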
