Improving list comparison speed in Python



I need help improving the speed of a script. It works fine at first, but it gets slower the longer it runs, and I always have to restart it to get back to full speed. So I really need to find a way to speed it up.

How the script works:

  1. It opens the saved .txt files skus_local (~100-400k lines) + keywords_local (~2 million lines)
  2. It gets the url, category pairs (a ~10k-line file) and loops over each url, category through steps 3, 5, 6, so the process repeats.
  3. The script scrapes 2 lists, new_skus (~400 values) + new_keywords (1k values max)
  4. The script checks new_skus against old_skus and creates a new upload_skus containing only the unique values.
  5. The same for new_keywords + old_keywords
  6. The same
  7. The script appends upload_skus and upload_keywords to their files

I can see steps 4, 5 (and maybe 6) causing the speed problem when comparing:

        try:
            f = open(settings['skus_local'],"r")
            old_skus=f.read().split("\n")[:-1]
            f.close()
            del f
        except:
            old_skus=[]
            f = open(settings['skus_local'],"w")
            f.close()
            del f
        skus_local_file = open(settings['skus_local'],"a")
        try:
            f = open(settings['keywords_local'], "r")
            old_keywords=f.read().split("\n")[:-1]
            f.close()
            del f
        except:
            old_keywords=[]
            f = open(settings['keywords_local'], "w")
            f.close()
            del f
        keywords_local_file = open(settings['keywords_local'],"a")

        csv_reader_counter = 0
        for category, url in csv.reader(fp):
            if not (csv_reader_counter == fp_counter):
                csv_reader_counter = csv_reader_counter + 1
                continue
            print url,category
            new_skus, new_keywords = ScraperJP.main(url)
            upload_skus=[]
            for sku in new_skus:
                if sku not in old_skus:
                    upload_skus.append(sku)
            del new_skus
            if upload_skus!=[]:
                insert_products.main(settings['admin_url'],settings['username'],settings['password'],upload_skus,category)
                for sku in upload_skus:
                    skus_local_file.write(sku+"\n")
                    old_skus.append(sku)
                skus_local_file.flush()
                del upload_skus
            upload_keywords=[]
            for urls in new_keywords:
                if urls not in old_keywords:
                    upload_keywords.append(urls)
            del new_keywords
            if upload_keywords!=[]:
                for keyword in upload_keywords:
                    keywords_local_file.write(keyword+"\n")
                    old_keywords.append(keyword)
                keywords_local_file.flush()
            del upload_keywords
            csv_reader_counter = csv_reader_counter + 1
            fp_counter = fp_counter + 1
            fl = open('lineno.txt',"w")
            fl.write(str(fp_counter))
            fl.close()
            gc.collect()
        os.remove('lineno.txt')
        skus_local_file.close()
        keywords_local_file.close()
        fp.close()
        del skus_local_file
        del keywords_local_file
        del fp
if __name__=='__main__':
    main()

Store the information as sets.

To check what is new, all you need is new_skus - old_skus.

Instead of lines like:

for sku in new_skus:
    if sku not in old_skus:
       upload_skus.append(sku)

you can use new_skus.difference(old_skus), which gives the elements that are in new_skus but not in old_skus.
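
Applied to your script, the whole inner loop collapses into one set operation. A minimal sketch, assuming old_skus has been loaded as a set and new_skus is the list returned by ScraperJP.main:

old_skus = set(old_skus)   # convert once, right after loading the file

# inside the loop: replaces the "for sku in new_skus" membership loop
upload_skus = set(new_skus).difference(old_skus)   # set lookups are O(1) average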

If you want to save the set, you can use pickle:

import pickle
s = {1,2,3,4}
with open("s.pick","wb") as f: # pickle it to file
    pickle.dump(s,f)
with open("s.pick","rb") as f1:
    un_p = pickle.loads(f1.read()) # unpickle and use
print un_p
set([1, 2, 3, 4])
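
In your script this would replace the read/split/except block at the top. A sketch, assuming settings['skus_local'] now points at a pickle file rather than a plain .txt file:

import pickle

try:
    with open(settings['skus_local'], "rb") as f:
        old_skus = pickle.load(f)   # the set written by the previous run
except (IOError, EOFError):
    old_skus = set()                # first run: no file yet, start empty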

You can also append objects to a file:

s2 = {4,5,6,7}
import pickle
with open("s.pick","ab") as f:
    pickle.dump(s2,f)

with open("s.pick","rb") as f1:
    s1 = pickle.load(f1)
    s2 = pickle.load(f1)
    print s1,s2
set([1, 2, 3, 4]) set([4, 5, 6, 7])
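
If you have appended an unknown number of sets this way, you can keep calling pickle.load until the file runs out; pickle raises EOFError at end of file. A small sketch:

import pickle

merged = set()
with open("s.pick", "rb") as f:
    while True:
        try:
            merged.update(pickle.load(f))   # merge each appended set
        except EOFError:
            break                           # no more pickled objects
print merged
set([1, 2, 3, 4, 5, 6, 7])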

An example of working with sets:

s1={1, 2, 3, 4}
s2={4, 5, 6, 7}
s3={8,9,10,11}
print s1.difference(s2)
print s1.union(s2,s3)
set([1, 2, 3]) # in s1 but not in s2
set([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) # all elements of s1, s2 and s3

To add the contents of one set to another, you can use update:

s1.update(s2) #  add contents of s2 to s1
print "updated s1 with contents of s2", s1
updated s1 with contents of s2 set([1, 2, 3, 4, 5, 6, 7])
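
Putting it all together, the hot part of your loop would look roughly like this. A sketch only; ScraperJP.main and insert_products.main are your own functions, and old_skus/old_keywords are assumed to already be sets:

new_skus, new_keywords = ScraperJP.main(url)

upload_skus = set(new_skus).difference(old_skus)
if upload_skus:
    insert_products.main(settings['admin_url'], settings['username'],
                         settings['password'], list(upload_skus), category)
    old_skus.update(upload_skus)        # set update replaces the append loop

upload_keywords = set(new_keywords).difference(old_keywords)
if upload_keywords:
    old_keywords.update(upload_keywords)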
