Python memory, Queue, and threading errors

I'm currently writing a script that reads reddit comments from a large file (5 GB compressed, ~30 GB of data once read in). My script reads the comments, checks them for some text, parses them, and sends them to a queue function (running in a separate thread). No matter what I do, I always get a MemoryError on a specific iteration (number 8162735, if it matters at all). And I can't seem to handle the error; Windows just keeps closing Python when it hits it. Here is my script:

import ujson
from tqdm import tqdm
import bz2
import json
import threading
import spacy
import Queue
import time

nlp = spacy.load('en')

def iter_comments(loc):
    # Stream the bz2 dump line by line and yield each comment's body.
    with bz2.BZ2File(loc) as file_:
        for i, line in enumerate(file_):
            yield ujson.loads(line)['body']

objects = iter_comments('RC_2015-01.bz2')
q = Queue.Queue()
f = open("reddit_dump.bin", 'wb')

def worker():
    # Pull serialized parses off the queue and write them to disk.
    while True:
        item = q.get()
        f.write(item)
        q.task_done()

for i in range(0, 2):
    t = threading.Thread(target=worker)
    t.daemon = True
    t.start()

def finish_parse(comment):
    global q
    try:
        comment_parse = nlp(unicode(comment))
        comment_bytes = comment_parse.to_bytes()
        q.put(comment_bytes)
    except MemoryError:
        print "MemoryError with comment {0}, waiting for Queue to empty".format(comment)
        time.sleep(2)
    except AssertionError:
        print "AssertionError with comment {0}, skipping".format(comment)

for comment in tqdm(objects):
    comment = str(comment.encode('ascii', 'ignore'))
    if ">" in comment:
        # Parse each matching comment in its own thread.
        c_parse_thread = threading.Thread(target=finish_parse, args=(comment,))
        c_parse_thread.start()

q.join()
f.close()

Does anyone know what I'm doing wrong?

It doesn't look like the problem is in your code, but rather in the data. Have you tried skipping that iteration?

x = 0
for comment in tqdm(objects):
    x += 1
    if x != 8162735:  # skip the iteration that triggers the MemoryError
        comment = str(comment.encode('ascii', 'ignore'))
        if ">" in comment:
            c_parse_thread = threading.Thread(target=finish_parse, args=(comment,))
            c_parse_thread.start()
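
As a side note, the q in the original script is unbounded, so serialized parses can pile up in memory faster than the two writer threads drain them. Giving the queue a maxsize makes q.put() block until the writer catches up, which caps memory use without the time.sleep(2) workaround. A minimal sketch of that idea, keeping the rest of the script unchanged (the maxsize of 1000 is an arbitrary choice):

import Queue
import threading

# Bounded queue: q.put() blocks once 1000 items are pending, so the
# parse threads wait for the writer instead of letting serialized
# parses accumulate without limit.
q = Queue.Queue(maxsize=1000)
f = open("reddit_dump.bin", 'wb')

def worker():
    # A single writer thread also avoids two threads interleaving
    # writes into the same file.
    while True:
        item = q.get()
        f.write(item)
        q.task_done()

t = threading.Thread(target=worker)
t.daemon = True
t.start()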
