When using multiprocessing, I want to update a manager.dict(), and it is being updated... but some of the data gets skipped during the update. What can be done about that? It's something like this...
from multiprocessing import Process, Manager

manager = Manager()
a = manager.dict()
a['url_info'] = manager.list()

def parse_link(link):
    # parse the link; link_parser returns a dict
    pared_info = link_parser(link)
    a['url_info'].append(pared_info)

# links contains a lot of URLs that need to be parsed
links = ["https://url.com/1", "https://url.com/2", "https://url.com/3"]

processes = []
for link in links:
    p = Process(target=parse_link, args=(link,))
    p.start()
    processes.append(p)

for process in processes:
    process.join()
link_parser() is a function that returns a dict containing information about the scraped/parsed web page.
> print(list(a['url_info']))
> ['#info_1', '#info_3']
Here the multiprocessing program skipped adding #info_2 to the list (a.k.a. array). Please help me.
Here is some code that demonstrates an improved structure for what you are trying to do.
Obviously it doesn't have the details of link_parser(), but you'll get the idea.
from concurrent.futures import ProcessPoolExecutor
from multiprocessing import Manager
from functools import partial

LINKS = ['abc', 'def', 'ghi']
KEY = 'url_info'

def parse_link(a, link):
    a[KEY].append(link)

def main():
    with Manager() as manager:
        a = manager.dict()
        a[KEY] = manager.list()
        with ProcessPoolExecutor() as executor:
            executor.map(partial(parse_link, a), LINKS)
        print(a[KEY])

if __name__ == '__main__':
    main()
Output:
['abc', 'def', 'ghi']
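Since each worker just produces one dict, you may not need shared Manager state at all: executor.map() collects each worker's return value and preserves input order. Here is a minimal sketch of that alternative, with a placeholder link_parser (an assumption, since the real one isn't shown):

```python
from concurrent.futures import ProcessPoolExecutor

def link_parser(link):
    # Placeholder for the real parser: returns a dict of page info.
    return {'url': link, 'status': 'parsed'}

def parse_links(links):
    # Each worker returns its dict; map() yields results in input
    # order, so no shared Manager list is needed and nothing can
    # be silently skipped by proxy updates.
    with ProcessPoolExecutor() as executor:
        return list(executor.map(link_parser, links))

if __name__ == '__main__':
    links = ["https://url.com/1", "https://url.com/2", "https://url.com/3"]
    print(parse_links(links))
```

Returning results instead of mutating shared proxies also makes failures easier to see, because any exception raised in link_parser is re-raised when the map() results are consumed.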