Python, multiprocessing: how do I optimize my code to make it faster?



I am using Python. I have 100 zip files, and each zip file contains more than 100 XML files. I want to parse the XML files and write their contents into CSV files.

import csv
import zipfile
from multiprocessing import Process
from xml.etree.ElementTree import fromstring

def parse_xml_for_csv1(data, writer1):
    root = fromstring(data)
    for node in root.iter('name'):
        # writerow expects a sequence; wrap the single value in a list
        writer1.writerow([node.get('value')])

def create_csv1():
    with open('output1.csv', 'w', newline='') as f1:
        writer1 = csv.writer(f1)
        for i in range(1, 101):  # xml1.zip .. xml100.zip
            with zipfile.ZipFile('xml' + str(i) + '.zip') as z:
                # z.namelist() contains more than 100 xml files
                for finfo in z.namelist():
                    data = z.read(finfo)
                    parse_xml_for_csv1(data, writer1)

def create_csv2():
    with open('output2.csv', 'w', newline='') as f2:
        writer2 = csv.writer(f2)
        for i in range(1, 101):
            ...

if __name__ == "__main__":
    p1 = Process(target=create_csv1)
    p2 = Process(target=create_csv2)
    p1.start()
    p2.start()
    p1.join()
    p2.join()

Please tell me: how can I optimize my code to make it run faster?

You just need to define a single method that takes parameters, and split the processing of the 100 .zip files across a given number of threads or processes. The more processes you add, the more CPU cores you use, so with more than 2 processes it may well run faster (although at some point disk I/O will probably become the bottleneck).

In the code below I can switch to 4 or 10 processes without copying and pasting any code, and each process handles different zip files.

Your current code processes the same 100 files twice in parallel: that is even slower than no multiprocessing at all!

def create_csv(start_index, step):
    with open('output{0}.csv'.format(start_index // step), 'w', newline='') as f1:
        writer1 = csv.writer(f1)
        for i in range(start_index, start_index + step):
            with zipfile.ZipFile('xml' + str(i) + '.zip') as z:
                # z.namelist() contains more than 100 xml files
                for finfo in z.namelist():
                    data = z.read(finfo)
                    parse_xml_for_csv1(data, writer1)

if __name__ == "__main__":
    nb_files = 100
    nb_processes = 2   # raise to 4 or 10; pick a divisor of nb_files so no chunk runs past the last file
    step = nb_files // nb_processes
    lp = []
    for start_index in range(1, nb_files, step):
        p = Process(target=create_csv, args=(start_index, step))
        p.start()
        lp.append(p)
    for p in lp:
        p.join()
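
If you would rather not manage the Process objects by hand, the same chunking works with a worker pool. Here is a minimal sketch using multiprocessing.Pool, assuming the create_csv function and the imports shown above; the pool also caps the number of concurrent workers for you.

from multiprocessing import Pool

if __name__ == "__main__":
    nb_files = 100
    nb_processes = 4   # pick a divisor of nb_files
    step = nb_files // nb_processes
    chunks = [(s, step) for s in range(1, nb_files, step)]
    with Pool(nb_processes) as pool:
        # starmap unpacks each (start_index, step) tuple into create_csv's arguments
        pool.starmap(create_csv, chunks)

Since the work is partly disk-bound, benchmark 2, 4 and more workers rather than assuming more is always faster.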
