I'm fairly new to Python and to multiprocessing. I've been trying to write a program that combines Excel files into one. I'd like to use multiprocessing to read all the files so that I can merge them afterwards. I've tried several approaches, but none of them worked. Any suggestions are welcome. Thank you!

Here is my full code, without multiprocessing:
import pandas as pd

# Convert the CSV to Excel
read_csv = pd.read_csv('Data/SampleCollectionID.csv')
excel_writer = pd.ExcelWriter('Data/converted_CSV_file.xlsx')
read_csv.to_excel(excel_writer, index=False)
excel_writer.close()  # ExcelWriter.save() is deprecated in recent pandas

# Combine the files column-wise
filenames = ['Data/converted_CSV_file.xlsx', 'Data/LabData.xlsx']
frame = pd.DataFrame()
for i in filenames:
    df = pd.read_excel(i)
    df = df.loc[:, ~df.columns.duplicated()].copy()
    frame = pd.concat([frame, df], axis=1)
frame.to_excel('combinedData.xlsx', index=False)
I ran into the same challenge, and the following code works well for me:
import glob

import pandas as pd
from joblib import Parallel, delayed

files = glob.glob(my_path + "*.xlsx")  # my_path is your data folder, e.g. 'Data/'

def loop(file):
    return pd.read_excel(file)

df = Parallel(n_jobs=-1, verbose=10)(delayed(loop)(file) for file in files)
df = pd.concat(df, ignore_index=True)
This code is adapted from this excellent post: https://towardsdatascience.com/read-excel-files-with-python-1000x-faster-407d07ad0ed8