Multiprocessing and deadlock issues in Python 3



I'm having an issue with multiprocessing. I'm afraid it's a fairly simple fix and I'm just not implementing multiprocessing correctly. I've been researching what could be causing the problem, and all I've really found is that people recommend using a queue to prevent it, but that doesn't seem to be stopping it (then again, I may just be implementing the queue incorrectly). I've been at this for a few days now and was hoping I could get some help. Thanks in advance!

import csv
import multiprocessing as mp
import os
import queue
import sys
import time
import connections
import packages
import profiles

def execute_extract(package, profiles, q):
    # This is the package execution for the extract
    # It fires fine and will print the starting message below
    started_at = time.monotonic()
    print(f"Starting {package.packageName}")
    try:
        oracle_connection = connections.getOracleConnection(profiles['oracle'], 1)
        engine = connections.getSQLConnection(profiles['system'], 1)
        path = os.path.join(os.getcwd(), 'csv_data', package.packageName + '.csv')
        cursor = oracle_connection.cursor()
        if os.path.exists(path):
            os.remove(path)
        f = open(path, 'w')
        chunksize = 100000
        offset = 0
        row_total = 0
        csv_writer = csv.writer(f, delimiter='^', lineterminator='\n')
        # I am having to do some data cleansing.  I know this is not the most efficient way to do this,
        # but currently it is what I am limited to
        while True:
            cursor.execute(package.query + f'\r\n OFFSET {offset} ROWS\r\n FETCH NEXT {chunksize} ROWS ONLY')
            test = cursor.fetchone()
            if test is None:
                break
            else:
                while True:
                    row = cursor.fetchone()
                    if row is None:
                        break
                    else:
                        new_row = list(row)
                        new_row.append(package.sourceId[0])
                        new_row.append('')
                        i = 0
                        for item in new_row:
                            if type(item) == float:
                                new_row[i] = int(item)
                            elif type(item) == str:
                                new_row[i] = item.encode('ascii', 'replace')
                            i += 1
                        row = tuple(new_row)
                        csv_writer.writerow(row)
                        row_total += 1
            offset += chunksize
        f.close()
        # I know that execution is at least reaching this point.  I can watch the CSV files grow as more
        # and more rows are added to them for all the packages.  What I never get are either the success
        # message or error message below, and there are never any entries placed in the tables
        query = f"BULK INSERT {profiles['system'].database.split('_')[0]}_{profiles['system'].database.split('_')[1]}_test_{profiles['system'].database.split('_')[2]}.{package.destTable} FROM '{path}' WITH (FIELDTERMINATOR='^', ROWTERMINATOR='\\n');"
        engine.cursor().execute(query)
        engine.commit()
        end_time = time.monotonic() - started_at
        print(f"{package.packageName} has completed.  Total rows inserted: {row_total}.  Total execution time: {end_time} seconds\n")
        os.remove(path)
    except Exception as e:
        print(f'An error has occurred for package {package.packageName}.\r\n {repr(e)}')
    finally:
        # Here is where I am trying to add an item to the queue so the get method in the main def will
        # pick it up and remove it from the queue
        q.put(f'{package.packageName} has completed')
        if oracle_connection:
            oracle_connection.close()
        if engine:
            engine.cursor().close()
            engine.close()

if __name__ == '__main__':
    # Setting mp creation type
    ctx = mp.get_context('spawn')
    q = ctx.Queue()
    # For the ETL I generate a list of class objects that hold relevant information.  profs contains a
    # list of connection objects (credentials, connection strings, etc).  packages contains the information
    # to run the extract (destination tables, query string, package name for logging, etc)
    profs = profiles.get_conn_vars(sys.argv[1])
    packages = packages.get_etl_packages(profs)
    processes = []
    # I'm trying to track both individual package execution time and overall time so I can get an estimate
    # on rows per second
    start_time = time.monotonic()
    sqlConn = connections.getSQLConnection(profs['system'])
    # Here I'm executing a SQL command to truncate all my staging tables to ensure they are empty and will
    # not generate any key violations
    sqlConn.execute(
        f"USE [{profs['system'].database.split('_')[0]}_{profs['system'].database.split('_')[1]}_test_{profs['system'].database.split('_')[2]}]\r\nExec Sp_msforeachtable @command1='Truncate Table ?',@whereand='and Schema_Id=Schema_id(''my_schema'')'")
    # Here is where I start generating a process per package to try and get all packages to run simultaneously
    for package in packages:
        p = ctx.Process(target=execute_extract, args=(package, profs, q,))
        processes.append(p)
        p.start()
    # Here is my attempt at managing the queue.  This is a monstrosity of fixes I've tried to get this to work
    results = []
    while True:
        try:
            result = q.get(False, 0.01)
            results.append(result)
        except queue.Empty:
            pass
        allExited = True
        for t in processes:
            if t.exitcode is None:
                allExited = False
                break
        if allExited and q.empty():
            break
    for p in processes:
        p.join()
    # Closing out the end time and writing the overall execution time in minutes.
    end_time = time.monotonic() - start_time
    print(f'Total execution time of {end_time / 60} minutes.')

I can't say for certain why you are experiencing a deadlock (and I am not at all convinced it is related to your queue management), but I can say for certain that you can simplify your queue-management logic if you do one of the following two things:

Method 1

Make sure your worker function, execute_extract, puts something on the result queue even in the case of an exception (I would recommend putting the Exception object itself). Then your whole main-process loop that begins with while True: and tries to get the results can be replaced with:

results = [q.get() for _ in range(len(processes))]

You are then guaranteed a fixed number of messages on the queue, equal to the number of processes created.
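
A minimal sketch of what Method 1 could look like; do_work here is a hypothetical stand-in for the body of your extract, since only the queue discipline matters:

def execute_extract(package, profiles, q):
    try:
        do_work(package, profiles)  # hypothetical: the extract/BULK INSERT logic
        q.put(f'{package.packageName} has completed')
    except Exception as e:
        # On failure, put the exception itself on the queue so the main
        # process still receives exactly one message per worker.
        q.put(e)

The main process can then tell successes from failures with isinstance(result, Exception).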

Method 2 (simpler)

Simply reverse the order in which you wait for the subprocesses to complete and process the result queue. You don't know how many messages will be on the queue, but you don't process the queue until all the processes have returned. So however many messages end up on the queue, you will get them all. Just retrieve them until the queue is empty:

for p in processes:
    p.join()
results = []
while not q.empty():
    results.append(q.get())
At this point I would normally suggest that you use a multiprocessing pool class such as multiprocessing.Pool, which does not require an explicit queue to retrieve results. But make either of these changes (I would suggest Method 2, since I cannot see how it could result in a deadlock given that only the main process is running at that point) and see whether your problem goes away. I am not, however, guaranteeing that your issue is not elsewhere in your code. While your code is overly complicated and inefficient, it is not obviously "wrong". At least you will know whether your problem lies elsewhere.
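
For reference, a rough sketch of what the Pool-based version might look like. It assumes execute_extract is refactored to take (package, profiles) and to return its status string rather than writing to a queue, and the package list is renamed etl_packages here to avoid shadowing the packages module:

import multiprocessing as mp

if __name__ == '__main__':
    profs = profiles.get_conn_vars(sys.argv[1])
    etl_packages = packages.get_etl_packages(profs)
    with mp.Pool() as pool:
        # starmap blocks until every worker has returned, so no queue
        # bookkeeping is needed to collect the results.
        results = pool.starmap(execute_extract,
                               [(pkg, profs) for pkg in etl_packages])
    print(results)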

My question for you is: what does doing everything with a context obtained from ctx = mp.get_context('spawn'), rather than just calling the methods on the multiprocessing module itself, buy you? If your platform supports the fork call, that would be the default context; wouldn't you want to use it?
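
For comparison, the module-level API with no explicit context uses the platform's default start method (fork on Linux; spawn on Windows, and on macOS since Python 3.8), so a context object is only needed when you want to force a non-default method:

import multiprocessing as mp

def worker(q):
    q.put('done')

if __name__ == '__main__':
    # No get_context(): these use the platform's default start method.
    q = mp.Queue()
    p = mp.Process(target=worker, args=(q,))
    p.start()
    print(q.get())
    p.join()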