I'm building a Flask MethodView-driven API. For a particular endpoint, I use the request data to kick off a potentially long-running command. Rather than waiting for the command to finish, I wrap it in a multiprocessing.Process, call start, and return HTTP 202 to the user along with a URL they can use to monitor the status of the process.
from flask import abort, jsonify, request
from flask.views import MethodView
from multiprocessing import Process


class EndPointAPI(MethodView):

    def __init__(self):
        """On init, filter requests missing a JSON body."""
        # Check for a JSON payload ("except" is a reserved word,
        # so the attribute is named "exempt" here)
        self.exempt = ["GET", "PUT", "DELETE"]
        if (request.method not in self.exempt) and not request.json:
            abort(400)

    def _long_running_function(self, json_data):
        """
        In this function, I use the input JSON data
        to write a script to the file system, then
        use subprocess.run to execute it.
        """
        return

    def post(self):
        # Get input data
        json_data = request.json
        # Kick off the long-running function without waiting for it
        p = Process(target=self._long_running_function, args=(json_data,))
        p.start()
        response = {
            "result": "job accepted",
            "links": {
                "href": "/monitor_job/",
            },
        }
        return jsonify(response), 202
It looks like the processes started in the post method become zombies once they finish, but I can't figure out how to track and clean them up properly without blocking the parent method. I tried implementing the monitoring thread suggested in "Python: Joining a process without blocking the parent". As I understand it, that approach runs a separate thread that watches a FIFO queue; the process handle is put on the queue before the parent function returns. I tried an implementation (below), but it looks like you can't pass a Process object into the Thread because it holds a protected AuthenticationString attribute.
Traceback (most recent call last):
| File "/opt/miniconda3/envs/m137p3/lib/python3.6/multiprocessing/queues.py", line 234, in _feed
| obj = _ForkingPickler.dumps(obj)
| File "/opt/miniconda3/envs/m137p3/lib/python3.6/multiprocessing/reduction.py", line 51, in dumps
| cls(buf, protocol).dump(obj)
| File "/opt/miniconda3/envs/m137p3/lib/python3.6/multiprocessing/process.py", line 291, in __reduce__
| 'Pickling an AuthenticationString object is '
| TypeError: Pickling an AuthenticationString object is disallowed for security reasons
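The failure is easy to reproduce outside Flask. Here's a minimal sketch (the job function is just a stand-in for my real workload):

import time
from multiprocessing import Process, Queue

def job():
    time.sleep(1)

if __name__ == '__main__':
    q = Queue()
    p = Process(target=job)
    p.start()
    # put() hands the Process object to the queue's feeder thread,
    # which tries to pickle it and prints the TypeError above to stderr
    q.put(p)
    p.join()
    time.sleep(1)  # give the feeder thread time to attempt the pickle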
Here is my implementation of joining a process without blocking the parent. I have no idea whether it works, because the error above brings the whole system down before it gets that far. Any thoughts or suggestions on how to responsibly launch these processes without blocking the calling method would be very much appreciated.
from threading import Thread
from multiprocessing import Process, Queue
from flask import abort, jsonify, request
from flask.views import MethodView


class Joiner(Thread):

    def __init__(self, q):
        super().__init__()
        self.__q = q

    def run(self):
        while True:
            child = self.__q.get()
            if child is None:
                return
            child.join()


class EndPointAPI(MethodView):

    def __init__(self):
        """On init, filter requests missing a JSON body."""
        self._jobs = Queue()
        self._babysitter = Joiner(self._jobs)
        self._babysitter.start()
        # Check for a JSON payload
        self.exempt = ["GET", "PUT", "DELETE"]
        if (request.method not in self.exempt) and not request.json:
            abort(400)

    def _long_running_function(self, json_data):
        """
        In this function, I use the input JSON data
        to write a script to the file system, then
        use subprocess.run to execute it.
        """
        return

    def post(self):
        # Get input data
        json_data = request.json
        # Kick off the long-running function, then hand the process
        # to the babysitter thread for joining
        p = Process(target=self._long_running_function, args=(json_data,))
        p.start()
        self._jobs.put(p)
        response = {
            "result": "job accepted",
            "links": {
                "href": "/monitor_job/",
            },
        }
        return jsonify(response), 202
You were so close :) Everything looks good except for one thing: you are using a multiprocessing.Queue to store the running processes so that the Joiner instance can join them later. From the docs you'll learn the following:

Note: When an object is put on a queue, the object is pickled and a background thread later flushes the pickled data to an underlying pipe.

That is, the processes get serialized when they are put on the queue, which produces this error:

TypeError: Pickling an AuthenticationString object is disallowed for security reasons

This happens because every process has a unique authentication key. The key is a byte string that can be thought of as a password; it is of type multiprocessing.process.AuthenticationString and cannot be pickled.
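You can see this for yourself on any process, including the main one; a quick standalone illustration:

import pickle
from multiprocessing import current_process

# Every process carries an authentication key; it is a bytes
# subclass whose __reduce__ deliberately refuses to pickle
key = current_process().authkey
print(type(key))  # <class 'multiprocessing.process.AuthenticationString'>

try:
    pickle.dumps(key)
except TypeError as e:
    print(e)  # Pickling an AuthenticationString object is disallowed ...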
The solution is simple: just use a queue.Queue instance to store the long-running processes. Since queue.Queue is a plain in-process data structure, nothing gets pickled. Here is a working example:
#!/usr/bin/env python3

import os
import time
from queue import Queue
from threading import Thread
from multiprocessing import Process


class Joiner(Thread):

    def __init__(self):
        super().__init__()
        self.workers = Queue()

    def run(self):
        while True:
            worker = self.workers.get()
            if worker is None:
                break
            worker.join()


def do_work(t):
    pid = os.getpid()
    print('Process', pid, 'STARTED')
    time.sleep(t)
    print('Process', pid, 'FINISHED')


if __name__ == '__main__':
    joiner = Joiner()
    joiner.start()

    for t in range(1, 6, 2):
        p = Process(target=do_work, args=(t,))
        p.start()
        joiner.workers.put(p)

    joiner.workers.put(None)
    joiner.join()
Output:
Process 14498 STARTED
Process 14500 STARTED
Process 14499 STARTED
Process 14498 FINISHED
Process 14499 FINISHED
Process 14500 FINISHED
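Transplanted back into your endpoint, the fix might look something like the sketch below. One assumption on my part: since Flask instantiates a MethodView on every request, the Joiner should live at module level rather than in __init__, or you would spawn a new babysitter thread per request. Adjust the wiring to fit your app.

from queue import Queue
from threading import Thread
from multiprocessing import Process
from flask import jsonify, request
from flask.views import MethodView


class Joiner(Thread):

    def __init__(self):
        # daemon=True so the reaper thread doesn't block interpreter shutdown
        super().__init__(daemon=True)
        self.workers = Queue()

    def run(self):
        while True:
            worker = self.workers.get()
            if worker is None:
                break
            worker.join()


# One shared reaper for the whole app, not one per request
joiner = Joiner()
joiner.start()


class EndPointAPI(MethodView):

    def _long_running_function(self, json_data):
        ...  # write the script and execute it with subprocess.run

    def post(self):
        json_data = request.json
        p = Process(target=self._long_running_function, args=(json_data,))
        p.start()
        joiner.workers.put(p)  # queue.Queue: no pickling involved
        response = {
            "result": "job accepted",
            "links": {
                "href": "/monitor_job/",
            },
        }
        return jsonify(response), 202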