So far, I have only used multiprocessing and multi-threading on functions that return a result at the end. I know about `multiprocessing.Queue` and `multiprocessing.Queue.get()`, but I don't understand how to apply them to a data loader. I'm struggling with the following task:
```python
def data_loader():
    for _ in range(10**6):
        # calculates for some seconds
        yield result

for data in data_loader():
    train_AI(data)
    # here an AI is trained for another few seconds
```
So my question is: while the AI is training on the GPU, is there a simple way to have my existing `data_loader` pre-compute (pre-buffer) its next `yield`?

Or do I have to refactor it completely, using an external iterator that, on each call, invokes a smaller inner `data_loader` returning a single batch?
Yes, you can use a `multiprocessing.Queue` for this:
```python
from multiprocessing import Process, Queue
from time import sleep

# Sentinel value; just make sure it's not something that can be
# returned by some_function()
FINISHED_LOADING_DATA = 'LAST ONE'

def some_function():
    print('getting data')
    sleep(0.5)
    return 'some_result'

def train_AI(x):
    print('training AI')
    sleep(2)

def data_loader(q):
    for _ in range(10):
        result = some_function()
        q.put(result)
    # Signal the consumer that no more data is coming
    q.put(FINISHED_LOADING_DATA)

def train_if_data_available(q):
    while True:
        data = q.get()
        if data == FINISHED_LOADING_DATA:
            return 'DONE'
        train_AI(data)

if __name__ == '__main__':  # required on platforms that spawn processes
    q = Queue()
    t = Process(target=data_loader, args=(q,))
    t.daemon = True
    t.start()
    train_if_data_available(q)
```

The loader process keeps filling the queue while the main process trains, so the next item is already waiting when `train_AI` finishes.
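To address the "keep my existing `data_loader`" part of the question: you can also wrap any generator in a small prefetching adapter, so the producer keeps running while the consumer works on the current item. The sketch below (the `prefetch` helper and `_SENTINEL` name are my own, not a library API) uses a background thread and a bounded `queue.Queue`; a thread avoids the pickling restrictions of `multiprocessing` and overlaps well with GPU training or NumPy-style work that releases the GIL, though it won't speed up pure-Python CPU-bound loading.

```python
import threading
import queue

_SENTINEL = object()  # unique end-of-stream marker; cannot collide with real data

def prefetch(gen, maxsize=2):
    """Yield items from `gen`, producing them in a background thread.

    While the consumer handles one item, the worker thread is already
    computing the next ones (up to `maxsize` buffered items).
    """
    q = queue.Queue(maxsize=maxsize)

    def worker():
        for item in gen:
            q.put(item)       # blocks once the buffer is full
        q.put(_SENTINEL)      # signal that the generator is exhausted

    threading.Thread(target=worker, daemon=True).start()
    while True:
        item = q.get()
        if item is _SENTINEL:
            return
        yield item

# Example stand-in for the original data_loader
def data_loader():
    for i in range(5):
        yield i * i

for data in prefetch(data_loader()):
    print(data)  # consume items; the next ones are prepared concurrently
```

The unmodified `data_loader` is passed in as-is, so no refactoring into per-batch calls is needed; only the driving loop changes from `data_loader()` to `prefetch(data_loader())`.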