I trained my model on a Linux OS so that I could use MirroredStrategy() and train on 2 GPUs. Training stopped at epoch 610. I want to resume training, but when I load the model and evaluate it, the kernel dies. I am using a Jupyter notebook. If I reduce the size of my training dataset, the code runs, but it only uses 1 GPU. Is my distribution strategy saved in the model I am loading, or do I have to include it again?
UPDATE
I have tried including MirroredStrategy():
mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
    new_model = load_model('\models\model_0610.h5',
                           custom_objects={'dice_coef_loss': dice_coef_loss,
                                           'dice_coef': dice_coef}, compile=True)
new_model.evaluate(train_x, train_y, batch_size=2, verbose=1)
NEW ERROR
Error when including MirroredStrategy():
ValueError: 'handle' is not available outside the replica context or a 'tf.distribute.Strategy.update()' call.
ORIGINAL CODE:
smooth = 1

def dice_coef(y_true, y_pred):
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_coef_loss(y_true, y_pred):
    return (1. - dice_coef(y_true, y_pred))

new_model = load_model('\models\model_0610.h5',
                       custom_objects={'dice_coef_loss': dice_coef_loss, 'dice_coef': dice_coef},
                       compile=True)

new_model.evaluate(train_x, train_y, batch_size=2, verbose=1)

observe_var = 'dice_coef'
strategy = 'max'  # greater dice_coef is better
model_resume_dir = '//models_resume//'

model_checkpoint = ModelCheckpoint(model_resume_dir + 'resume_{epoch:04}.h5',
                                   monitor=observe_var, mode='auto', save_weights_only=False,
                                   save_best_only=False, period=2)

new_model.fit(train_x, train_y, batch_size=2, epochs=5000, verbose=1, shuffle=True,
              validation_split=.15, callbacks=[model_checkpoint])

new_model.save(model_resume_dir + 'final_resume.h5')
new_model.evaluate()
MirroredStrategy() combined with compile = True was causing problems when loading the model. I set compile = False and added a compile line taken from my original script:
mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
    new_model = load_model('\models\model_0610.h5',
                           custom_objects={'dice_coef_loss': dice_coef_loss,
                                           'dice_coef': dice_coef}, compile=False)
    new_model.compile(optimizer=Adam(learning_rate=1e-4), loss=dice_coef_loss,
                      metrics=[dice_coef])
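
With the model loaded and compiled this way, I then plan to resume the fit roughly as below. This is only a sketch at this point: initial_epoch = 610 is my assumption so that the epoch numbering continues from where training stopped, and the data and checkpoint callback are the same ones from my original script.

# Sketch of resuming training (not yet verified); model_checkpoint and the
# training arrays are reused from the original script above.
new_model.fit(train_x, train_y, batch_size=2, epochs=5000, verbose=1, shuffle=True,
              validation_split=.15, callbacks=[model_checkpoint],
              initial_epoch=610)  # assumption: continue counting from epoch 610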