Keras ModelCheckpoint does not save, but EarlyStopping with the same monitor argument works fine



I have built a model and use a custom function for validation. The problem is: my custom validation callback stores the validation accuracy in the logs dictionary, but Keras ModelCheckpoint somehow cannot see it. EarlyStopping works fine.

Here is the code of the validation class:
import numpy as np
from tensorflow import keras
from tensorflow.keras import backend as K
from sklearn.metrics import accuracy_score

class ValidateModel(keras.callbacks.Callback):

    def __init__(self, validation_data, loss_fnc):
        super().__init__()
        self.validation_data = validation_data
        self.loss_fnc = loss_fnc

    def on_epoch_end(self, epoch, logs={}):

        th = 0.5

        features = self.validation_data[0]
        y_true = self.validation_data[1].reshape((-1, 1))

        y_pred = np.asarray(self.model.predict(features)).reshape((-1, 1))

        # Computing the validation loss.
        y_true_tensor = K.constant(y_true)
        y_pred_tensor = K.constant(y_pred)

        val_loss = K.eval(self.loss_fnc(y_true_tensor, y_pred_tensor))

        # Rounding the predicted values based on the threshold value.
        # Values less than th are rounded to 0, while values greater than th are rounded to 1.
        y_pred_rounded = y_pred / th
        y_pred_rounded = np.clip(np.floor(y_pred_rounded).astype(int), 0, 1)
        y_pred_rounded_tensor = K.constant(y_pred_rounded)

        val_acc = accuracy_score(y_true, y_pred_rounded)

        logs['val_loss'] = val_loss
        logs['val_acc'] = val_acc

        print(f'\nval_loss: {val_loss} - val_acc: {val_acc}')
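As a side note, the divide/floor/clip rounding inside on_epoch_end is equivalent to a direct comparison against the threshold. A minimal standalone sketch (the prediction values here are made up for illustration):

```python
import numpy as np

th = 0.5
y_pred = np.array([0.1, 0.49, 0.5, 0.9]).reshape((-1, 1))

# Approach from the callback: divide by the threshold, floor, clip into {0, 1}.
rounded = np.clip(np.floor(y_pred / th).astype(int), 0, 1)

# Equivalent, more direct formulation.
direct = (y_pred >= th).astype(int)

print(rounded.ravel().tolist())  # [0, 0, 1, 1]
print((rounded == direct).all())  # True
```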

And here is the function I use to train the model:

def train_generator_model(model):
    steps = int(train_df.shape[0] / TRAIN_BATCH_SIZE)
    cb_validation = ValidateModel([validation_X, validation_y], iou)
    cb_early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_acc',
                                                     patience=3,
                                                     mode='max',
                                                     verbose=1)
    cb_model_checkpoint = tf.keras.callbacks.ModelCheckpoint('/kaggle/working/best_generator_model.hdf5',
                                                             monitor='val_acc',
                                                             save_best_only=True,
                                                             mode='max',
                                                             verbose=1)
    history = model.fit(
        x=train_datagen,
        epochs=2,  ## Setting to 2 to test.
        callbacks=[cb_validation, cb_model_checkpoint, cb_early_stop],
        verbose=1,
        steps_per_epoch=steps)

    # model = tf.keras.models.load_model('/kaggle/working/best_generator_model.hdf5', custom_objects={'iou': iou})
    # model.load_weights('/kaggle/working/best_generator_model.hdf5')

    return history

If I set the ModelCheckpoint parameter save_best_only to False, the model is saved perfectly. When training finishes and I run history.history, I can see that val_loss and val_acc are being logged, as shown below:

{'loss': [0.13096405565738678, 0.11926634609699249], 'binary_accuracy': [0.9692355990409851, 0.9716895818710327], 'val_loss': [0.23041087, 0.18325138], 'val_acc': [0.9453247578938803, 0.956172612508138]}

I am using TensorFlow 2.3.1 and importing keras from tensorflow.

Any help is appreciated. Thank you!

I checked the TensorFlow source and found an incompatibility between TensorFlow and Keras. Inside the tensorflow.keras.callbacks module there is the following code:

from keras.utils import tf_utils

The problem is that tf_utils does not exist in keras.utils (at least not in Keras 2.4.3, which I am using). Strangely, no exception is thrown.

Fix #1: add the following class to your program:

class ModelCheckpoint_tweaked(tf.keras.callbacks.ModelCheckpoint):
    def __init__(self,
                 filepath,
                 monitor='val_loss',
                 verbose=0,
                 save_best_only=False,
                 save_weights_only=False,
                 mode='auto',
                 save_freq='epoch',
                 options=None,
                 **kwargs):

        # Change the tf_utils source package.
        from tensorflow.python.keras.utils import tf_utils

        super(ModelCheckpoint_tweaked, self).__init__(filepath,
                                                      monitor,
                                                      verbose,
                                                      save_best_only,
                                                      save_weights_only,
                                                      mode,
                                                      save_freq,
                                                      options,
                                                      **kwargs)

Then use this new class as the ModelCheckpoint callback:

cb_model_checkpoint = ModelCheckpoint_tweaked(file_name,
                                              monitor='val_acc',
                                              save_best_only=True,
                                              mode='max',
                                              verbose=1)

Fix #2:

Update TensorFlow to version 2.4.0. If you are using a custom callback to compute the monitored metric, add the following line to the custom callback's __init__() method:

self._supports_tf_logs = True

If you do not add this line, the logs will not persist between callbacks.
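Applied to a callback like ValidateModel above, the change is a one-liner in __init__(). A minimal sketch (note that _supports_tf_logs is an internal Keras attribute, so this relies on TF 2.4+ behavior):

```python
import tensorflow as tf

class ValidateModel(tf.keras.callbacks.Callback):
    def __init__(self, validation_data, loss_fnc):
        super().__init__()
        # Tell Keras this callback handles TF-native logs directly, so keys
        # written into `logs` here survive into later callbacks such as
        # ModelCheckpoint and EarlyStopping.
        self._supports_tf_logs = True
        self.validation_data = validation_data
        self.loss_fnc = loss_fnc
```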
