I am using TensorFlow 2.4.0 and trying to apply exponential decay to the learning rate, as follows:
learning_rate_scheduler = tf.keras.optimizers.schedules.ExponentialDecay(initial_learning_rate=0.1, decay_steps=1000, decay_rate=0.97, staircase=False)
and pass this decay schedule as my optimizer's learning rate:
optimizer_to_use = Adam(learning_rate=learning_rate_scheduler)
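For context, with staircase=False the ExponentialDecay schedule computes lr = initial_learning_rate * decay_rate ** (step / decay_steps) at each optimizer step. A minimal pure-Python sketch of that formula (the function name here is illustrative, not part of the TensorFlow API):

```python
def exponential_decay(step, initial_lr=0.1, decay_rate=0.97,
                      decay_steps=1000, staircase=False):
    """Learning rate after `step` steps under exponential decay.

    With staircase=True the exponent is truncated to an integer, so the
    rate drops in discrete jumps every `decay_steps` steps.
    """
    exponent = step // decay_steps if staircase else step / decay_steps
    return initial_lr * decay_rate ** exponent

print(exponential_decay(0))      # initial rate: 0.1
print(exponential_decay(1000))   # one full decay period: ~0.097
print(exponential_decay(500, staircase=True))  # still 0.1 (no jump yet)
```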
The model is compiled as follows:
model.compile(loss=metrics.contrastive_loss, optimizer=optimizer_to_use, metrics=[accuracy])
Training runs fine until the third epoch, at which point the following error is shown:
File "train_contrastive_siamese_network_inception.py", line 163, in run_experiment
history = model.fit([pairTrain[:, 0], pairTrain[:, 1]], labelTrain[:], validation_data=([pairTest[:, 0], pairTest[:, 1]], labelTest[:]), batch_size=config.BATCH_SIZE, epochs=config.EPOCHS, callbacks=callbacks)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 1145, in fit
callbacks.on_epoch_end(epoch, epoch_logs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/callbacks.py", line 432, in on_epoch_end
callback.on_epoch_end(epoch, numpy_logs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/callbacks.py", line 2542, in on_epoch_end
old_lr = float(K.get_value(self.model.optimizer.lr))
TypeError: float() argument must be a string or a number, not 'ExponentialDecay'
I found that this issue has even been raised on the official Keras forum, but with no success there either. Moreover, the documentation explicitly states:

A LearningRateSchedule instance can be passed in as the learning_rate argument of any optimizer.
What could be the problem?
The arguments passed in model.compile() are not quite right. In the loss argument you passed a metric, loss=metrics.contrastive_loss, when it should be tfa.losses.ContrastiveLoss(). If you are using TensorFlow 2.4, you need to install a specific version of tensorflow_addons (between 0.10 and 0.14) to access the TensorFlow Addons API ContrastiveLoss.
The fixed code is:
model.compile(loss = tfa.losses.ContrastiveLoss(),
optimizer = optimizer_to_use,
metrics = ['accuracy'])
(A gist of the reproduced code is attached here for your reference.)
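For intuition, the contrastive loss that tfa.losses.ContrastiveLoss implements is, per pair, y * d**2 + (1 - y) * max(margin - d, 0)**2, where y is 1 for similar pairs and 0 for dissimilar pairs, d is the predicted distance between the two embeddings, and margin defaults to 1.0. A minimal pure-Python sketch of that per-pair formula (function name illustrative, not the library API):

```python
def contrastive_loss(y_true, distance, margin=1.0):
    """Per-pair contrastive loss.

    Similar pairs (y_true=1) are penalized by squared distance;
    dissimilar pairs (y_true=0) are penalized only while their
    distance is still inside the margin.
    """
    return (y_true * distance ** 2
            + (1 - y_true) * max(margin - distance, 0.0) ** 2)

print(contrastive_loss(1, 0.3))  # similar pair: loss grows with distance
print(contrastive_loss(0, 0.3))  # dissimilar pair inside the margin: penalized
print(contrastive_loss(0, 1.5))  # dissimilar pair beyond the margin: zero loss
```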