Change the learning rate based on the iteration



I am trying to write a custom learning rate scheduler for SGD in Keras that changes the learning rate based on the iteration. However, the LearningRateScheduler callback only accepts functions that take the epoch as input. My learning rate function is:

learning_rate = base_learning_rate * (1 + gamma * iteration) ^ (-power)
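
In plain Python, the function I want looks something like this (lr_at is just an illustrative name, and the values in the comment are only an example):

def lr_at(iteration, base_learning_rate, gamma, power):
    # illustrative helper, not an existing API
    # lr = base_lr * (1 + gamma * iteration) ^ (-power)
    return base_learning_rate * (1 + gamma * iteration) ** (-power)

# e.g. lr_at(0, 0.1, 0.001, 2) == 0.1, and the rate decays as the iteration count grows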

This can be done by defining your own tf.keras.optimizers.schedules.LearningRateSchedule and passing it to the optimizer.

import tensorflow as tf

class Example(tf.keras.optimizers.schedules.LearningRateSchedule):
    def __init__(self, initial_learning_rate, gamma, power):
        self.initial_learning_rate = initial_learning_rate
        self.gamma = gamma
        self.power = power

    def __call__(self, step):
        # lr = initial_lr * (1 + gamma * step) ^ (-power)
        step = tf.cast(step, tf.float32)  # cast so the arithmetic works whatever dtype the step arrives as
        return self.initial_learning_rate * tf.pow((step * self.gamma + 1), -self.power)

optimizer = tf.keras.optimizers.SGD(learning_rate=Example(0.1, 0.001, 2))
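
For example, a minimal usage sketch (the toy model, data, and parameter values below are placeholders, not part of the original answer):

import numpy as np

# Hypothetical toy data and model, just to show where the schedule plugs in.
x_train = np.random.rand(256, 4).astype("float32")
y_train = np.random.rand(256, 1).astype("float32")

model = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                             tf.keras.layers.Dense(16, activation="relu"),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer=optimizer, loss="mse")  # optimizer uses the Example schedule above

# The schedule's __call__ receives the optimizer's iteration count, so the
# learning rate is updated every batch rather than once per epoch.
model.fit(x_train, y_train, batch_size=32, epochs=3)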

Reference: https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/schedules/LearningRateSchedule

When you say change the learning rate "based on iteration", do you mean you want to change it at the end of each batch? If so, you can do that with a custom callback. I have not tested this, but the code would look something like this:

import tensorflow as tf
from tensorflow import keras

class LRA(keras.callbacks.Callback):
    def __init__(self, model, initial_learning_rate, gamma, power):
        super(LRA, self).__init__()
        self.initial_learning_rate = initial_learning_rate
        self.gamma = gamma
        self.power = power
        self.model = model  # model is your compiled model

    def on_train_begin(self, logs=None):
        # start training at the base learning rate
        tf.keras.backend.set_value(self.model.optimizer.lr,
                                   self.initial_learning_rate)

    def on_train_batch_end(self, batch, logs=None):
        # lr = initial_lr * (1 + gamma * (batch + 1)) ^ (-power)
        lr = self.initial_learning_rate * tf.pow(((batch + 1) * self.gamma + 1), -self.power)
        tf.keras.backend.set_value(self.model.optimizer.lr, lr)
        # print('for ', batch, ' lr set to ', lr)  # uncomment if you want to see the lr change

Let me know if this works. Again, I have not tested it.

Before you run model.fit, include this code:

initial_learning_rate = .001  # set to desired value
gamma =   # set to desired value
power =   # set to desired value
callbacks = [LRA(model=model, initial_learning_rate=initial_learning_rate, gamma=gamma, power=power)]
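
Then pass the callbacks list to model.fit. A minimal sketch, assuming x_train and y_train are your own training data and the batch size and epoch count are up to you:

# x_train / y_train, batch_size and epochs are placeholders for your own data and settings
model.fit(x_train, y_train, batch_size=32, epochs=10, callbacks=callbacks)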


