Custom learning rate schedule in Keras



I am implementing a learning-rate decay that depends on the previous epoch's accuracy.

Capturing the metrics:

    class CustomMetrics(tf.keras.callbacks.Callback):
        def on_train_begin(self, logs={}):
            self.metrics = {'loss': [], 'accuracy': [], 'val_loss': [], 'val_accuracy': []}
            self.lr = []

        def on_epoch_end(self, epoch, logs={}):
            print(f"\nEPOCH {epoch} Calling from METRICS CLASS")
            self.metrics['loss'].append(logs.get('loss'))
            self.metrics['accuracy'].append(logs.get('accuracy'))
            self.metrics['val_loss'].append(logs.get('val_loss'))
            self.metrics['val_accuracy'].append(logs.get('val_accuracy'))

Custom learning-rate decay:

    from tensorflow.keras.callbacks import LearningRateScheduler

    def changeLearningRate(epoch):
        initial_learningrate = 0.1
        # print(f"EPOCH {epoch}, Calling from ChangeLearningRate:")
        lr = 0.0
        if epoch != 0:
            if custom_metrics_dict.metrics['accuracy'][epoch] < custom_metrics_dict.metrics['accuracy'][epoch - 1]:
                print(f"Accuracy @ epoch {epoch} is less than accuracy at epoch {epoch - 1}")
                print("[INFO] Decreasing Learning Rate.....")
                lr = initial_learningrate * 0.1
                print(f"LR Changed to {lr}")
        return lr

Model preparation:

    from tensorflow.keras.layers import Input, Dense
    from tensorflow.keras.models import Model

    input_layer = Input(shape=(2,))
    layer1 = Dense(32, activation='tanh', kernel_initializer=tf.random_uniform_initializer(0, 1, seed=30))(input_layer)
    output = Dense(2, activation='softmax', kernel_initializer=tf.random_uniform_initializer(0, 1, seed=30))(layer1)

    model = Model(inputs=input_layer, outputs=output)
    custom_metrics_dict = CustomMetrics()
    lrschedule = LearningRateScheduler(changeLearningRate, verbose=1)
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9)
    model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
    model.fit(X_train, Y_train, epochs=4, validation_data=(X_test, Y_test), batch_size=16, callbacks=[custom_metrics_dict, lrschedule])

It fails with an index-out-of-range error. As far as I can tell, the LRScheduler code is not being called at the point in the epoch that I expect. I can't work out how to arrange the calls correctly. What can I try next?

The scheduler function's signature is `def scheduler(epoch, lr):`, which means you should take the learning rate from that `lr` parameter. You should not hard-code `initial_learningrate = 0.1`; if you do, the learning rate never actually decays, because every time accuracy drops you return the same value. As for the out-of-range exception: your guard only checks that `epoch` is not 0, so at `epoch = 1` you compare `custom_metrics_dict.metrics['accuracy'][epoch]` with `custom_metrics_dict.metrics['accuracy'][epoch - 1]`. But the scheduler runs at the start of the epoch, before `on_epoch_end` has recorded anything for it, so at that point `custom_metrics_dict.metrics['accuracy']` holds only a single value (epoch 0's) and index 1 is out of range.
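This timing can be checked without TensorFlow at all. The loop below is a plain-Python sketch of the callback ordering (the names `accuracies` and `simulated` are stand-ins for the metrics list and for hypothetical per-epoch accuracies, not Keras code): when the scheduler runs for epoch `e`, only epochs `0..e-1` have been recorded, so indexing with `epoch` itself raises `IndexError`.

```python
# Simulate Keras callback ordering: LearningRateScheduler calls your function
# at epoch start, before on_epoch_end has stored that epoch's accuracy.
accuracies = []                        # stand-in for metrics['accuracy']
simulated = [0.5, 0.7, 0.6, 0.8]       # hypothetical per-epoch accuracies

def scheduler(epoch, lr):
    # Only epochs 0..epoch-1 are stored here, so accuracies[epoch]
    # would raise IndexError.
    assert len(accuracies) == epoch
    if epoch > 1 and accuracies[epoch - 1] < accuracies[epoch - 2]:
        lr *= 0.1                      # accuracy dropped: decay incoming lr
    return lr

lr = 0.1
for epoch in range(4):
    lr = scheduler(epoch, lr)          # what on_epoch_begin would do
    accuracies.append(simulated[epoch])  # what on_epoch_end would do
print(lr)  # decayed exactly once, at epoch 3 (0.6 < 0.7)
```

Note that the decay compounds because each call receives the previous epoch's `lr`, which is the behaviour the hard-coded `initial_learningrate` was preventing.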

I got your code running correctly with this function:

    from tensorflow.keras.callbacks import LearningRateScheduler

    def changeLearningRate(epoch, lr):
        print(f"EPOCH {epoch}, Calling from ChangeLearningRate: {custom_metrics_dict.metrics['accuracy']}")
        if epoch > 1:
            if custom_metrics_dict.metrics['accuracy'][epoch - 1] < custom_metrics_dict.metrics['accuracy'][epoch - 2]:
                print(f"Accuracy @ epoch {epoch - 1} is less than accuracy at epoch {epoch - 2}")
                print("[INFO] Decreasing Learning Rate.....")
                lr = lr * 0.1
                print(f"LR Changed to {lr}")
        return lr
