How to get non-aggregated batch statistics with tf.keras.callbacks



I am trying to train a model with the Keras model.fit() method. This method returns a History object containing the loss value for each epoch, but I would like a loss value for each individual batch.

Online, I found suggestions to use a custom callback class with an on_batch_end(self, logs={}) method. The problem is that this method is only passed aggregated statistics, which are reset at each epoch. I want individual statistics for each batch.

https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/Callback#on_train_batch_end
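For reference, a minimal sketch of the kind of callback those suggestions describe (the class name LossLogger is mine); it records logs['loss'] at each batch end, which in TF >= 2.2 is the running average for the epoch rather than the value for that batch alone, which is exactly the problem described above:

import tensorflow as tf
from tensorflow.keras.callbacks import Callback

class LossLogger(Callback):
    def on_train_begin(self, logs=None):
        self.logged = []

    def on_train_batch_end(self, batch, logs=None):
        # In TF >= 2.2 this is the running epoch average,
        # not the loss of this batch alone.
        self.logged.append(logs['loss'])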

You can do this easily with a custom training loop, in which you simply append each batch's loss value to a list:

train_loss_per_train_batch.append(loss_value.numpy())

Here is how to do all of it:

import tensorflow as tf
import tensorflow_datasets as tfds

# 150-sample Iris dataset: 125 samples for training, 25 for testing
ds = tfds.load('iris', split='train', as_supervised=True)
train = ds.take(125).shuffle(16).batch(4)
test = ds.skip(125).take(25).shuffle(16).batch(4)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax')
])

loss_object = tf.losses.SparseCategoricalCrossentropy(from_logits=False)

def compute_loss(model, x, y, training):
    # forward pass and loss for one batch
    out = model(x, training=training)
    loss = loss_object(y_true=y, y_pred=out)
    return loss

def get_grad(model, x, y):
    # loss and gradients for one batch
    with tf.GradientTape() as tape:
        loss = compute_loss(model, x, y, training=True)
    return loss, tape.gradient(loss, model.trainable_variables)

optimizer = tf.optimizers.Adam()
verbose = "Epoch {:2d} Loss: {:.3f} TLoss: {:.3f} Acc: {:.2%} TAcc: {:.2%}"

train_loss_per_train_batch = list()

for epoch in range(1, 25 + 1):
    train_loss = tf.metrics.Mean()
    train_acc = tf.metrics.SparseCategoricalAccuracy()
    test_loss = tf.metrics.Mean()
    test_acc = tf.metrics.SparseCategoricalAccuracy()

    for x, y in train:
        loss_value, grads = get_grad(model, x, y)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        train_loss.update_state(loss_value)
        train_acc.update_state(y, model(x, training=True))
        # record the raw, non-aggregated loss of this batch
        train_loss_per_train_batch.append(loss_value.numpy())

    for x, y in test:
        loss_value, _ = get_grad(model, x, y)
        test_loss.update_state(loss_value)
        test_acc.update_state(y, model(x, training=False))

    print(verbose.format(epoch,
                         train_loss.result(),
                         test_loss.result(),
                         train_acc.result(),
                         test_acc.result()))
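After training, train_loss_per_train_batch holds one loss value per training batch across all epochs. A quick way to inspect it, assuming matplotlib is installed:

import matplotlib.pyplot as plt

plt.plot(train_loss_per_train_batch)
plt.xlabel('training batch')
plt.ylabel('loss')
plt.show()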

The loss for the current batch can be computed from the average loss that Keras provides. Since logs['loss'] after batch b is the mean over all batches seen so far in the epoch, the running sum is (b + 1) * logs['loss'], and the current batch's loss is the difference between two consecutive running sums:

from tensorflow.keras.callbacks import Callback

class CustomCallback(Callback):
    ''' This callback converts the average loss (default behavior in TF>=2.2)
    into the loss for only the current batch.
    '''
    def on_epoch_begin(self, epoch, logs={}):
        self.previous_loss_sum = 0

    def on_train_batch_end(self, batch, logs={}):
        # calculate loss of current batch:
        current_loss_sum = (batch + 1) * logs['loss']
        current_loss = current_loss_sum - self.previous_loss_sum
        self.previous_loss_sum = current_loss_sum
        # use current_loss:
        # ...

This code can be added to any custom callback that needs the loss of the current batch rather than the average loss.
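As a concrete illustration, here is a minimal sketch of a complete callback built on this pattern that stores every batch's loss in a list (the class name PerBatchLoss and the attribute batch_losses are illustrative, not part of any API):

from tensorflow.keras.callbacks import Callback

class PerBatchLoss(Callback):
    ''' Collects the individual loss of every training batch (TF >= 2.2). '''
    def on_train_begin(self, logs=None):
        self.batch_losses = []

    def on_epoch_begin(self, epoch, logs=None):
        self.previous_loss_sum = 0

    def on_train_batch_end(self, batch, logs=None):
        current_loss_sum = (batch + 1) * logs['loss']
        current_loss = current_loss_sum - self.previous_loss_sum
        self.previous_loss_sum = current_loss_sum
        self.batch_losses.append(current_loss)

# usage:
# per_batch = PerBatchLoss()
# model.fit(x, y, epochs=5, callbacks=[per_batch])
# per_batch.batch_losses then holds one loss value per batch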

Also, if you are using TensorFlow 1 or TensorFlow 2 version <= 2.1, do not include this code in your callback, because in those versions the current batch loss, not the average loss, is already provided.
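If the same callback has to run under several TensorFlow versions, a version guard can decide whether the conversion is needed; a minimal sketch, assuming tf.__version__ follows the usual 'X.Y.Z' form:

import tensorflow as tf

# logs['loss'] is the running epoch average only from TF 2.2 onwards
major, minor = (int(part) for part in tf.__version__.split('.')[:2])
logs_loss_is_average = (major, minor) >= (2, 2)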
