TensorFlow custom training step with a different loss function



Background

According to the TensorFlow documentation, a custom training step can be performed as follows:

import tensorflow as tf
from tensorflow import keras

# Fake sample data for testing
x_batch_train = tf.zeros([32, 3, 1], dtype="float32")
y_batch_train = tf.zeros([32], dtype="float32")

loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
with tf.GradientTape() as tape:
    logits = model(x_batch_train, training=True)  # forward pass, recorded on the tape
    loss_value = loss_fn(y_batch_train, logits)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))

However, if I want to use a different loss function, such as categorical cross-entropy, I need to argmax the logits created inside the gradient tape:

loss_fn = tf.keras.losses.get("categorical_crossentropy")
with tf.GradientTape() as tape:
    logits = model(x_batch_train, training=True)
    # argmax collapses the logits to integer class indices
    prediction = tf.cast(tf.argmax(logits, axis=-1), y_batch_train.dtype)
    loss_value = loss_fn(y_batch_train, prediction)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))

Problem

The problem is that tf.argmax is not differentiable, so TensorFlow cannot compute the gradients and you get the error:

ValueError: No gradients provided for any variable: [...]
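
To confirm that the argmax itself is what breaks the gradient flow, here is a minimal sketch (independent of the model above):

import tensorflow as tf

# argmax returns integer indices, so the tape has no differentiable
# path from y back to x.
x = tf.Variable([[0.1, 0.9, 0.0]])
with tf.GradientTape() as tape:
    y = tf.cast(tf.argmax(x, axis=-1), tf.float32)
print(tape.gradient(y, x))  # prints None -> no gradient through argmax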

My question is: how can I make the second example work without changing the loss function?

categorical_crossentropy expects your labels to be one-hot encoded, so you should make sure of that first. Then pass the model's output directly; that output should be one probability per class. More info -> https://www.tensorflow.org/api_docs/python/tf/keras/losses/CategoricalCrossentropy#standalone_usage
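
A minimal sketch of that suggestion, applied to the training step from the question (NUM_CLASSES is a placeholder for your model's output width, and from_logits=True assumes the model still outputs raw logits):

import tensorflow as tf
from tensorflow import keras

NUM_CLASSES = 3  # placeholder: set to the number of classes your model predicts

loss_fn = keras.losses.CategoricalCrossentropy(from_logits=True)

with tf.GradientTape() as tape:
    logits = model(x_batch_train, training=True)  # shape [batch, NUM_CLASSES]
    # one-hot encode the integer labels instead of argmax-ing the logits
    y_one_hot = tf.one_hot(tf.cast(y_batch_train, tf.int32), depth=NUM_CLASSES)
    loss_value = loss_fn(y_one_hot, logits)       # differentiable w.r.t. the logits
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))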
