Because the exploding gradients and exploding loss only show up when the network is large, I won't paste the whole network here. But I have done my best: over the past two weeks I dug into the source code to monitor some weights, and hand-wrote the update steps so I could monitor the loss, weights, updates, gradients, and hyperparameters and compare internal state. I think I did my homework before asking this question.
The problem: the Keras API offers two ways to train. One is model.fit(), and the second is a more customized approach for more complex training and networks. When I keep the two as close to identical as I can, model.fit() shows no exploding loss, but the custom approach does. Interestingly, when I monitor many details on a much smaller network, everything looks the same between the two methods.
Environment:
# tensorflow 1.14
import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K
For the model.fit() method:
# I've omitted the bodies of the two functions below as I can't share the specifics:
# x is [10000, 32, 32, 3] image data, y is [10000, 10, 1] labels, and model is a regular Keras model.
x_train, y_train, x_test, y_test = get_data()
model = get_keras_model()
loss_fn = tf.keras.losses.CategoricalCrossentropy()
sgd = tf.keras.optimizers.SGD(lr=.1, momentum=0.9, nesterov=True)
model.compile(loss=loss_fn, optimizer=sgd, metrics=['accuracy'])
history = model.fit(x_train, y_train, batch_size=128, epochs=100, validation_data=(x_test, y_test))
The custom method:
x_train, y_train, x_test, y_test = get_data()
model = get_keras_model()
input = model.inputs[0]
y_true = tf.placeholder(dtype = tf.int32, shape = [None, 10])
y_pred = model.outputs[0]
loss_fn = tf.keras.losses.CategoricalCrossentropy()
loss = loss_fn(y_true, y_pred)
weights = model.trainable_weights
sgd = tf.keras.optimizers.SGD(lr=.1, momentum=0.9, nesterov=True)
training_updates = sgd.get_updates(loss, weights)
training_fn = K.function([y_true, input], [loss], training_updates)
num_train = 10000
steps_per_epoch = int(num_train / 128) # batch size 128
total_steps = steps_per_epoch * 100 # epoch 100
for step in range(total_steps):
    idx = np.random.randint(0, 10000, 128)
    input_img = x_train[idx]
    ground_true = y_train[idx]
    cur_loss = training_fn([ground_true, input_img])
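For reference, this is roughly how I fetched the gradients inside the custom loop so I could watch them blow up. It is only a sketch built on the tensors defined above (loss, weights, y_true, input), and the max-value check is just illustrative:
# Sketch: a second backend function that returns the raw gradients so they
# can be inspected alongside the loss (reuses the tensors defined above).
grads = K.gradients(loss, weights)
grad_fn = K.function([y_true, input], grads)
grad_values = grad_fn([ground_true, input_img])
for w, g in zip(weights, grad_values):
    print(w.name, np.abs(g).max())  # a huge or nan max hints at explosion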
So in short: the same model, the same loss function, the same SGD optimizer, and the same image feed (I controlled the image feed order, even though the code here picks random samples from the training data). Is there something in the internal workings of model.fit() that prevents the loss or gradients from exploding?
After digging into the source code, I found the cause of the exploding gradients. The corrected code (the changes are minimal) is below:
x_train, y_train, x_test, y_test = get_data()
model = get_keras_model()
input = model.inputs[0]
y_true = tf.placeholder(dtype = tf.int32, shape = [None, 10])
y_pred = model.outputs[0]
loss_fn = tf.keras.losses.CategoricalCrossentropy()
loss = loss_fn(y_true, y_pred)
weights = model.trainable_weights
sgd = tf.keras.optimizers.SGD(lr=.1, momentum=0.9, nesterov=True)
training_updates = sgd.get_updates(loss, weights)
# Correct:
training_fn = K.function([y_true, input, K.symbolic_learning_phase()], [loss], training_updates)
# Before:
# training_fn = K.function([y_true, input], [loss], training_updates)
num_train = 10000
steps_per_epoch = int(num_train / 128) # batch size 128
total_steps = steps_per_epoch * 100 # epoch 100
for step in range(total_steps):
    idx = np.random.randint(0, 10000, 128)
    input_img = x_train[idx]
    ground_true = y_train[idx]
    # Correct:
    cur_loss = training_fn([ground_true, input_img, True])
    # Before:
    # cur_loss = training_fn([ground_true, input_img])
My understanding of this particular tensor, K.symbolic_learning_phase(), is that its default value is set to False (you can see this in the source where it is initialized), and that layers such as BatchNormalization and Dropout behave differently in the training phase and the testing phase. In this case the BatchNormalization layer was the cause of the exploding gradients (which makes sense now, since several posts mention getting exploding gradients with BatchNormalization layers). The reason is that its two trainable weights, batch_normalization_1/gamma:0 and batch_normalization_1/beta:0, depend on this tensor; with the default value of False they were not learning, and their weights quickly became nan during training.
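As a quick sanity check (this snippet is just my own verification, not something from the Keras docs), you can evaluate the learning-phase tensor in the session without feeding it and see that it defaults to False, i.e. test mode:
phase = K.symbolic_learning_phase()
print(K.get_session().run(phase))  # prints False: test mode unless you feed True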
I've noticed that not much Keras code that uses the training_updates approach actually passes K.symbolic_learning_phase() in; however, this is what Keras's API does under the hood.
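As an aside, I believe an alternative to feeding the phase on every call is to pin the learning phase globally before the graph is built, although I have not re-verified this against my exploding-gradient setup, so treat it as a sketch:
# Unverified alternative: fix the phase to "training" before building the model,
# so BatchNormalization/Dropout are constructed in their training branch and
# K.function no longer needs the extra learning-phase input.
K.set_learning_phase(1)   # 1 = training, 0 = testing
model = get_keras_model()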