Why is my autoencoder's loss not decreasing at all during training?
I am following this tutorial to create a Keras-based autoencoder, but with my own data. The dataset consists of roughly 20k training images and roughly 40k validation images. They are all very similar and all show the same object. I did not modify the tutorial's Keras model layout; I only changed the input size, since I use 300x300 images. So my model looks like this:

Model: "autoencoder"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 300, 300, 1)]     0
_________________________________________________________________
encoder (Functional)         (None, 16)                5779216
_________________________________________________________________
decoder (Functional)         (None, 300, 300, 1)       6176065
=================================================================
Total params: 11,955,281
Trainable params: 11,954,897
Non-trainable params: 384
_________________________________________________________________
Model: "encoder"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 300, 300, 1)]     0
_________________________________________________________________
conv2d (Conv2D)              (None, 150, 150, 32)      320
_________________________________________________________________
leaky_re_lu (LeakyReLU)      (None, 150, 150, 32)      0
_________________________________________________________________
batch_normalization (BatchNo (None, 150, 150, 32)      128
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 75, 75, 64)        18496
_________________________________________________________________
leaky_re_lu_1 (LeakyReLU)    (None, 75, 75, 64)        0
_________________________________________________________________
batch_normalization_1 (Batch (None, 75, 75, 64)        256
_________________________________________________________________
flatten (Flatten)            (None, 360000)            0
_________________________________________________________________
dense (Dense)                (None, 16)                5760016
=================================================================
Total params: 5,779,216
Trainable params: 5,779,024
Non-trainable params: 192
_________________________________________________________________
Model: "decoder"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_2 (InputLayer)         [(None, 16)]              0
_________________________________________________________________
dense_1 (Dense)              (None, 360000)            6120000
_________________________________________________________________
reshape (Reshape)            (None, 75, 75, 64)        0
_________________________________________________________________
conv2d_transpose (Conv2DTran (None, 150, 150, 64)      36928
_________________________________________________________________
leaky_re_lu_2 (LeakyReLU)    (None, 150, 150, 64)      0
_________________________________________________________________
batch_normalization_2 (Batch (None, 150, 150, 64)      256
_________________________________________________________________
conv2d_transpose_1 (Conv2DTr (None, 300, 300, 32)      18464
_________________________________________________________________
leaky_re_lu_3 (LeakyReLU)    (None, 300, 300, 32)      0
_________________________________________________________________
batch_normalization_3 (Batch (None, 300, 300, 32)      128
_________________________________________________________________
conv2d_transpose_2 (Conv2DTr (None, 300, 300, 1)       289
_________________________________________________________________
activation (Activation)      (None, 300, 300, 1)       0
=================================================================
Total params: 6,176,065
Trainable params: 6,175,873
Non-trainable params: 192

Then I initialize my model like this:

# ConvAutoencoder is the model-building class from the tutorial
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.optimizers.schedules import ExponentialDecay

IMGSIZE = 300
EPOCHS = 20
LR = 0.0001

(encoder, decoder, autoencoder) = ConvAutoencoder.build(IMGSIZE, IMGSIZE, 1)
sched = ExponentialDecay(initial_learning_rate=LR, decay_steps=EPOCHS, decay_rate=LR / EPOCHS)
autoencoder.compile(loss="mean_squared_error", optimizer=Adam(learning_rate=sched))

And then I train my model like this:

import os

from tensorflow.keras.preprocessing.image import ImageDataGenerator

BS = 32

image_generator = ImageDataGenerator(rescale=1.0 / 255)
train_gen = image_generator.flow_from_directory(
    os.path.join(args.images, "training"),
    class_mode="input",
    color_mode="grayscale",
    target_size=(IMGSIZE, IMGSIZE),
    batch_size=BS,
)
val_gen = image_generator.flow_from_directory(
    os.path.join(args.images, "validation"),
    class_mode="input",
    color_mode="grayscale",
    target_size=(IMGSIZE, IMGSIZE),
    batch_size=BS,
)
hist = autoencoder.fit(train_gen, validation_data=val_gen, epochs=EPOCHS, batch_size=BS)

My batch size BS is 32 and my initial Adam learning rate is 0.001 (but I have also tried values from 0.1 down to 0.0001). I have also tried increasing the latent dimension to 1024, but that did not solve my problem either.

Now, during training, the loss drops from about 0.5 to about 0.2 within the first epoch, and then from the second epoch on it stays at almost exactly the same value, e.g. 0.1989, and remains there "forever", no matter how many epochs I train for and/or which initial learning rate I use.

Any idea what might be going wrong here?

It may be that the decay_rate argument of tf.keras.optimizers.schedules.ExponentialDecay is decaying your learning rate much faster than you think, effectively making your learning rate zero.
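To see how fast the schedule collapses, you can plug the question's values into the formula that ExponentialDecay uses, lr(step) = initial_lr * decay_rate ** (step / decay_steps). The key detail is that decay_steps counts optimizer steps (batches), not epochs. This is a pure-Python sketch; the step counts assume LR = 0.0001, EPOCHS = 20, ~20k training images and batch size 32, as in the question:

```python
# Replicating ExponentialDecay's formula with the question's values:
#   lr(step) = initial_lr * decay_rate ** (step / decay_steps)
LR = 0.0001
EPOCHS = 20

initial_lr = LR
decay_steps = EPOCHS      # 20 optimizer steps, NOT 20 epochs
decay_rate = LR / EPOCHS  # 5e-6: an enormous decay factor per 20 batches

def lr_at(step):
    return initial_lr * decay_rate ** (step / decay_steps)

# With ~20000 training images and batch size 32, one epoch is ~625 steps.
print(lr_at(0))    # 1e-4
print(lr_at(20))   # 5e-10: after just 20 batches the learning rate has collapsed
print(lr_at(625))  # astronomically small after a single epoch
```

So by the second epoch the optimizer is taking essentially zero-length steps, which matches the loss freezing at 0.1989. A more typical setup would decay once per epoch with a mild factor, e.g. decay_steps set to the number of batches per epoch and decay_rate around 0.9 (values here are illustrative, not from the question).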
