Training and validation loss of a convolutional autoencoder barely decrease



Why are the training and validation losses of my convolutional autoencoder barely decreasing? The training data is 10496x1024 in dimension, and the CAE is trained in Keras on 32x32 image patches. I have already tried L2 regularization, but it did not help much. I am training for 20 epochs. Are there any other options?
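For reference, a minimal sketch of what such a setup could look like: a small convolutional autoencoder for 32x32 single-channel patches with L2 weight regularization on the convolutional layers. The layer widths, the `l2_weight` value, and the loss function are assumptions for illustration, not the actual architecture used here.

```python
# Sketch of a convolutional autoencoder for 32x32 single-channel patches
# with L2 weight regularization. All hyperparameters below are assumed.
import tensorflow as tf
from tensorflow.keras import layers, regularizers, Model

l2_weight = 1e-5  # assumed regularization strength

inputs = layers.Input(shape=(32, 32, 1))

# Encoder: two conv + pooling stages, 32x32 -> 8x8
x = layers.Conv2D(32, 3, activation='relu', padding='same',
                  kernel_regularizer=regularizers.l2(l2_weight))(inputs)
x = layers.MaxPooling2D(2, padding='same')(x)
x = layers.Conv2D(16, 3, activation='relu', padding='same',
                  kernel_regularizer=regularizers.l2(l2_weight))(x)
encoded = layers.MaxPooling2D(2, padding='same')(x)

# Decoder: mirror of the encoder, 8x8 -> 32x32
x = layers.Conv2D(16, 3, activation='relu', padding='same',
                  kernel_regularizer=regularizers.l2(l2_weight))(encoded)
x = layers.UpSampling2D(2)(x)
x = layers.Conv2D(32, 3, activation='relu', padding='same',
                  kernel_regularizer=regularizers.l2(l2_weight))(x)
x = layers.UpSampling2D(2)(x)
decoded = layers.Conv2D(1, 3, activation='sigmoid', padding='same')(x)

autoencoder = Model(inputs, decoded)
# Loss is an assumption; MSE is also common for reconstruction.
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
```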

Output:

Epoch 1/20 10496/10496 [=======] - 52s - loss: 0.4029 - val_loss:0.3821

Epoch 2/20 10496/10496 [=======] - 52s - loss: 0.3825 - val_loss:0.3784

Epoch 3/20 10496/10496 [=======] - 52s - loss: 0.3802 - val_loss:0.3772

Epoch 4/20 10496/10496 [=======] - 51s - loss: 0.3789 - val_loss:0.3757

Epoch 5/20 10496/10496 [=======] - 52s - loss: 0.3778 - val_loss:0.3752

Epoch 6/20 10496/10496 [=======] - 51s - loss: 0.3770 - val_loss:0.3743

Epoch 7/20 10496/10496 [=======] - 54s - loss: 0.3763 - val_loss:0.3744

Epoch 8/20 10496/10496 [=======] - 51s - loss: 0.3758 - val_loss:0.3735

Epoch 9/20 10496/10496 [=======] - 51s - loss: 0.3754 - val_loss:0.3731

Epoch 10/20 10496/10496 [=======] - 51s - loss: 0.3748 - val_loss:0.3739

Epoch 11/20 10496/10496 [=======] - 51s - loss: 0.3745 - val_loss:0.3729

Epoch 12/20 10496/10496 [=======] - 54s - loss: 0.3741 - val_loss:0.3723

Epoch 13/20 10496/10496 [=======] - 51s - loss: 0.3736 - val_loss:0.3718

Epoch 14/20 10496/10496 [=======] - 52s - loss: 0.3733 - val_loss:0.3716

Epoch 15/20 10496/10496 [=======] - 52s - loss: 0.3731 - val_loss:0.3717

Epoch 16/20 10496/10496 [=======] - 51s - loss: 0.3728 - val_loss:0.3712

Epoch 17/20 10496/10496 [=======] - 49s - loss: 0.3725 - val_loss:0.3709

Epoch 18/20 10496/10496 [=======] - 36s - loss: 0.3723 - val_loss:0.3710

Epoch 19/20 10496/10496 [=======] - 37s - loss: 0.3721 - val_loss:0.3708

Epoch 20/20 10496/10496 [=======] - 37s - loss: 0.3720 - val_loss:0.3704

Your network is still learning, and the decrease has not really flattened out by epoch 20. If you have enough data, you could try a higher learning rate combined with early stopping. The same idea applies to regularization methods and k-fold cross-validation: tune them against the validation loss rather than a fixed number of epochs.
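As a rough sketch of that suggestion, a higher learning rate and early stopping could be wired up like this (assuming a compiled model named `autoencoder` and arrays `x_train`/`x_val`; the learning rate, patience, batch size, and epoch count are illustrative):

```python
# Sketch: raise the learning rate and let early stopping pick the stopping epoch.
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.optimizers import Adam

# Recompile with an explicit (assumed) learning rate.
autoencoder.compile(optimizer=Adam(learning_rate=1e-3), loss='binary_crossentropy')

# Stop once val_loss has not improved for a few epochs and keep the best weights.
early_stop = EarlyStopping(monitor='val_loss', patience=3,
                           restore_best_weights=True)

history = autoencoder.fit(x_train, x_train,
                          epochs=100,            # upper bound; early stopping decides
                          batch_size=128,
                          validation_data=(x_val, x_val),
                          callbacks=[early_stop])
```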
