Good accuracy, poor predictions



I'm working on a multi-class classification project with a CNN. My problem is that I get good training accuracy, but the model does not predict well on the validation data. I added L2 regularization, but it still doesn't generalize well; I also tried different L2 values (1e-3, 1e-4). Here are my accuracy and loss plots. Topology:

import tensorflow
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers import Activation, BatchNormalization, Conv2D, Dense, Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.regularizers import l2

inputs = keras.Input(shape=(512, 512, 3), name="img")

# Block 1: three 32-filter convolutions, shortcut from the first activation
x = Conv2D(32, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(inputs)
x = BatchNormalization()(x)
x1 = Activation('relu')(x)
x2 = Conv2D(32, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x1)
x = BatchNormalization()(x2)
x = Activation('relu')(x)
x3 = Conv2D(32, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x3)
x = tensorflow.keras.layers.add([x, x1]) # ==> Shortcut
x = Activation('relu')(x)

# Block 2: 64 filters, first convolution downsamples with stride 2
x4 = Conv2D(64, kernel_size=(3,3), strides=(2,2), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x4)
x = Activation('relu')(x)
x5 = Conv2D(64, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x5)
x = Activation('relu')(x)
x6 = Conv2D(64, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x6)
x = tensorflow.keras.layers.add([x, x4]) # ==> Shortcut
x = Activation('relu')(x)

# Block 3: 128 filters, first convolution downsamples with stride 2
x7 = Conv2D(128, kernel_size=(3,3), strides=(2,2), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x7)
x = Activation('relu')(x)
x8 = Conv2D(128, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x8)
x = Activation('relu')(x)
x9 = Conv2D(128, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x9)
x = tensorflow.keras.layers.add([x, x7]) # ==> Shortcut
x = Activation('relu')(x)

# Block 4: 256 filters
x10 = Conv2D(256, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x10)
x = Activation('relu')(x)
x11 = Conv2D(256, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x11)
x = Activation('relu')(x)
x12 = Conv2D(256, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x12)
x = tensorflow.keras.layers.add([x, x10]) # ==> Shortcut
x = Activation('relu')(x)

# Block 5: 512 filters
x13 = Conv2D(512, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x13)
x = Activation('relu')(x)
x14 = Conv2D(512, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x14)
x = Activation('relu')(x)
x15 = Conv2D(512, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x15)
x = tensorflow.keras.layers.add([x, x13]) # ==> Shortcut
x = Activation('relu')(x)

# Head: 1x1 convolution, flatten, dropout, softmax over 4 classes
x = Flatten()(Conv2D(1, kernel_size=1, strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(x))
x = layers.Dropout(0.3)(x)
outputs = Dense(4, activation='softmax', kernel_initializer='he_normal')(x)
model = Model(inputs, outputs)
model.summary()

I have tried different numbers of filters and adding/removing layers. Is this problem caused by overfitting? Any suggestions for improving things so that I get smoother curves and better predictions?

  • You could also try adding dropout to the Conv2D layers; that should help with some of the overfitting (see the sketch below).
  • Reduce alpha (the optimizer's learning rate) so the optimizer doesn't overshoot.

That should help :)
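
A minimal sketch of both suggestions, reusing the inputs and model names from the question and assuming an Adam optimizer with categorical cross-entropy (one-hot labels); the dropout rate (0.2) and learning rate (1e-4) are only example values, not tuned for this data:

from tensorflow.keras.layers import Activation, BatchNormalization, Conv2D, SpatialDropout2D
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.regularizers import l2

# Dropout inside a conv block: SpatialDropout2D drops entire feature maps,
# which tends to regularize convolutional activations better than plain Dropout.
x = Conv2D(32, kernel_size=(3,3), strides=(1,1), kernel_regularizer=l2(1e-5), padding='same')(inputs)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = SpatialDropout2D(0.2)(x)  # example rate; typically tuned between ~0.1 and 0.3

# A smaller learning rate so the optimizer does not overshoot
# (e.g. 1e-4 instead of Adam's default 1e-3).
model.compile(optimizer=Adam(learning_rate=1e-4),
              loss='categorical_crossentropy',
              metrics=['accuracy'])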
