val_accuracy does not change, it is always the same value



The val_accuracy of my CNN model does not change; in fact it is always the same value. I have six classes with 190 samples each, roughly 1250 samples in total. The training loss decreases normally, but very slowly. Could this be related to having too few samples?

The setup is as follows:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, models
from tensorflow.keras.optimizers import SGD
from sklearn.model_selection import train_test_split

# stratified 75/25 split, then add a channel dimension for Conv2D
X_train, X_test, y_train, y_test = train_test_split(final, y, test_size=0.25, random_state=42, stratify=y)
X_train = tf.expand_dims(X_train, axis=-1)
X_test = tf.expand_dims(X_test, axis=-1)

num_classes = 6
# convert class vectors (1..6) to one-hot matrices for the
# categorical_crossentropy loss below
y_train = keras.utils.to_categorical(y_train - 1, num_classes)
y_test = keras.utils.to_categorical(y_test - 1, num_classes)

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(100, 400, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))

model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(num_classes, activation='softmax'))

opt = SGD(learning_rate=0.0000001)
model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=200, validation_data=(X_test, y_test), batch_size=4, shuffle=True)

The output looks like this:

Epoch 1/200
213/213 [==============================] - 13s 63ms/step - loss: 31.1534 - accuracy: 0.1529 - val_loss: 49.1441 - val_accuracy: 0.1690
Epoch 2/200
213/213 [==============================] - 13s 62ms/step - loss: 28.5760 - accuracy: 0.1741 - val_loss: 25.3026 - val_accuracy: 0.1690
Epoch 30/200
213/213 [==============================] - 13s 62ms/step - loss: 13.4533 - accuracy: 0.1835 - val_loss: 24.9576 - val_accuracy: 0.1690
Epoch 31/200
213/213 [==============================] - 13s 62ms/step - loss: 13.2200 - accuracy: 0.1671 - val_loss: 27.3058 - val_accuracy: 0.1690
Epoch 105/200
213/213 [==============================] - 13s 62ms/step - loss: 5.3484 - accuracy: 0.1518 - val_loss: 5.4915 - val_accuracy: 0.1690
Epoch 106/200
213/213 [==============================] - 13s 62ms/step - loss: 5.1911 - accuracy: 0.1706 - val_loss: 7.9073 - val_accuracy: 0.1690

I have already tried the following:

- switching the optimizer from Adam to SGD

- lowering/changing the learning rate

- reducing the batch size

- shuffle=True

Does anyone have any other ideas?

In my experience, six classes with 190 samples per class should give reasonable results, although more samples are always better. Depending on the nature of the classification task your model may not be deep enough, so consider adding one or two more convolution-plus-pooling blocks. One immediate issue is that your learning rate is extremely low at 0.0000001; start by trying a learning rate of 0.001 (a minimal recompile sketch follows right after this paragraph). If performance still does not improve, add layers to the model. Unless your images are very large, I would also increase the batch size to around 30, which is fine for images of roughly 224 x 224. If none of that works, you can try transfer learning using the EfficientNetB3 code further below.
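A minimal sketch of those first suggestions, reusing the model, X_train/y_train and X_test/y_test from the question; the 0.001 learning rate and batch size of 30 are the values suggested above, and the epoch count is arbitrary for the sketch:

from tensorflow.keras.optimizers import SGD

# recompile the existing model with a much higher learning rate
opt = SGD(learning_rate=0.001)   # instead of 0.0000001
model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=['accuracy'])

# retrain with a larger batch size
history = model.fit(X_train, y_train, epochs=50, batch_size=30,
                    validation_data=(X_test, y_test), shuffle=True)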

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras import regularizers
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adamax

# img_shape and class_count must be defined for your data beforehand
model_name = 'EfficientNetB3'
base_model = tf.keras.applications.EfficientNetB3(include_top=False, weights="imagenet", input_shape=img_shape, pooling='max')
x = base_model.output
x = keras.layers.BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001)(x)
x = Dense(256, kernel_regularizer=regularizers.l2(0.016), activity_regularizer=regularizers.l1(0.006),
          bias_regularizer=regularizers.l1(0.006), activation='relu')(x)
x = Dropout(rate=.45, seed=123)(x)
output = Dense(class_count, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=output)
model.compile(Adamax(learning_rate=.001), loss='categorical_crossentropy', metrics=['accuracy'])
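A hedged usage sketch, assuming the data tensors from the question: the ImageNet weights of EfficientNetB3 expect 3-channel input, so the 100x400x1 grayscale arrays would first be converted to RGB, and img_shape/class_count would then be (100, 400, 3) and 6. The epoch count here is illustrative, the batch size of 30 is the value suggested above.

# repeat the single grayscale channel to get 3-channel input
X_train_rgb = tf.image.grayscale_to_rgb(tf.cast(X_train, tf.float32))
X_test_rgb = tf.image.grayscale_to_rgb(tf.cast(X_test, tf.float32))
img_shape = X_train_rgb.shape[1:]   # (100, 400, 3)
class_count = num_classes           # 6 classes, as in the question

# build and compile the model as in the snippet above, then:
history = model.fit(X_train_rgb, y_train, epochs=30, batch_size=30,
                    validation_data=(X_test_rgb, y_test), shuffle=True)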

Note: EfficientNet models expect pixel values in the range 0 to 255, so you do not have to rescale your images for them. By the way, are you rescaling the images anywhere in your code, i.e. img = img/255?
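If not, a common preprocessing step for the custom CNN from the question is to scale the raw pixel values to [0, 1]; this is a sketch under the assumption that the inputs are 0-255 images, and it would be skipped for the EfficientNet model above, whose preprocessing is built in:

# scale 0-255 pixel values to [0, 1] for the custom CNN only
X_train_scaled = tf.cast(X_train, tf.float32) / 255.0
X_test_scaled = tf.cast(X_test, tf.float32) / 255.0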
