Why is my Keras model not improving?



Here is my Keras code for multi-class classification, but it always gives the same results (acc=0.3212, val_acc=0.327). When I changed only the model to a pretrained VGG, I got good results; I also coded the same model in PyTorch, and it improved as well. So I'm stuck on this model and I don't know why it doesn't work. P.S.: the other parts (data generator, …) are fine, since I tried them with VGG and they worked well. Here is my code:

import pandas as pd
import numpy as np
import os
import cv2
import matplotlib.pyplot as plt
import warnings
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from mpl_toolkits.axes_grid1 import ImageGrid

path = '/content/train/'
for folder in os.listdir(path):
    print(len(os.listdir(path + folder)))

from tensorflow.keras.layers import Activation, Dropout, Flatten, Dense
def createModel():
    model = keras.models.Sequential()
    model.add(keras.layers.Conv2D(64, (3, 3), input_shape=(224, 224, 3), padding='same'))
    model.add(Activation('relu'))
    model.add(keras.layers.Conv2D(64, (3, 3), padding='same'))
    model.add(Activation('relu'))
    model.add(keras.layers.MaxPooling2D())

    model.add(keras.layers.Conv2D(128, (3, 3), padding='same'))
    model.add(Activation('relu'))
    model.add(keras.layers.Conv2D(128, (3, 3), padding='same'))
    model.add(Activation('relu'))
    model.add(keras.layers.MaxPooling2D())

    model.add(keras.layers.Conv2D(256, (3, 3), padding='same'))
    model.add(Activation('relu'))
    model.add(keras.layers.Conv2D(256, (3, 3), padding='same'))
    model.add(Activation('relu'))
    model.add(keras.layers.Conv2D(256, (3, 3), padding='same'))
    model.add(Activation('relu'))
    model.add(keras.layers.MaxPooling2D())

    model.add(keras.layers.Conv2D(512, (3, 3), padding='same'))
    model.add(Activation('relu'))
    model.add(keras.layers.Conv2D(512, (3, 3), padding='same'))
    model.add(Activation('relu'))
    model.add(keras.layers.Conv2D(512, (3, 3), padding='same'))
    model.add(Activation('relu'))
    model.add(keras.layers.MaxPooling2D())

    model.add(keras.layers.Conv2D(512, (3, 3), padding='same'))
    model.add(Activation('relu'))
    model.add(keras.layers.Conv2D(512, (3, 3), padding='same'))
    model.add(Activation('relu'))
    model.add(keras.layers.Conv2D(512, (3, 3), padding='same'))
    model.add(Activation('relu'))
    model.add(keras.layers.MaxPooling2D())

    model.add(keras.layers.Flatten())

    model.add(keras.layers.Dense(4096, activation='relu'))
    model.add(keras.layers.Dense(4096, activation='relu'))
    model.add(keras.layers.Dense(5, activation='softmax'))
    return model

# Data augmentation
train_image_generator = ImageDataGenerator(
    rescale=1./255,
    rotation_range=5,  # rotations of up to 5 degrees
    validation_split=0.1)  # 10% of the data is held out for validation
#Train & Validation Split
train_dataset = train_image_generator.flow_from_directory(
    directory='/content/train',
    batch_size=32,
    shuffle=True,
    target_size=(224, 224),
    subset="training",
    class_mode='categorical')
validation_dataset = train_image_generator.flow_from_directory(
    directory='/content/train',
    batch_size=32,
    shuffle=True,
    target_size=(224, 224),
    subset="validation",
    class_mode='categorical')
#Organize data for our predictions
test_image_generator = ImageDataGenerator(rescale=1./255)
test_dataset = test_image_generator.flow_from_directory(
    directory='/content/test',
    shuffle=False,
    target_size=(224, 224),
    class_mode=None)

# With categorical cross-entropy, we will test both Adam and RMSprop
model_adam = createModel()
model_RMSprop = createModel()
model_adam.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
model_RMSprop.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['acc'])
history_adam = model_adam.fit(train_dataset, epochs=60, validation_data=validation_dataset)

Please try changing to class_mode='sparse' with loss=SparseCategoricalCrossentropy, since your labels are not one-hot encoded. Also, try reducing the model size with fewer layers, and add Dropout layers, which randomly drop a fraction of that layer's output features during training.

Have a look at this:

model_adam.compile(optimizer='adam', loss = tf.keras.losses.SparseCategoricalCrossentropy(), metrics=['acc'])
history_adam = model_adam.fit(train_dataset, epochs=20, validation_data=validation_dataset)

Output:

Epoch 15/20
104/104 [==============================] - 50s 477ms/step - loss: 0.2948 - acc: 0.8929 - val_loss: 1.2504 - val_acc: 0.6511
Epoch 16/20
104/104 [==============================] - 50s 477ms/step - loss: 0.2751 - acc: 0.8932 - val_loss: 1.4784 - val_acc: 0.6593
Epoch 17/20
104/104 [==============================] - 49s 474ms/step - loss: 0.2518 - acc: 0.9080 - val_loss: 1.4702 - val_acc: 0.6731
Epoch 18/20
104/104 [==============================] - 50s 483ms/step - loss: 0.2213 - acc: 0.9159 - val_loss: 1.6294 - val_acc: 0.6209
Epoch 19/20
104/104 [==============================] - 50s 476ms/step - loss: 0.2136 - acc: 0.9226 - val_loss: 1.6590 - val_acc: 0.6511
Epoch 20/20
104/104 [==============================] - 50s 477ms/step - loss: 0.2195 - acc: 0.9192 - val_loss: 1.9785 - val_acc: 0.6511

Please find the replicated code in a gist here for your reference. Thank you very much.
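For reference, SparseCategoricalCrossentropy expects integer labels (what class_mode='sparse' yields), while CategoricalCrossentropy expects one-hot vectors (what class_mode='categorical' yields); when the label encoding matches the loss, both compute the same value. A minimal sketch with made-up probabilities:

```python
import numpy as np
import tensorflow as tf

# Integer labels, as produced by class_mode='sparse'
y_sparse = np.array([2, 0])
# One-hot labels, as produced by class_mode='categorical'
y_onehot = tf.one_hot(y_sparse, depth=5)

# Made-up predicted probabilities for 2 samples over 5 classes
probs = tf.constant([[0.10, 0.10, 0.60, 0.10, 0.10],
                     [0.70, 0.10, 0.10, 0.05, 0.05]])

scce = tf.keras.losses.SparseCategoricalCrossentropy()
cce = tf.keras.losses.CategoricalCrossentropy()

loss_sparse = float(scce(y_sparse, probs))  # uses integer labels
loss_onehot = float(cce(y_onehot, probs))   # uses one-hot labels
# Both reduce to the same cross-entropy when encodings match the loss
```

The point is that the mismatch to avoid is pairing one encoding with the other loss, which raises a shape error or silently miscomputes.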

The problem was that my model was suffering from vanishing gradients; as a solution, I added batch normalization layers.
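As an illustration (not the exact model above; the layer counts are trimmed for brevity), one common placement is Conv → BatchNorm → ReLU, which keeps activations normalized across the deep stack and mitigates vanishing gradients:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def conv_block(filters):
    # Conv -> BatchNorm -> ReLU; the conv bias is redundant
    # because BatchNormalization re-centers its input anyway
    return [
        layers.Conv2D(filters, (3, 3), padding='same', use_bias=False),
        layers.BatchNormalization(),
        layers.Activation('relu'),
    ]

model = keras.Sequential(
    [keras.Input(shape=(224, 224, 3))]
    + conv_block(64) + conv_block(64) + [layers.MaxPooling2D()]
    + conv_block(128) + conv_block(128) + [layers.MaxPooling2D()]
    + [layers.Flatten(),
       layers.Dense(5, activation='softmax')]
)
```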
