Very low accuracy with VGG16 for acne severity grading



I am trying to build an image grading classifier with 3 classes (Level 0, Level 1, Level 2). I don't have many images in my dataset (around 500 per class), so I used the pretrained VGG16 model. However, the accuracy is very low (~0.33) and barely increases during training.

# Assumed imports (TensorFlow 2.x Keras API; not shown in the original snippet)
from tensorflow import keras
from tensorflow.keras import layers, models
from tensorflow.keras.layers import Input
from tensorflow.keras.applications import VGG16
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import SGD

# Load VGG16 model
vgg_model = VGG16(weights="imagenet",
                  include_top=False,
                  input_tensor=Input(shape=(224, 224, 3)))
vgg_model.summary()

# Freeze the pretrained layers
for layer in vgg_model.layers:
    layer.trainable = False

# Add the classification head
output_model = vgg_model.output
output_model = layers.Dropout(0.25)(output_model)
output_model = layers.Flatten()(output_model)
output_model = layers.Dense(128, activation="relu")(output_model)
output_model = layers.Dropout(0.5)(output_model)
output_model = layers.Dense(3, activation="softmax")(output_model)
vggmodel = models.Model(inputs=vgg_model.input, outputs=output_model)
vggmodel.summary()

# Image augmentation on the training set
train_datagen = ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.15,
    rescale=1./255,
)
# Only rescale on the validation set
validate_datagen = ImageDataGenerator(
    rescale=1./255
)
# Batch size of 64, images resized to 224x224
batch_size = 64
target_size = (224, 224)

# Initialize the training and validation data generators
train_generator = train_datagen.flow_from_directory(
    directory=training_path, class_mode="categorical",
    batch_size=batch_size, target_size=target_size,
    color_mode="rgb", shuffle=True)
validation_generator = validate_datagen.flow_from_directory(
    directory=validation_path, class_mode="categorical",
    batch_size=batch_size, target_size=target_size,
    color_mode="rgb", shuffle=True)

vggmodel.compile(loss="categorical_crossentropy",
                 optimizer=SGD(0.01), metrics=["accuracy"])

earlystopping = keras.callbacks.EarlyStopping(monitor="val_loss",
                                              mode="min", patience=2,
                                              restore_best_weights=True)

vggmodel.fit(train_generator, steps_per_epoch=int(1166/batch_size), epochs=100,
             validation_data=validation_generator, validation_steps=5,
             callbacks=[earlystopping])
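As an aside, `int(1166/batch_size)` floors the division, so the last partial batch of every epoch is silently skipped; rounding up keeps every image. A minimal sketch, assuming the 1166 training images from the `fit()` call above:

```python
import math

num_train_images = 1166  # count used in the fit() call above
batch_size = 64

# int() floors the quotient, so the final partial batch is dropped
steps_floor = int(num_train_images / batch_size)

# math.ceil rounds up, so every image is seen each epoch
steps_ceil = math.ceil(num_train_images / batch_size)

print(steps_floor, steps_ceil)  # 18 19
```

This is not the cause of chance-level accuracy, but it is an easy off-by-one to fix while debugging.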

I also tried building my own model, but the performance was similar.

input_shape = (224, 224, 3)
cnn_model = models.Sequential()
cnn_model.add(layers.Conv2D(64, (3, 3), activation='relu',
                            input_shape=input_shape))
cnn_model.add(layers.Conv2D(64, (3, 3), activation='relu'))
cnn_model.add(layers.MaxPool2D((2, 2)))
cnn_model.add(layers.Conv2D(64, (3, 3), activation='relu'))
cnn_model.add(layers.MaxPool2D((2, 2)))
cnn_model.add(layers.Dropout(0.25))
cnn_model.add(layers.Flatten())
cnn_model.add(layers.Dense(128, activation='relu'))
cnn_model.add(layers.Dropout(0.5))
cnn_model.add(layers.Dense(3, activation='softmax'))
cnn_model.summary()
cnn_model.compile(loss="categorical_crossentropy",
                  optimizer=SGD(0.01), metrics=["accuracy"])

earlystopping = keras.callbacks.EarlyStopping(monitor="val_loss",
                                              mode="min", patience=2,
                                              restore_best_weights=True)
cnn_model.fit(train_generator, steps_per_epoch=int(1248/batch_size), epochs=15,
              validation_data=validation_generator, validation_steps=2)

What am I doing wrong? Sorry, I'm a beginner :/

  1. Use a max-pooling layer instead of the Flatten layer.
  2. You will likely have to lower the learning rate to 0.001 to get acceptable results.
  3. Use smaller dropout rates, e.g. 0.2 or 0.3.
  4. 33% is just random guessing. Print a few images and check whether their labels are correct.
  5. Check the model's output on some sample images. Are some columns too close to 1 or 0? Does the output just look random?

If you work through these checks, you will probably catch the bug.
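To see why ~0.33 is exactly chance level here: with three roughly balanced classes, a model that always predicts one class, or guesses uniformly, scores about one third. A quick sanity check using the approximate per-class counts from the question (pure Python, no Keras needed):

```python
# Approximate per-class counts from the question (~500 images each)
class_counts = {"Level 0": 500, "Level 1": 500, "Level 2": 500}

total = sum(class_counts.values())

# Accuracy of a model that always predicts the most common class
majority_baseline = max(class_counts.values()) / total

# Accuracy of guessing uniformly at random
uniform_baseline = 1 / len(class_counts)

print(f"majority baseline: {majority_baseline:.3f}")  # 0.333
print(f"uniform baseline:  {uniform_baseline:.3f}")   # 0.333
```

If training accuracy stays pinned at this baseline, the network is learning nothing at all, which usually points to a label or pipeline bug (point 4) or a learning rate that is too high (point 2) rather than a capacity problem.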
