Keras model gives very low training and validation accuracy for multi-label image classification



I have 50 classes of images, which are passed into the model below. But no matter what parameter tuning I do, I get almost the same accuracy. The training and validation data are correct.

Each class has 34 training images and 6 validation images.

import keras
from keras.layers import Activation, Dense, Dropout, Conv2D, Flatten, MaxPooling2D, BatchNormalization
from keras.models import Sequential
from keras.optimizers import Adam, SGD
model = Sequential()
input_shape=(256, 256, 3)
adam = Adam(lr=0.000001,decay=0.001)
#sgd = SGD(lr=0.1, decay=1e-2, momentum=0.9)
chanDim=-1
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization(axis=chanDim))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization(axis=chanDim))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(BatchNormalization(axis=chanDim))
# model.add(Conv2D(64, (3, 3)))
# model.add(Activation('relu'))
# model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
# model.add(Dense(300))
# model.add(Dropout(rate=0.5))
# model.add(Activation('relu'))
model.add(Dense(512))
model.add(Dropout(rate=0.5))
model.add(Activation('relu'))
model.add(Dropout(rate=0.5))
model.add(Dense(50))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',optimizer=adam,metrics=['accuracy'])
import PIL
from PIL import Image
from keras.preprocessing.image import ImageDataGenerator
train_data_dir = 'C:/Users/abhir/Desktop/Difference4/train'
validation_data_dir = 'C:/Users/abhir/Desktop/Difference4/validate'
epochs = 10
# adding more augmentation parameters to the training generator did not help much either
train_datagen = ImageDataGenerator(rescale=1./255)
validate_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(train_data_dir, target_size=(256,256), batch_size=12, class_mode='categorical',seed=7)
validate_generator = validate_datagen.flow_from_directory(validation_data_dir, target_size=(256,256), batch_size=6, class_mode='categorical',seed=7)
# increasing steps_per_epoch and the batch size doesn't help much either
model.fit_generator(train_generator, steps_per_epoch=100,epochs=5, validation_data=validate_generator, validation_steps=50)

The result is as follows:

100/100 [==============================] - 337s 3s/step - loss: 5.7115 - acc: 0.0308 - val_loss: 3.9834 - val_acc: 0.0367

You are training a neural network with millions of trainable parameters on only 34 images per class over 10 epochs (340 images). As a rule of thumb, you should have more training examples than trainable parameters. The number of trainable parameters is printed by model.summary().
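As a sanity check, the trainable-parameter count can be estimated by hand. This is a hypothetical re-derivation of what model.summary() would report for the architecture above, assuming Keras's default 'valid' padding for Conv2D and floor division for pooling:

```python
# Rough trainable-parameter count for the model in the question.
# model.summary() prints the authoritative numbers.

def conv2d_params(kernel_h, kernel_w, in_ch, filters):
    # one weight per kernel element per input channel, plus one bias per filter
    return (kernel_h * kernel_w * in_ch + 1) * filters

def dense_params(in_units, out_units):
    # fully connected weights plus one bias per output unit
    return (in_units + 1) * out_units

params = 0
# 256x256x3 input; each 'valid' Conv2D shrinks each side by 2, each pool halves it
params += conv2d_params(3, 3, 3, 32)   # -> 254x254x32, pool -> 127x127x32
params += 2 * 32                       # BatchNorm gamma/beta per channel
params += conv2d_params(3, 3, 32, 32)  # -> 125x125x32, pool -> 62x62x32
params += 2 * 32
params += conv2d_params(3, 3, 32, 64)  # -> 60x60x64, pool -> 30x30x64
params += 2 * 64
flat = 30 * 30 * 64                    # 57600 features after Flatten
params += dense_params(flat, 512)      # this layer alone is ~29.5M parameters
params += dense_params(512, 50)
print(params)  # roughly 29.5 million trainable parameters
```

Almost all of the parameters sit in the first Dense layer, which is why the rule of thumb above is violated so badly with only 1700 training images.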

So you could try around 1000 epochs and watch how your network overfits the training data, but in the end you simply don't have enough data. Look at the loss curves, and check the TensorBoard histograms to see whether your model is learning anything at all.
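A minimal sketch of wiring up the TensorBoard callback mentioned above (the `./logs` directory is an assumed path; `histogram_freq=1` writes weight histograms every epoch):

```python
from keras.callbacks import TensorBoard

# assumed log directory; view it with: tensorboard --logdir ./logs
tb = TensorBoard(log_dir='./logs', histogram_freq=1, write_graph=False)

# then pass it to training, e.g.:
# model.fit_generator(train_generator, ..., callbacks=[tb])
```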
