I've just started learning deep learning and still have a lot to learn. My first project is the one below: it tries to classify 5 different species of snakes, using 17,389 images in total across the 5 classes (roughly 3,500 per class). I used a very small model; I believe an even smaller one would work. But my accuracy never goes above 30% (50% at best), and the loss is stuck around 1. I'm completely new to this and only just understand how the different activations and layers work. I've tried tweaking this model, but it hasn't improved anything. I haven't checked the test-set data because the results are so bad. I've done all the basic preprocessing I know of. Any kind of help would be greatly appreciated :)
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    data_format="channels_last",
    validation_split=0.25,
)
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical',
    subset='training',
    shuffle=True)
validation_generator = train_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical',
    subset='validation',
    shuffle=True)
from keras.utils.np_utils import to_categorical
train_labels = train_generator.classes
num_classes = len(train_generator.class_indices)
train_labels = to_categorical(train_labels, num_classes=num_classes)
print(train_labels)
# Creating a Sequential model
from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(kernel_size=(3, 3), filters=32, activation='tanh', input_shape=(150, 150, 3)))
model.add(Conv2D(filters=30, kernel_size=(3, 3), activation='tanh'))
model.add(MaxPool2D(2, 2))
model.add(Conv2D(filters=30, kernel_size=(3, 3), activation='tanh'))
model.add(MaxPool2D(2, 2))
model.add(Conv2D(filters=30, kernel_size=(3, 3), activation='tanh'))
model.add(Flatten())
model.add(Dense(20, activation='relu'))
model.add(Dense(15, activation='relu'))
model.add(Dense(5, activation='softmax'))
model.compile(
    loss='categorical_crossentropy',
    metrics=['acc'],
    optimizer='adam'
)
history = model.fit(
    train_generator,
    steps_per_epoch=train_generator.samples // train_generator.batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=validation_generator.samples // validation_generator.batch_size)
Can you tell me where I went wrong? I suspect it's everywhere.
When you pass class_mode='categorical', your labels are already one-hot encoded, so you don't need to encode them a second time with train_labels = to_categorical(train_labels, num_classes=num_classes). See the documentation for details.
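As a quick sanity check (a minimal sketch, using the train_generator you defined above), you can pull one batch and see that the labels already come out one-hot encoded:

# flow_from_directory with class_mode='categorical' already yields
# one-hot labels, so the extra to_categorical call is redundant.
x_batch, y_batch = next(train_generator)
print(x_batch.shape)  # e.g. (batch_size, 150, 150, 3)
print(y_batch.shape)  # (batch_size, 5): one row per image, one column per class
print(y_batch[0])     # something like [0. 0. 1. 0. 0.]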
Your Conv2D layers also use the tanh activation; it's better to use relu:
model.add(Conv2D(filters=30, kernel_size=(3, 3), activation='relu'))
As you go deeper into the network, try increasing the number of filters, like this:
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPool2D(2, 2))
model.add(Conv2D(filters=128, kernel_size=(3, 3), activation='relu'))
model.add(MaxPool2D(2, 2))
model.add(Conv2D(filters=256, kernel_size=(3, 3), activation='relu'))
After flattening, use a Dense layer with more units:
model.add(Dense(128, activation='relu'))
model.add(Dense(5, activation='softmax'))
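Putting those suggestions together, a revised model could look like the sketch below (the exact filter counts are just an example, and the 150x150x3 input shape is carried over from your original code):

from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Flatten, Dense

# Revised model: relu activations and filter counts that grow with depth
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(3, 3), activation='relu', input_shape=(150, 150, 3)))
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPool2D(2, 2))
model.add(Conv2D(filters=128, kernel_size=(3, 3), activation='relu'))
model.add(MaxPool2D(2, 2))
model.add(Conv2D(filters=256, kernel_size=(3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(5, activation='softmax'))  # 5 snake classes, softmax output for categorical_crossentropy

model.compile(loss='categorical_crossentropy', metrics=['acc'], optimizer='adam')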
You also don't need to set steps_per_epoch explicitly; validation_generator.samples // validation_generator.batch_size is equivalent to len(validation_generator) in this case.
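With that, the training call can be simplified to the sketch below (assuming the generators and the epochs variable defined earlier); Keras will work out the number of steps per epoch from the length of each generator:

# steps_per_epoch and validation_steps are inferred from len(generator),
# so they can be omitted entirely.
history = model.fit(
    train_generator,
    epochs=epochs,
    validation_data=validation_generator)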