How can I improve the accuracy of a CNN classifier in Keras?



I am training with batch_size = 1024 and 100 epochs. The classifier reaches a maximum accuracy of 70% with my model. How can I improve the accuracy?

# Imports assumed for this snippet (tf.keras); in_shp and classes are defined elsewhere.
from tensorflow.keras import models
from tensorflow.keras.layers import (Reshape, ZeroPadding2D, Conv2D, Dropout,
                                     Flatten, Dense, Activation)

dr = 0.5 # dropout rate 
model = models.Sequential() 
model.add(Reshape(([1]+in_shp), input_shape=in_shp))
model.add(ZeroPadding2D((0, 2)))
model.add(Conv2D(256, (1, 3),padding='valid', activation="relu", name="conv1",  kernel_initializer='glorot_uniform',data_format="channels_first"))
model.add(Dropout(dr)) 
model.add(ZeroPadding2D((0, 2)))
model.add(Conv2D(80, (2, 3), padding="valid", activation="relu", name="conv2",  kernel_initializer='glorot_uniform',data_format="channels_first"))
model.add(Dropout(dr))
model.add(Flatten())
model.add(Dense(256, activation='relu', kernel_initializer='he_normal', name="dense1"))
model.add(Dropout(dr))
model.add(Dense( len(classes),  kernel_initializer='he_normal', name="dense2" ))
model.add(Activation('softmax'))
model.add(Reshape([len(classes)]))
model.compile(loss='categorical_crossentropy', optimizer='Adam',metrics=['accuracy'])
model.summary()
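
A minimal sketch of the corresponding training call, assuming the data arrays X_train, Y_train, X_val and Y_val are prepared elsewhere (these names are placeholders, not part of the question):

# Hypothetical training call matching the stated batch size and epoch count.
history = model.fit(X_train, Y_train,
                    batch_size=1024,
                    epochs=100,
                    validation_data=(X_val, Y_val),
                    verbose=2)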

Here is a list of suggested modifications:

  • You can drop the ZeroPadding2D layers unless they are really required for the images you are working with.
  • When stacking multiple Conv2D() layers, start with a small filter count and increase it as you go, e.g. start with 32, then 64, then 128. Separate them with MaxPooling2D() and use kernel sizes appropriate to your use case.
  • Add one more Dense() layer after Flatten() and before the final layer.

Check out the modified code below, with some comments:
from tensorflow.keras.layers import MaxPooling2D  # additional import used below

dr = 0.5 # dropout rate 
model = models.Sequential() 
model.add(Reshape(([1]+in_shp), input_shape=in_shp))
# You could drop the ZeroPadding2D, as it only adds rows and columns of zeros around the image tensor
model.add(ZeroPadding2D((0, 2)))
# Start with a lower filter count, then increase as you go
model.add(Conv2D(32, (1, 3),padding='valid', activation="relu", name="conv1",  kernel_initializer='glorot_uniform',data_format="channels_first"))
model.add(MaxPooling2D(2, data_format="channels_first"))  # keep data_format consistent with the Conv2D layers
model.add(Dropout(dr)) 
#model.add(ZeroPadding2D((0, 2))) # no need 
# Increase the filter count
model.add(Conv2D(64, (2, 3), padding="valid", activation="relu", name="conv2",  kernel_initializer='glorot_uniform', data_format="channels_first"))
model.add(MaxPooling2D(2, data_format="channels_first"))
model.add(Conv2D(64, (2, 3), padding="valid", activation="relu", name="conv3",  kernel_initializer='glorot_uniform', data_format="channels_first"))
model.add(MaxPooling2D(2, data_format="channels_first"))
model.add(Dropout(dr))
model.add(Flatten())
model.add(Dense(256, activation='relu', kernel_initializer='he_normal', name="dense1"))
model.add(Dropout(dr))
# Add one more Dense layer (it needs a unique name)
model.add(Dense(256, activation='relu', kernel_initializer='he_normal', name="dense1_2"))
model.add(Dense( len(classes),  kernel_initializer='he_normal', name="dense2" ))
model.add(Activation('softmax'))
model.add(Reshape([len(classes)]))
model.compile(loss='categorical_crossentropy', optimizer='Adam',metrics=['accuracy'])
model.summary()

Finally, it is obvious that you are using Dropout() quite extensively.

My final 2 cents:

  • You need to pay attention to the validation loss, not just the accuracy (a minimal callback sketch follows this list).
  • Try transfer learning if your data samples are limited.
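
To act on the first point, here is a minimal sketch using standard Keras callbacks so that training tracks the validation loss rather than only the accuracy; the data arrays and checkpoint path are placeholders, not from the answer:

from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

# Stop when val_loss stops improving and keep the best weights seen so far.
callbacks = [
    EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True),
    ModelCheckpoint('best_model.h5', monitor='val_loss', save_best_only=True),
]

history = model.fit(X_train, Y_train,
                    batch_size=1024,
                    epochs=100,
                    validation_data=(X_val, Y_val),
                    callbacks=callbacks)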
