High training accuracy in a Keras DNN model, but every input is classified into the same class



I am using the Xception architecture with data augmentation on 3 classes. My original dataset has 3 images in each of the three classes, organized in a directory.

# import the necessary packages
from tensorflow import keras
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.preprocessing import image_dataset_from_directory
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(
    fill_mode='constant',
    cval=255.0,
    rotation_range=90,
    zoom_range=[1.0, 1.3],
    rescale=1.0/255.0
)
it = train_datagen.flow_from_directory(
    'training_data/',
    target_size=(260, 380),
    batch_size=9,
    save_to_dir='augmented_data/',
    save_format='jpeg'
)
validation_ds = image_dataset_from_directory(
    directory='validation_data/',
    labels='inferred',
    label_mode='categorical',
    batch_size=1,
    image_size=(380, 260)
)

# scale data to the range of [0, 1]
def normalize(data, labels):
    return data / 255.0, labels

validation_ds = validation_ds.map(normalize)
# initialize the optimizer and model
print("[INFO] compiling model...")
model = keras.applications.Xception(weights=None, input_shape=(380, 260, 3), classes=3)
opt = SGD(learning_rate=0.01)
model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"])
print("[INFO] training...")
model.fit(it, steps_per_epoch=1, epochs=100, verbose=1)
# show the accuracy on the validation set
print("[INFO] evaluating...")
(loss, accuracy) = model.evaluate(validation_ds, batch_size=3, verbose=1)
print("[INFO] accuracy: {:.2f}%".format(accuracy * 100))
for img, label in validation_ds:
    probs = model.predict(img)
    prediction = probs.argmax(axis=1)
    print("PREDICTION: " + str(probs))
    print("ACTUAL LABEL: " + str(label))

My training accuracy converges to 1.0000, but the calls to model.predict look like this:

[INFO] accuracy: 33.33%
PREDICTION: [[0.30813622 0.3550096  0.3368542 ]]
ACTUAL LABEL: tf.Tensor([[1. 0. 0.]], shape=(1, 3), dtype=float32)
PREDICTION: [[0.3081677  0.35502157 0.33681074]]
ACTUAL LABEL: tf.Tensor([[0. 1. 0.]], shape=(1, 3), dtype=float32)
PREDICTION: [[0.3081628  0.35502544 0.3368117 ]]
ACTUAL LABEL: tf.Tensor([[0. 1. 0.]], shape=(1, 3), dtype=float32)
PREDICTION: [[0.30813095 0.3550423  0.33682677]]
ACTUAL LABEL: tf.Tensor([[0. 0. 1.]], shape=(1, 3), dtype=float32)
...

For some reason the second class is always chosen, which is why my model's evaluation accuracy stays stuck at 33.3%. I have tried varying the batch size, the learning rate, and other hyperparameters, but I cannot change this result.

Your neural network is probably just undertrained. You have a batch size of 9, 1 step per epoch, and 100 epochs. That means that by the end of training only 900 samples (9 per epoch × 100 epochs) will have passed through the network. I am not sure at what point Keras shuffles, so those may even be the same images 900 times over.

I suggest you increase steps_per_epoch by a lot, and probably increase the batch size as well; a rough sketch is below.
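To make that concrete, here is a minimal sketch of the adjusted iterator and training call. The value steps_per_epoch=100 is an illustrative guess rather than a tuned setting, and save_to_dir is dropped only to keep the sketch short. The batch size stays at 9 because only 9 source images exist, so every batch already contains each original; what changes is that every extra step applies fresh random augmentations, so more steps per epoch means far more distinct augmented samples seen per epoch:

# illustrative values only, not tuned settings
it = train_datagen.flow_from_directory(
    'training_data/',
    target_size=(260, 380),
    batch_size=9,
    shuffle=True  # the default; made explicit here
)
# was steps_per_epoch=1: now roughly 900 augmented samples per epoch instead of 9
model.fit(it, steps_per_epoch=100, epochs=100, verbose=1)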
