Keras validation accuracy is much lower than training accuracy, even when training and validating on the same dataset



We tried transfer learning with the Keras ResNet50 application (TensorFlow as the backend) on our own dataset of 2,000 classes, with 14,000 images as the training set and 5,261 images as the validation set. The training results we got showed a large gap between training and validation in both loss and accuracy. We then tried using the same images for training and validation, i.e. training on the 14,000 images and validating on the same 14,000 images; the result was similar: high training accuracy, low validation accuracy.

Keras version: 2.1.6

TensorFlow version: 1.8.0

The code (with the same dataset used for training and validation) is as follows:

from __future__ import print_function
from keras.applications.resnet50 import ResNet50
from keras.applications.resnet50 import preprocess_input, decode_predictions
from keras.models import *
from keras.layers import *
from keras.callbacks import *
from keras.preprocessing.image import ImageDataGenerator
from datetime import datetime
from keras.optimizers import SGD
import numpy as np
batch_size = 28   # tweak to your GPU's capacity
img_height = 224   # InceptionResNetV2 & Xception like 299, ResNet50 & VGG like 224
img_width = img_height 
channels = 3
input_shape = (img_height, img_width, channels)
best_model = 'best_model.h5'
train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
train_generator = train_datagen.flow_from_directory(
    'data/train',  # this is the target directory
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical')
classes = len(train_generator.class_indices)
n_of_train_samples = train_generator.samples
callbacks = [ModelCheckpoint(filepath=best_model, verbose=0, save_best_only=True),
             EarlyStopping(monitor='val_acc', patience=3, verbose=0)]
base_model = ResNet50(input_shape=input_shape, weights='imagenet', include_top=False)
# first: train only the top layers (which were randomly initialized)
# i.e. freeze all convolutional ResNet50 layers
for layer in base_model.layers:
    layer.trainable = False
# cut the graph just before the built-in 'avg_pool' layer, so custom
# pooling can be attached to the last convolutional feature map
pool_layer = [layer for layer in base_model.layers if layer.name == 'avg_pool'][0]
base_model = Model(base_model.input, pool_layer.input)
base_model.layers.pop()
dropout = [.25, .25]
dense = 1024
last = base_model.output
a = MaxPooling2D(pool_size=(7, 7), name='maxpool')(last)
b = AveragePooling2D(pool_size=(7, 7), name='avgpool')(last)
x = concatenate([a, b], axis=1)
x = Flatten()(x)
x = Dense(dense, kernel_initializer='uniform', activation='relu')(x)  # 'init' is the Keras 1 name for this argument
x = BatchNormalization()(x)
x = Dropout(dropout[0])(x)
x = Dense(classes, activation='softmax')(x)
model = Model(base_model.input, outputs=x)
print("Start time: %s" % str(datetime.now()))
# compile the model (should be done *after* setting layers to non-trainable)
model.compile(optimizer=SGD(lr=1e-2, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])
# train the model on the new data for a few epochs
model.fit_generator(
    train_generator,
    steps_per_epoch=n_of_train_samples // batch_size,
    epochs=3,
    validation_data=train_generator,  # validating on the training generator itself
    validation_steps=n_of_train_samples // batch_size,
    callbacks=callbacks)
print("End time: %s" % str(datetime.now()))

The training results are as follows:

Found 14306 images belonging to 2000 classes.
Start time: 2018-05-21 10:51:34.459545
Epoch 1/3
510/510 [==============================] - 10459s 21s/step - loss: 5.6433 - acc: 0.1538 - val_loss: 9.8465 - val_acc: 0.0024
Epoch 2/3
510/510 [==============================] - 10258s 20s/step - loss: 1.3632 - acc: 0.8550 - val_loss: 10.3264 - val_acc: 0.0044
Epoch 3/3
510/510 [==============================] - 63640s 125s/step - loss: 0.2367 - acc: 0.9886 - val_loss: 10.4537 - val_acc: 0.0034
End time: 2018-05-22 10:17:42.028052

We know we should not use the same dataset for training and validation, but we simply cannot understand why Keras reports such a large gap between training and validation loss/accuracy when the two datasets are identical.
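
One way to check how much of the gap comes from how the two numbers are computed (the training metrics are averaged over an epoch while the weights keep changing, whereas validation runs once at the end in inference mode, with dropout disabled and BatchNormalization using its moving statistics) is to re-evaluate the trained model on the same generator. Below is a minimal sketch reusing the variables from the code above; evaluate_generator is the standard Keras 2 API:

# Re-evaluate the freshly trained model on the same training generator,
# this time entirely in inference mode, and compare the result with the
# last epoch's training accuracy reported by fit_generator.
scores = model.evaluate_generator(
    train_generator,
    steps=n_of_train_samples // batch_size)
print("Inference-mode loss/acc on the training images: %s" % str(scores))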

P.S. We tried the same dataset (2,000 classes, 14,000 training images, 5,261 validation images) with ResNet50 in the fast.ai library, and there the training and validation losses did not differ much. The fast.ai code and results are as follows:

from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
from datetime import datetime
PATH = "data/"
sz=224
arch=resnet50
bs=28
tfms = tfms_from_model(arch, sz)
data = ImageClassifierData.from_paths(PATH, tfms=tfms, bs=bs)
learn = ConvLearner.pretrained(arch, data, precompute=False)
print("Start time: %s" % str(datetime.now()))
learn.fit(1e-2, 5)
print("End time: %s" % str(datetime.now()))
Start time: 2018-05-02 18:08:51.644750
0%|          | 1/487 [00:14<2:00:00, 14.81s/it, loss=tensor(7.5704)]
[0.      6.13229 5.2504  0.26458]                                     
[1.      3.70098 2.74378 0.6752 ]                                     
[2.      1.80197 1.08414 0.88106]                                     
[3.      0.83221 0.50391 0.9424 ]                                     
[4.      0.45565 0.31056 0.95554]                                     
End time: 2018-05-03 00:27:13.147758

Not an answer, but a suggestion for inspecting the per-batch loss/metrics, unaffected by the other batches:

def batchEnd(batch, logs):
    print("\nfinished batch " + str(batch) + ": " + str(logs) + "\n")

metricCallback = LambdaCallback(on_batch_end=batchEnd)
callbacks = [metricCallback,
             ModelCheckpoint(filepath=best_model, verbose=0, save_best_only=True),
             EarlyStopping(monitor='val_acc', patience=3, verbose=0)]

This way you will see the metrics for each individual batch, unaffected by the others. (I am assuming Keras does some kind of averaging/totaling when it displays the metrics for an epoch.)
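
For completeness, the extended callbacks list would then be passed to the same fit_generator call as in the question (a sketch reusing the question's variables):

model.fit_generator(
    train_generator,
    steps_per_epoch=n_of_train_samples // batch_size,
    epochs=3,
    validation_data=train_generator,
    validation_steps=n_of_train_samples // batch_size,
    callbacks=callbacks)  # metricCallback now prints per-batch logs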

Every time you start fitting, it gives different results, because the initial weights are loaded differently (in the library's multithreaded environment)... If you have an imbalanced dataset, it is also hard to judge how trustworthy the results are... Besides, I have always believed that at least 50-100 epochs are needed to get reasonably reliable results (3 is not enough).
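
If that run-to-run variation of the randomly initialized head is a concern, the seeds can be pinned before building the model. This is a minimal sketch for the Keras 2.1.6 / TensorFlow 1.8 stack used in the question; note that multithreading and GPU ops can still introduce some nondeterminism:

import random
import numpy as np
import tensorflow as tf

# Pin the Python, NumPy and TensorFlow graph-level seeds so the newly
# added Dense layers start from the same random weights on every run.
random.seed(42)
np.random.seed(42)
tf.set_random_seed(42)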
