Training a VGG16 model with TensorFlow



I am trying to use Keras's VGG16 to train a model for image detection.

Following these articles (https://www.pyimagesearch.com/2019/06/03/fine-tuning-with-keras-and-deep-learning/ and https://learnopencv.com/keras-tutorial-fine-tuning-using-pre-trained-models/), I added some extra Dense layers on top of the VGG16 model. However, the training accuracy after 20 epochs hovers around 35% to 41%, which does not match the results reported in those articles (90%+).

So I am wondering whether I did something wrong in the code below.

Basic setup

import math

import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import Input

url = '/content/drive/My Drive/fer2013.csv'
batch_size = 64
img_width, img_height = 48, 48
# 0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral
num_classes = 7
model_path = '/content/drive/My Drive/Af/cnn.h5'

def _load_fer():
    # Load training and eval data
    df = pd.read_csv(url, sep=',')
    train_df = df[df['Usage'] == 'Training']
    eval_df = df[df['Usage'] == 'PublicTest']
    return train_df, eval_df

def _preprocess_fer(df, label_col='emotion', feature_col='pixels'):
    labels, features = df.loc[:, label_col].values.astype(np.int32), [
        np.fromstring(image, np.float32, sep=' ')
        for image in df.loc[:, feature_col].values]
    labels = [to_categorical(l, num_classes=num_classes) for l in labels]
    # Replicate the single grayscale channel three times for VGG16's RGB input
    features = np.stack((features,) * 3, axis=-1)
    features /= 255
    features = features.reshape(features.shape[0], img_width, img_height, 3)
    return features, labels

# Load fer data
train_df, eval_df = _load_fer()
# Preprocess fer data
x_train, y_train = _preprocess_fer(train_df)
x_valid, y_valid = _preprocess_fer(eval_df)

gen = ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')
train_generator = gen.flow(x_train, y_train, batch_size=batch_size)
# fit() below references valid_generator and predict_size_valid, which the
# original post never defines; they are reconstructed here analogously
valid_generator = gen.flow(x_valid, y_valid, batch_size=batch_size)

predict_size_train = int(math.ceil(len(x_train) / batch_size))
predict_size_valid = int(math.ceil(len(x_valid) / batch_size))

input_tensor = Input(shape=(img_width, img_height, 3))

Now for the model-building part:

from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Flatten, Dense, Dropout, BatchNormalization
from tensorflow.keras.models import Model

baseModel = VGG16(
    include_top=False, weights='imagenet',
    input_tensor=input_tensor
)
# Construct the head of the model that will be placed on top of the base model (fine tuning)
headModel = baseModel.output
headModel = Flatten()(headModel)
headModel = Dense(1024, activation="relu")(headModel)
#headModel = Dropout(0.5)(headModel)
headModel = BatchNormalization()(headModel)
headModel = Dense(num_classes, activation="softmax")(headModel)
model = Model(inputs=baseModel.input, outputs=headModel)

# Freeze the convolutional base so only the new head is trained
for layer in baseModel.layers:
    layer.trainable = False

Compiling and training the model

model.compile(loss='categorical_crossentropy',
              optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              metrics=['accuracy'])
history = model.fit(train_generator,
                    steps_per_epoch=predict_size_train * 1,
                    epochs=20,
                    validation_data=valid_generator,
                    validation_steps=predict_size_valid)

Result: the training accuracy stays around 35-41%, as described above. I would greatly appreciate any suggestions. Kind regards.

Since all of the base layers are frozen, a single Dense layer on top may not give you the accuracy you want. Also, if you are not in a hurry, you can simply omit the validation_steps and steps_per_epoch arguments. Note as well that in the tutorial you followed the model's metrics fluctuate, which is undesirable.

I would suggest:

for layer in baseModel.layers:
    layer.trainable = False
# The layer name may differ; check with baseModel.summary()
base_out = baseModel.get_layer('block3_pool').output

This way you can take the output of a specific layer. Once you have that output, you can add some convolutions on top of it, and after the convolutions try stacking more Dense layers, like:

x = tf.keras.layers.Flatten()(x)
x = Dense(512, activation='relu')(x)
x = Dropout(0.3)(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.2)(x)
output_model = Dense(num_classes, activation='softmax')(x)
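
For reference, here is one way the pieces above could fit together end-to-end. The snippet starts from an undefined x, so this sketch supplies it; the single Conv2D/MaxPooling2D block and its filter count are illustrative assumptions, not something the answer prescribes:

from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.models import Model

# base_out is the 'block3_pool' output captured above
x = Conv2D(128, (3, 3), activation='relu', padding='same')(base_out)  # illustrative extra convolution
x = MaxPooling2D((2, 2))(x)
x = Flatten()(x)
x = Dense(512, activation='relu')(x)
x = Dropout(0.3)(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.2)(x)
output_model = Dense(num_classes, activation='softmax')(x)
# Wire the new stack back to the original VGG16 input
model = Model(inputs=baseModel.input, outputs=output_model)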

If you don't want to add convolutions and would rather use baseModel as it is, that's fine too, but then you can do this:

# 12 is arbitrary; try different cutoffs. Not all layers are frozen this time.
for layer in baseModel.layers[:12]:
    layer.trainable = False
# Check which layers are frozen
for i, layer in enumerate(baseModel.layers):
    print(i, layer.name, layer.trainable)

After that, you can try a head like:

headModel = baseModel.output
headModel = Flatten()(headModel)
headModel = Dense(1024, activation="relu")(headModel)
headModel = Dropout(0.5)(headModel)
headModel = Dense(512, activation="relu")(headModel)
headModel = Dense(num_classes, activation="softmax")(headModel)
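
To attach this head to the partially unfrozen base, you would build and compile the model the same way the question does; the lower learning rate here is an assumption that commonly pairs with unfreezing base layers:

model = Model(inputs=baseModel.input, outputs=headModel)
model.compile(loss='categorical_crossentropy',
              optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # assumed lower LR while base layers train
              metrics=['accuracy'])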

If you see that your model is learning but the loss fluctuates, you can lower the learning rate further, or use the ReduceLROnPlateau callback:

rd_lr = ReduceLROnPlateau(monitor='val_loss', factor=np.sqrt(0.1), patience=4, verbose=1, min_lr=5e-8)

The parameters depend entirely on your model. For more details, see the docs.
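
For completeness, the callback is then passed to fit() through the callbacks argument; the fit arguments below simply reuse the names from the question:

from tensorflow.keras.callbacks import ReduceLROnPlateau

# The learning rate is multiplied by `factor` whenever val_loss has not
# improved for `patience` consecutive epochs, down to min_lr.
history = model.fit(train_generator,
                    steps_per_epoch=predict_size_train,
                    epochs=20,
                    validation_data=valid_generator,
                    validation_steps=predict_size_valid,
                    callbacks=[rd_lr])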

What form are the contents of y_train? If they are integer values, then you need to convert them to one-hot vectors with

y_train = tf.keras.utils.to_categorical(y_train, num_classes)

since you use loss='categorical_crossentropy' in model.compile. In addition, VGG16 expects its inputs to be run through its own preprocess_input function (which zero-centers each color channel with respect to ImageNet, rather than the 0-1 scaling the question applies), so include it in the generator:

gen = ImageDataGenerator(preprocessing_function=tf.keras.applications.vgg16.preprocess_input, ...)
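
A fuller sketch combining preprocess_input with the augmentation settings from the question; note (my assumption about the intended pipeline) that preprocess_input expects raw 0-255 pixels, so the manual features /= 255 step should be dropped if you adopt this:

from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator

gen = ImageDataGenerator(
    preprocessing_function=preprocess_input,  # applied to each batch before the model sees it
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')
train_generator = gen.flow(x_train, y_train, batch_size=batch_size)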

When you train, you have

for layer in baseModel.layers:
    layer.trainable = False

so you are only training the Dense layers, which is OK but may not give you high accuracy. You may want to make VGG trainable as well, though of course that will take longer. Alternatively, after training with VGG non-trainable, change it to trainable and run a few more epochs to fine-tune the model.
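
A minimal sketch of that two-phase schedule; the epoch counts and the lower fine-tuning learning rate are assumptions:

# Phase 1: train only the new head with the VGG16 base frozen
for layer in baseModel.layers:
    layer.trainable = False
model.compile(loss='categorical_crossentropy',
              optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              metrics=['accuracy'])
model.fit(train_generator, epochs=20, validation_data=valid_generator)

# Phase 2: unfreeze the base and fine-tune briefly with a much lower rate;
# recompiling is required for the trainable change to take effect
for layer in baseModel.layers:
    layer.trainable = True
model.compile(loss='categorical_crossentropy',
              optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),  # assumed value
              metrics=['accuracy'])
model.fit(train_generator, epochs=5, validation_data=valid_generator)  # "a few more epochs"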
