Converting the VGG functional model to a Sequential model in Keras



I am trying to get a Sequential version of the VGG16 model in Keras. The functional version can be obtained as follows:

from __future__ import division, print_function
import os, json
from glob import glob
import numpy as np
from scipy import misc, ndimage
from scipy.ndimage.interpolation import zoom
from keras import backend as K
from keras.layers.normalization import BatchNormalization
from keras.utils.data_utils import get_file
from keras.models import Sequential
from keras.layers.core import Flatten, Dense, Dropout, Lambda
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.layers.pooling import GlobalAveragePooling2D
from keras.optimizers import SGD, RMSprop, Adam
from keras.preprocessing import image
import keras
import keras.applications.vgg16
from keras.layers import Input
input_tensor = Input(shape=(224,224,3))
VGG_model=keras.applications.vgg16.VGG16(weights='imagenet',include_top= True,input_tensor=input_tensor)

Its summary looks like this:

VGG_model.summary()
Layer (type)                     Output Shape          Param #     Connected to                     
====================================================================================================
input_1 (InputLayer)             (None, 224, 224, 3)   0                                            
____________________________________________________________________________________________________
block1_conv1 (Convolution2D)     (None, 224, 224, 64)  1792        input_1[0][0]                    
____________________________________________________________________________________________________
block1_conv2 (Convolution2D)     (None, 224, 224, 64)  36928       block1_conv1[0][0]               
____________________________________________________________________________________________________
block1_pool (MaxPooling2D)       (None, 112, 112, 64)  0           block1_conv2[0][0]               
____________________________________________________________________________________________________
block2_conv1 (Convolution2D)     (None, 112, 112, 128) 73856       block1_pool[0][0]                
____________________________________________________________________________________________________
block2_conv2 (Convolution2D)     (None, 112, 112, 128) 147584      block2_conv1[0][0]               
____________________________________________________________________________________________________
block2_pool (MaxPooling2D)       (None, 56, 56, 128)   0           block2_conv2[0][0]               
____________________________________________________________________________________________________
block3_conv1 (Convolution2D)     (None, 56, 56, 256)   295168      block2_pool[0][0]                
____________________________________________________________________________________________________
block3_conv2 (Convolution2D)     (None, 56, 56, 256)   590080      block3_conv1[0][0]               
____________________________________________________________________________________________________
block3_conv3 (Convolution2D)     (None, 56, 56, 256)   590080      block3_conv2[0][0]               
____________________________________________________________________________________________________
block3_pool (MaxPooling2D)       (None, 28, 28, 256)   0           block3_conv3[0][0]               
____________________________________________________________________________________________________
block4_conv1 (Convolution2D)     (None, 28, 28, 512)   1180160     block3_pool[0][0]                
____________________________________________________________________________________________________
block4_conv2 (Convolution2D)     (None, 28, 28, 512)   2359808     block4_conv1[0][0]               
____________________________________________________________________________________________________
block4_conv3 (Convolution2D)     (None, 28, 28, 512)   2359808     block4_conv2[0][0]               
____________________________________________________________________________________________________
block4_pool (MaxPooling2D)       (None, 14, 14, 512)   0           block4_conv3[0][0]               
____________________________________________________________________________________________________
block5_conv1 (Convolution2D)     (None, 14, 14, 512)   2359808     block4_pool[0][0]                
____________________________________________________________________________________________________
block5_conv2 (Convolution2D)     (None, 14, 14, 512)   2359808     block5_conv1[0][0]               
____________________________________________________________________________________________________
block5_conv3 (Convolution2D)     (None, 14, 14, 512)   2359808     block5_conv2[0][0]               
____________________________________________________________________________________________________
block5_pool (MaxPooling2D)       (None, 7, 7, 512)     0           block5_conv3[0][0]               
____________________________________________________________________________________________________
flatten (Flatten)                (None, 25088)         0           block5_pool[0][0]                
____________________________________________________________________________________________________
fc1 (Dense)                      (None, 4096)          102764544   flatten[0][0]                    
____________________________________________________________________________________________________
fc2 (Dense)                      (None, 4096)          16781312    fc1[0][0]                        
____________________________________________________________________________________________________
predictions (Dense)              (None, 1000)          4097000     fc2[0][0]                        
====================================================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
____________________________________________________________________________________________________

According to this GitHub issue, https://github.com/fchollet/keras/issues/3190,

Sequential(layers=functional_model.layers)

should convert a functional model into a Sequential model. However, if I do

model = Sequential(layers=VGG_model.layers)
model.summary()

it produces

Layer (type)                     Output Shape          Param #     Connected to                     
====================================================================================================
input_1 (InputLayer)             (None, 224, 224, 3)   0                                            
____________________________________________________________________________________________________
block1_conv1 (Convolution2D)     (None, 224, 224, 64)  1792        input_1[0][0]                    
input_1[0][0]                    
input_1[0][0]                    
____________________________________________________________________________________________________
block1_conv2 (Convolution2D)     (None, 224, 224, 64)  36928       block1_conv1[0][0]               
block1_conv1[1][0]               
block1_conv1[2][0]               
____________________________________________________________________________________________________
block1_pool (MaxPooling2D)       (None, 112, 112, 64)  0           block1_conv2[0][0]               
block1_conv2[1][0]               
block1_conv2[2][0]               
____________________________________________________________________________________________________
block2_conv1 (Convolution2D)     (None, 112, 112, 128) 73856       block1_pool[0][0]                
block1_pool[1][0]                
block1_pool[2][0]                
____________________________________________________________________________________________________
block2_conv2 (Convolution2D)     (None, 112, 112, 128) 147584      block2_conv1[0][0]               
block2_conv1[1][0]               
block2_conv1[2][0]               
____________________________________________________________________________________________________
block2_pool (MaxPooling2D)       (None, 56, 56, 128)   0           block2_conv2[0][0]               
block2_conv2[1][0]               
block2_conv2[2][0]               
____________________________________________________________________________________________________
block3_conv1 (Convolution2D)     (None, 56, 56, 256)   295168      block2_pool[0][0]                
block2_pool[1][0]                
block2_pool[2][0]                
____________________________________________________________________________________________________
block3_conv2 (Convolution2D)     (None, 56, 56, 256)   590080      block3_conv1[0][0]               
block3_conv1[1][0]               
block3_conv1[2][0]               
____________________________________________________________________________________________________
block3_conv3 (Convolution2D)     (None, 56, 56, 256)   590080      block3_conv2[0][0]               
block3_conv2[1][0]               
block3_conv2[2][0]               
____________________________________________________________________________________________________
block3_pool (MaxPooling2D)       (None, 28, 28, 256)   0           block3_conv3[0][0]               
block3_conv3[1][0]               
block3_conv3[2][0]               
____________________________________________________________________________________________________
block4_conv1 (Convolution2D)     (None, 28, 28, 512)   1180160     block3_pool[0][0]                
block3_pool[1][0]                
block3_pool[2][0]                
____________________________________________________________________________________________________
block4_conv2 (Convolution2D)     (None, 28, 28, 512)   2359808     block4_conv1[0][0]               
block4_conv1[1][0]               
block4_conv1[2][0]               
____________________________________________________________________________________________________
block4_conv3 (Convolution2D)     (None, 28, 28, 512)   2359808     block4_conv2[0][0]               
block4_conv2[1][0]               
block4_conv2[2][0]               
____________________________________________________________________________________________________
block4_pool (MaxPooling2D)       (None, 14, 14, 512)   0           block4_conv3[0][0]               
block4_conv3[1][0]               
block4_conv3[2][0]               
____________________________________________________________________________________________________
block5_conv1 (Convolution2D)     (None, 14, 14, 512)   2359808     block4_pool[0][0]                
block4_pool[1][0]                
block4_pool[2][0]                
____________________________________________________________________________________________________
block5_conv2 (Convolution2D)     (None, 14, 14, 512)   2359808     block5_conv1[0][0]               
block5_conv1[1][0]               
block5_conv1[2][0]               
____________________________________________________________________________________________________
block5_conv3 (Convolution2D)     (None, 14, 14, 512)   2359808     block5_conv2[0][0]               
block5_conv2[1][0]               
block5_conv2[2][0]               
____________________________________________________________________________________________________
block5_pool (MaxPooling2D)       (None, 7, 7, 512)     0           block5_conv3[0][0]               
block5_conv3[1][0]               
block5_conv3[2][0]               
____________________________________________________________________________________________________
flatten (Flatten)                (None, 25088)         0           block5_pool[0][0]                
block5_pool[1][0]                
block5_pool[2][0]                
____________________________________________________________________________________________________
fc1 (Dense)                      (None, 4096)          102764544   flatten[0][0]                    
flatten[1][0]                    
flatten[2][0]                    
____________________________________________________________________________________________________
fc2 (Dense)                      (None, 4096)          16781312    fc1[0][0]                        
fc1[1][0]                        
fc1[2][0]                        
____________________________________________________________________________________________________
predictions (Dense)              (None, 1000)          4097000     fc2[0][0]                        
fc2[1][0]                        
fc2[2][0]                        
====================================================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
____________________________________________________________________________________________________

This is not the same as the original functional model, because every layer now shows up as connected to the previous layer three times. People say the functional API is more powerful, but all I want to do is pop the final prediction layer, and I cannot see how to do that with the functional model...

I have also been struggling with this. The previous poster was almost there, but left out one specific detail that had stumped me before. You can in fact do the equivalent of a "pop" even on a model created with the functional API, but it takes a bit more work.

Here is my model (just plain vanilla VGG16):

model.summary()
(The output is the standard VGG16 summary shown above, with input layer input_6 instead of input_1, ending in: Total params: 138,357,544, Trainable params: 138,357,544, Non-trainable params: 0.)

我"弹出"了最后一层,但没有使用流行,只是使用函数式 API

from keras.models import Model   # Dense and Adam come from the imports above
learning_rate = 0.001            # example value; pick your own

# Get the output of the last-but-one layer from the old model
last_layer = model.layers[-2].output
# Define the new output tensor for the new model
new_output = Dense(2, activation='softmax', name='Binary_predictions')(last_layer)
# Create the new model, with the old model's input and the new tensor as the output
new_model = Model(model.input, new_output, name='Finetuned_VGG16')
# Set all layers except the last one to not trainable
for layer in new_model.layers[:-1]:
    layer.trainable = False
# Compile the new model
new_model.compile(optimizer=Adam(lr=learning_rate),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
# now train with the new outputs (cats and dogs!)

This creates a new model (new_model) in which the last layer is replaced and all the old layers are frozen (made non-trainable).

new_model.summary()
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to                     
====================================================================================================
input_6 (InputLayer)             (None, 224, 224, 3)   0                                            
____________________________________________________________________________________________________
block1_conv1 (Convolution2D)     (None, 224, 224, 64)  1792        input_6[0][0]                    
____________________________________________________________________________________________________
block1_conv2 (Convolution2D)     (None, 224, 224, 64)  36928       block1_conv1[0][0]               
____________________________________________________________________________________________________
block1_pool (MaxPooling2D)       (None, 112, 112, 64)  0           block1_conv2[0][0]               
____________________________________________________________________________________________________
block2_conv1 (Convolution2D)     (None, 112, 112, 128) 73856       block1_pool[0][0]                
____________________________________________________________________________________________________
block2_conv2 (Convolution2D)     (None, 112, 112, 128) 147584      block2_conv1[0][0]               
____________________________________________________________________________________________________
block2_pool (MaxPooling2D)       (None, 56, 56, 128)   0           block2_conv2[0][0]               
____________________________________________________________________________________________________
block3_conv1 (Convolution2D)     (None, 56, 56, 256)   295168      block2_pool[0][0]                
____________________________________________________________________________________________________
block3_conv2 (Convolution2D)     (None, 56, 56, 256)   590080      block3_conv1[0][0]               
____________________________________________________________________________________________________
block3_conv3 (Convolution2D)     (None, 56, 56, 256)   590080      block3_conv2[0][0]               
____________________________________________________________________________________________________
block3_pool (MaxPooling2D)       (None, 28, 28, 256)   0           block3_conv3[0][0]               
____________________________________________________________________________________________________
block4_conv1 (Convolution2D)     (None, 28, 28, 512)   1180160     block3_pool[0][0]                
____________________________________________________________________________________________________
block4_conv2 (Convolution2D)     (None, 28, 28, 512)   2359808     block4_conv1[0][0]               
____________________________________________________________________________________________________
block4_conv3 (Convolution2D)     (None, 28, 28, 512)   2359808     block4_conv2[0][0]               
____________________________________________________________________________________________________
block4_pool (MaxPooling2D)       (None, 14, 14, 512)   0           block4_conv3[0][0]               
____________________________________________________________________________________________________
block5_conv1 (Convolution2D)     (None, 14, 14, 512)   2359808     block4_pool[0][0]                
____________________________________________________________________________________________________
block5_conv2 (Convolution2D)     (None, 14, 14, 512)   2359808     block5_conv1[0][0]               
____________________________________________________________________________________________________
block5_conv3 (Convolution2D)     (None, 14, 14, 512)   2359808     block5_conv2[0][0]               
____________________________________________________________________________________________________
block5_pool (MaxPooling2D)       (None, 7, 7, 512)     0           block5_conv3[0][0]               
____________________________________________________________________________________________________
flatten (Flatten)                (None, 25088)         0           block5_pool[0][0]                
____________________________________________________________________________________________________
fc1 (Dense)                      (None, 4096)          102764544   flatten[0][0]                    
____________________________________________________________________________________________________
fc2 (Dense)                      (None, 4096)          16781312    fc1[0][0]                        
____________________________________________________________________________________________________
Binary_predictions (Dense)       (None, 2)             8194        fc2[0][0]                        
====================================================================================================
Total params: 134,268,738
Trainable params: 8,194
Non-trainable params: 134,260,544
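
For completeness, here is a minimal training sketch for the "now train" step above. The dummy arrays are purely illustrative stand-ins (not from the original post); in practice you would feed real cats/dogs batches of shape (224, 224, 3).

import numpy as np
from keras.utils.np_utils import to_categorical

# Illustrative stand-in data; replace with real image batches and one-hot labels
x_dummy = np.random.random((8, 224, 224, 3)).astype('float32')
y_dummy = to_categorical(np.random.randint(0, 2, size=8), 2)   # one-hot, 2 classes

# Only the 8,194 parameters of Binary_predictions are updated here
new_model.fit(x_dummy, y_dummy, batch_size=4, nb_epoch=1)      # use epochs=1 on newer Keras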

The tricky part is taking .output of that last-but-one layer, because that gives you a tensor. You then use that tensor as the input of the new Dense layer, and make the resulting tensor the final output of the new model...
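
As a quick sanity check (just a sketch against the same model object), you can inspect that attribute directly:

# .output is a backend tensor, not a layer
last_layer = model.layers[-2].output
print(model.layers[-2].name)          # 'fc2'
print(model.layers[-2].output_shape)  # (None, 4096)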

Hope that helps...

Ton

You can "pop" the last layer by defining another Model that uses the previous layer as its output:

poppedModel = Model(VGG_model.input,VGG_model.layers[-2].output)

This model shares exactly the same weights as the original one, so training will affect both models.
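
A small sketch to convince yourself of that sharing (assuming the VGG_model and poppedModel objects defined above):

# The popped model reuses the very same layer objects, so the weights are shared
assert poppedModel.get_layer('fc2') is VGG_model.get_layer('fc2')
# Weights read through either model are identical, and any training update made
# through one model is immediately visible through the other
print((poppedModel.get_layer('fc2').get_weights()[0] ==
       VGG_model.get_layer('fc2').get_weights()[0]).all())      # True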

Once you have the popped model, you can add your own layers (or even entire models) on top of it without any problem:

popOut = poppedModel(input_tensor)
newLayOut = SomeKerasLayer(blablabla)(popOut)
anotherModel = Model(input_tensor, newLayOut)
#anotherModel will also share weights with poppedModel and VGG_model in the layers they have in common.

However, if you intend to train the new layers in anotherModel without affecting the VGG weights, then before compiling anotherModel you need to set poppedModel.trainable = False, and poppedModel.layers[i].trainable = False for every layer inside it.
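
A minimal sketch of that freezing step (SomeKerasLayer above is a placeholder, so this just assumes the new head ends in a softmax suited to categorical cross-entropy):

from keras.optimizers import Adam

# Freeze the popped sub-model and every layer inside it BEFORE compiling anotherModel
poppedModel.trainable = False
for layer in poppedModel.layers:
    layer.trainable = False

anotherModel.compile(optimizer=Adam(lr=1e-4),
                     loss='categorical_crossentropy',
                     metrics=['accuracy'])
# Only the layers added on top of poppedModel are updated when anotherModel is trained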
