As the title describes, I want to display the layers of the pre-trained model in the output of model.summary(), rather than a single entry (see the vgg19 (Functional) entry below).
Here is a sample model implemented with the Keras Sequential API:
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Sequential

base_model = VGG16(include_top=False, weights=None, input_shape=(32, 32, 3), pooling='max', classes=10)
model = Sequential()
model.add(base_model)
model.add(Flatten())
model.add(Dense(1_000, activation='relu'))
model.add(Dense(10, activation='softmax'))
Here is the output of the model.summary() call:
Model: "sequential_15"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
vgg19 (Functional) (None, 512) 20024384
_________________________________________________________________
flatten_15 (Flatten) (None, 512) 0
_________________________________________________________________
dense_21 (Dense) (None, 1000) 513000
_________________________________________________________________
dense_22 (Dense) (None, 10) 10010
=================================================================
Total params: 20,547,394
Trainable params: 523,010
Non-trainable params: 20,024,384
Edit: Here is the Functional API equivalent of the Sequential API model above; the result is the same:
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten, Input
from tensorflow.keras.models import Model

base_model = VGG16(include_top=False, weights='imagenet', input_shape=(32, 32, 3), pooling='max', classes=10)
m_inputs = Input(shape=(32, 32, 3))
base_out = base_model(m_inputs)
x = Flatten()(base_out)
x = Dense(1_000, activation='relu')(x)
m_outputs = Dense(10, activation='softmax')(x)
model = Model(inputs=m_inputs, outputs=m_outputs)
Instead of using Sequential, I tried using the Functional API, i.e. the tf.keras.models.Model class, as in:
import tensorflow as tf
base_model = tf.keras.applications.VGG16(include_top=False, weights=None, input_shape=(32, 32, 3), pooling='max', classes=10)
x = tf.keras.layers.Flatten()( base_model.output )
x = tf.keras.layers.Dense(1_000, activation='relu')( x )
outputs = tf.keras.layers.Dense(10, activation='softmax')( x )
model = tf.keras.models.Model( base_model.input , outputs )
model.summary()
The output of the above snippet:
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_3 (InputLayer) [(None, 32, 32, 3)] 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 32, 32, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 32, 32, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 16, 16, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 16, 16, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 16, 16, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 8, 8, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 8, 8, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 8, 8, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 8, 8, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 4, 4, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 4, 4, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 4, 4, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 4, 4, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 2, 2, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 2, 2, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 2, 2, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 2, 2, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 1, 1, 512) 0
_________________________________________________________________
global_max_pooling2d_2 (Glob (None, 512) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 512) 0
_________________________________________________________________
dense_2 (Dense) (None, 1000) 513000
_________________________________________________________________
dense_3 (Dense) (None, 10) 10010
=================================================================
Total params: 15,237,698
Trainable params: 15,237,698
Non-trainable params: 0
_________________________________________________________________
After going through the documentation and running some tests (via TF 2.5.0), my understanding is that when such a model is included in another model, Keras treats it as a "black box". It is not a simple layer, and definitely not a tensor; it is basically of the complex type tensorflow.python.keras.engine.functional.Functional.
I think this is the root cause of why you cannot print it out in detail as part of the model summary.
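A minimal sketch of this nesting behavior, using a tiny stand-in model instead of VGG16 (the names inner_model etc. are made up for illustration):

```python
import tensorflow as tf

# Tiny stand-in for a pre-trained model (hypothetical; any Functional
# model behaves the same way when nested inside another model)
inner_in = tf.keras.Input(shape=(4,))
inner_out = tf.keras.layers.Dense(8, activation='relu')(inner_in)
inner = tf.keras.Model(inner_in, inner_out, name='inner_model')

outer = tf.keras.Sequential([inner, tf.keras.layers.Dense(2)])
outer.build(input_shape=(None, 4))

# The nested model is treated as one opaque layer, not expanded
print(len(outer.layers))                            # 2, not 3
print(isinstance(outer.layers[0], tf.keras.Model))  # True
```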
- Now, if you just want to review the pre-trained model and take a peek inside, you can simply run:
base_model.summary()
or, after building your model (Sequential or Functional, it doesn't matter at this point):
model.layers[i].summary() # i: the index of your pre-trained model
If you need to access the layers of the pre-trained model, e.g. to use its weights individually, you can reach them this way as well.
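For instance, a nested model's internal layers and weights can be drilled into like this (a sketch with a small hypothetical model standing in for the pre-trained one):

```python
import tensorflow as tf

# Small hypothetical model standing in for the pre-trained one
pre_in = tf.keras.Input(shape=(4,))
pre_out = tf.keras.layers.Dense(8, name='pre_dense')(pre_in)
pre = tf.keras.Model(pre_in, pre_out, name='pretrained')

model = tf.keras.Sequential([pre, tf.keras.layers.Dense(2)])
model.build(input_shape=(None, 4))

nested = model.layers[0]      # i = 0: the pre-trained model
nested.summary()              # prints its internal layers in detail

# Individual weights of a layer inside the nested model
w, b = nested.get_layer('pre_dense').get_weights()
print(w.shape)                # (4, 8)
```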
- If you want to print the layers of the entire model, then you need to trick Keras into believing the "black box" is not foreign, but just another KerasTensor. To do so, you can wrap the pre-trained model in another layer, in other words, connect them directly via the Functional API, which is what was suggested above and works fine for me:
x = tf.keras.layers.Flatten()( base_model.output )
I am not sure whether there is any specific reason you want to pursue the new-input route, as in:
m_inputs = Input(shape=(32, 32, 3))
base_out = base_model(m_inputs)
Whenever you place the pre-trained model in the middle of your new model, i.e. after a new input layer, or add it to a Sequential model itself, its internal layers disappear from the summary output.
In that case, generating a new input layer versus simply feeding the pre-trained model's output as the input of the current model made no difference for me.
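The difference between the two wirings can be sketched with a tiny hypothetical model in place of VGG16: reusing the pre-trained model's own input/output tensors grafts its layers directly into the new graph, while calling the model on a fresh Input nests it as a single layer:

```python
import tensorflow as tf

# Hypothetical stand-in for the pre-trained model
pre_in = tf.keras.Input(shape=(4,))
pre = tf.keras.Model(pre_in, tf.keras.layers.Dense(8)(pre_in))

# Route A: reuse base_model.input / base_model.output
a_out = tf.keras.layers.Dense(2)(pre.output)
model_a = tf.keras.Model(pre.input, a_out)

# Route B: call the model on a new Input layer
b_in = tf.keras.Input(shape=(4,))
model_b = tf.keras.Model(b_in, tf.keras.layers.Dense(2)(pre(b_in)))

# Route A contains only plain layers; Route B contains a nested model
print(any(isinstance(l, tf.keras.Model) for l in model_a.layers))  # False
print(any(isinstance(l, tf.keras.Model) for l in model_b.layers))  # True
```

Only Route A's layers are expanded in model.summary(); Route B shows the nested model as one row.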
Hope this clarifies the topic a bit and helps.
This should do what you want:
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Sequential

base_model = VGG16(include_top=False, weights=None, input_shape=(32, 32, 3), pooling='max', classes=10)
model = Sequential()
# Add each layer of the pre-trained model individually (frozen),
# so they all appear at the top level of model.summary()
for layer in base_model.layers:
    layer.trainable = False
    model.add(layer)
model.add(Flatten())
model.add(Dense(1_000, activation='relu'))
model.add(Dense(10, activation='softmax'))
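Note that this loop only works when the base model is a plain linear stack of layers, which VGG16 is; models with branches or skip connections cannot be rebuilt this way. The idea can be sketched on a tiny hypothetical base model:

```python
import tensorflow as tf

# Tiny linear stand-in for VGG16 (hypothetical)
base_in = tf.keras.Input(shape=(4,))
base = tf.keras.Model(base_in,
                      tf.keras.layers.Dense(8, name='base_dense')(base_in))

model = tf.keras.Sequential()
for layer in base.layers:
    layer.trainable = False   # freeze the pre-trained layers
    model.add(layer)
model.add(tf.keras.layers.Dense(2, name='head'))

# The base layers now appear individually, frozen, in the summary
print(model.get_layer('base_dense').trainable)  # False
model.summary()
```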