Shape mismatch with VGG16 Keras: expected ndim=4, found ndim=2, shape received [None, None]

While trying to learn Keras and deep learning, I want to create an image matting algorithm that uses an architecture similar to a modified autoencoder: it takes two image inputs (the source image and a user-generated trimap) and produces one image output (the alpha values of the image foreground). The encoder part (for both inputs) does simple feature extraction using a pretrained VGG16. I want to train the decoder using the low-resolution alphamatting.com dataset.

Running the attached code produces this error: ValueError: Input 0 of layer block1_conv1 is incompatible with the layer: expected ndim=4, found ndim=2. Full shape received: [None, None]

I can't make sense of this error. I verified that my twin_gen closure is producing image batches of shape (22, 256, 256, 3) for both inputs, so my guess is that I somehow created the model incorrectly, but I can't see where the mistake is. Can anyone help shed light on why I'm getting this error?

import tensorflow as tf
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Conv2DTranspose, Concatenate, BatchNormalization, Input
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def DeConvBlock(input, num_output):
    x = Conv2DTranspose(num_output, kernel_size=3, strides=2, activation='relu', padding='same')(input)
    x = BatchNormalization()(x)
    x = Conv2DTranspose(num_output, kernel_size=3, strides=1, activation='relu', padding='same')(x)
    x = BatchNormalization()(x)
    x = Conv2DTranspose(num_output, kernel_size=3, strides=1, activation='relu', padding='same')(x)
    x = BatchNormalization()(x)
    return x

img_input = Input((256, 256, 3))
img_vgg16 = VGG16(include_top=False, weights='imagenet')
img_vgg16._name = 'img_vgg16'
img_vgg16.trainable = False

tm_input = Input((256, 256, 3))
tm_vgg16 = VGG16(include_top=False, weights='imagenet')
tm_vgg16._name = 'tm_vgg16'
tm_vgg16.trainable = False
img_vgg16 = img_vgg16(img_input)
tm_vgg16 = tm_vgg16(tm_input)
x = Concatenate()([img_vgg16, tm_vgg16])
x = DeConvBlock(x, 512)
x = DeConvBlock(x, 256)
x = DeConvBlock(x, 128)
x = DeConvBlock(x, 64)
x = DeConvBlock(x, 32)
x = Conv2DTranspose(1, kernel_size=3, strides=1, activation='sigmoid', padding='same')(x)

m = Model(inputs=[img_input, tm_input], outputs=x)
m.summary()
m.compile(optimizer='adam', loss='mean_squared_error')
gen = ImageDataGenerator(width_shift_range=0.1, rotation_range=30, height_shift_range=0.1, horizontal_flip=True, validation_split=0.2, preprocessing_function=preprocess_input)
SEED = 49

def twin_gen(generator, subset):
    gen_img = generator.flow_from_directory('./data', classes=['input_training_lowres'], seed=SEED, shuffle=False, subset=subset, color_mode='rgb')
    gen_map = generator.flow_from_directory('./data/trimap_training_lowres', classes=['Trimap1'], seed=SEED, shuffle=False, subset=subset, color_mode='rgb')
    gen_truth = generator.flow_from_directory('./data', classes=['gt_training_lowres'], seed=SEED, shuffle=False, subset=subset, color_mode='rgb')
    while True:
        img = gen_img.__next__()
        tm = gen_map.__next__()
        gt = gen_truth.__next__()
        yield [[img, tm], gt]

train_gen = twin_gen(gen, 'training')
val_gen = twin_gen(gen, 'validation')

checkpoint_filepath = 'checkpoint'
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_filepath,
    save_weights_only=True,
    monitor='val_loss',
    mode='auto',
    save_freq='epoch',
    save_best_only=True)

r = m.fit(train_gen, validation_data=val_gen, epochs=10, callbacks=[checkpoint])

First, you didn't specify VGG16's input shape, and with include_top=False set, it will default to the input shape (None, None, 3) in the channels_last case.

PS: you can check the source code of keras.applications.VGG16 and keras.applications.imagenet_utils.obtain_input_shape for the details.
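
You can confirm this fallback yourself. A minimal sketch (weights=None is only there to skip downloading the ImageNet weights; the printed shapes are what TF 2.x reports):

from tensorflow.keras.applications.vgg16 import VGG16

# With include_top=False and no input_shape, VGG16 accepts any spatial size,
# so the input is fully dynamic in the channels_last case
base = VGG16(include_top=False, weights=None)
print(base.input_shape)   # (None, None, None, 3)
print(base.output_shape)  # (None, None, None, 512)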

As you can see by calling model.summary(), the VGG16 submodels then have None output shapes:

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 256, 256, 3) 0                                            
__________________________________________________________________________________________________
input_3 (InputLayer)            [(None, 256, 256, 3) 0                                            
__________________________________________________________________________________________________
img_vgg16 (Functional)          (None, None, None, 5 14714688    input_1[0][0]                    
__________________________________________________________________________________________________
tm_vgg16 (Functional)           (None, None, None, 5 14714688    input_3[0][0]                    
__________________________________________________________________________________________________
concatenate (Concatenate)       (None, 8, 8, 1024)   0           img_vgg16[0][0]                  
tm_vgg16[0][0]                   
__________________________________________________________________________________________________

To fix this, you just need to set input_shape=(256, 256, 3) when creating the VGG16 models; calling model.summary() now gives you:

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_1 (InputLayer)            [(None, 256, 256, 3) 0
__________________________________________________________________________________________________
input_3 (InputLayer)            [(None, 256, 256, 3) 0
__________________________________________________________________________________________________
img_vgg16 (Functional)          (None, 8, 8, 512)    14714688    input_1[0][0]
__________________________________________________________________________________________________
tm_vgg16 (Functional)           (None, 8, 8, 512)    14714688    input_3[0][0]
__________________________________________________________________________________________________
concatenate (Concatenate)       (None, 8, 8, 1024)   0           img_vgg16[0][0]
tm_vgg16[0][0]
__________________________________________________________________________________________________

The main cause of the error is that when you call __next__() it returns a tuple of two arrays, (data, label), with shapes ((batch_size, 256, 256, 3), (batch_size, 1)), but we actually only want the first one.

Also, the data generator should yield a tuple, not a list, otherwise no gradients will be provided for any variable, because fit expects (inputs, targets) as the return value of the data generator.
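
Both fixes together look like this. A minimal, runnable sketch in which a hypothetical fake_flow generator stands in for the flow_from_directory iterators:

import numpy as np

def twin_gen_fixed(gen_img, gen_map, gen_truth):
    # Yield (inputs, targets) tuples, keeping only the data array from each batch
    while True:
        img = gen_img.__next__()[0]  # [0] drops the (batch_size, 1) label array
        tm = gen_map.__next__()[0]
        gt = gen_truth.__next__()[0]
        yield ([img, tm], gt)  # a tuple, not a list: fit() unpacks it as (inputs, targets)

def fake_flow(channels):
    # Stand-in for a flow_from_directory iterator: yields (data, labels) tuples
    while True:
        yield (np.zeros((22, 256, 256, channels)), np.zeros((22, 1)))

g = twin_gen_fixed(fake_flow(3), fake_flow(3), fake_flow(1))
inputs, gt = next(g)
print(inputs[0].shape, inputs[1].shape, gt.shape)
# (22, 256, 256, 3) (22, 256, 256, 3) (22, 256, 256, 1)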

You also have another problem: you load the gen_truth images with color_mode='rgb', so your gen_truth elements have shape (batch_size, 256, 256, 3), while the model's output shape is (batch_size, 256, 256, 1). To get the same shape as the model's output, you should load gen_truth with color_mode='grayscale' if you have grayscale images, or load it with color_mode='rgba' and take the last channel's values if you want to use the alpha value (I'm just guessing from the description in your question, but you get the idea).
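
If you do take the rgba route, slice the last channel with 3:4 rather than index it with 3, so the channel axis is kept and the target matches the model's (batch_size, 256, 256, 1) output. A sketch on a random stand-in batch:

import numpy as np

rgba_batch = np.random.rand(22, 256, 256, 4)  # stand-in for a color_mode='rgba' batch
alpha = rgba_batch[..., 3:4]                  # slicing keeps the channel axis
print(alpha.shape)  # (22, 256, 256, 1)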

Example code that runs without any problem:

import tensorflow as tf
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Conv2DTranspose, Concatenate, BatchNormalization, Input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
def DeConvBlock(input, num_output):
    x = Conv2DTranspose(num_output, kernel_size=3, strides=2, activation='relu', padding='same')(input)
    x = BatchNormalization()(x)
    x = Conv2DTranspose(num_output, kernel_size=3, strides=1, activation='relu', padding='same')(x)
    x = BatchNormalization()(x)
    x = Conv2DTranspose(num_output, kernel_size=3, strides=1, activation='relu', padding='same')(x)
    x = BatchNormalization()(x)
    return x
img_input = Input((256, 256, 3))
img_vgg16 = VGG16(include_top=False, input_shape=(256, 256, 3), weights='imagenet')
img_vgg16._name = 'img_vgg16'
img_vgg16.trainable = False
tm_input = Input((256, 256, 3))
tm_vgg16 = VGG16(include_top=False, input_shape=(256, 256, 3), weights='imagenet')
tm_vgg16._name = 'tm_vgg16'
tm_vgg16.trainable = False
img_vgg16 = img_vgg16(img_input)
tm_vgg16 = tm_vgg16(tm_input)
x = Concatenate()([img_vgg16, tm_vgg16])
x = DeConvBlock(x, 512)
x = DeConvBlock(x, 256)
x = DeConvBlock(x, 128)
x = DeConvBlock(x, 64)
x = DeConvBlock(x, 32)
x = Conv2DTranspose(1, kernel_size=3, strides=1, activation='sigmoid', padding='same')(x)
m = Model(inputs=[img_input, tm_input], outputs=x)
m.summary()
m.compile(optimizer='adam', loss='mse')
gen = ImageDataGenerator(width_shift_range=0.1, rotation_range=30, height_shift_range=0.1, horizontal_flip=True, validation_split=0.2, preprocessing_function=preprocess_input)
SEED = 49
def twin_gen(generator, subset):
    gen_img = generator.flow_from_directory('./data', classes=['input_training_lowres'], seed=SEED, shuffle=False, subset=subset, color_mode='rgb')
    gen_map = generator.flow_from_directory('./data/trimap_training_lowres', classes=['Trimap1'], seed=SEED, shuffle=False, subset=subset, color_mode='rgb')
    gen_truth = generator.flow_from_directory('./data', classes=['gt_training_lowres'], seed=SEED, shuffle=False, subset=subset, color_mode='grayscale')
    while True:
        img = gen_img.__next__()[0]
        tm = gen_map.__next__()[0]
        gt = gen_truth.__next__()[0]
        yield ([img, tm], gt)
train_gen = twin_gen(gen, 'training')
r = m.fit(train_gen, steps_per_epoch=5, epochs=3)
