Changing the decoder output shape in a convolutional autoencoder for training



I am trying to train a convolutional autoencoder on my dataset. I have split the data into 500 patches of size 400x400x1. My model is shown below.

from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D, BatchNormalization

input_img = Input(shape=(400, 400, 1))

def encoder(input_img):
    # encoder: seven Conv2D/BatchNorm/MaxPooling stages, 400x400x1 -> 3x3x512
    conv1 = Conv2D(4, (3, 3), activation='relu', padding='same')(input_img)
    conv1 = BatchNormalization()(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
    conv2 = Conv2D(8, (3, 3), activation='relu', padding='same')(pool1)
    conv2 = BatchNormalization()(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
    conv3 = Conv2D(16, (3, 3), activation='relu', padding='same')(pool2)
    conv3 = BatchNormalization()(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
    conv4 = Conv2D(32, (3, 3), activation='relu', padding='same')(pool3)
    conv4 = BatchNormalization()(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2))(conv4)
    conv5 = Conv2D(64, (3, 3), activation='relu', padding='same')(pool4)
    conv5 = BatchNormalization()(conv5)
    pool5 = MaxPooling2D(pool_size=(2, 2))(conv5)
    conv6 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool5)
    conv6 = BatchNormalization()(conv6)
    pool6 = MaxPooling2D(pool_size=(2, 2))(conv6)
    conv7 = Conv2D(256, (3, 3), activation='relu', padding='same')(pool6)
    conv7 = BatchNormalization()(conv7)
    pool7 = MaxPooling2D(pool_size=(2, 2))(conv7)
    conv8 = Conv2D(512, (3, 3), activation='relu', padding='same')(pool7)
    conv8 = BatchNormalization()(conv8)
    return conv8

def decoder(conv8):
    # decoder
    conv9 = Conv2D(512, (3, 3), activation='relu', padding='same')(conv8)
    conv9 = BatchNormalization()(conv9)
    up1 = UpSampling2D((2, 2))(conv9)
    conv12 = Conv2D(256, (3, 3), activation='relu', padding='same')(up1)
    conv12 = BatchNormalization()(conv12)
    up2 = UpSampling2D((2, 2))(conv12)
    conv13 = Conv2D(128, (3, 3), activation='relu', padding='same')(up2)
    conv13 = BatchNormalization()(conv13)
    up3 = UpSampling2D((2, 2))(conv13)
    conv14 = Conv2D(64, (3, 3), activation='relu', padding='same')(up3)
    conv14 = BatchNormalization()(conv14)
    up4 = UpSampling2D((2, 2))(conv14)
    conv15 = Conv2D(32, (3, 3), activation='relu', padding='same')(up4)
    conv15 = BatchNormalization()(conv15)
    up5 = UpSampling2D((2, 2))(conv15)
    conv16 = Conv2D(16, (3, 3), activation='relu', padding='same')(up5)
    conv16 = BatchNormalization()(conv16)
    up6 = UpSampling2D((2, 2))(conv16)
    conv17 = Conv2D(8, (3, 3), activation='relu', padding='same')(up6)
    conv17 = BatchNormalization()(conv17)
    up7 = UpSampling2D((2, 2))(conv17)
    conv18 = Conv2D(4, (3, 3), activation='relu', padding='same')(up7)
    conv18 = BatchNormalization()(conv18)
    decoded = Conv2D(1, (3, 3), activation='relu', padding='same')(conv18)
    return decoded

After the encoder, the shape is (3, 3, 512). But after the decoder, the shape only comes back up to (384, 384, 1). Since my input shape is (400, 400, 1), the model cannot be trained. Is there any upsampling method that can change my decoder output from (384, 384, 1) to (400, 400, 1)?
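For context, the full autoencoder would be assembled and trained roughly like this (a minimal sketch; the Model/compile/fit calls and the x_train name are assumptions, since they are not shown above):

from keras.models import Model

autoencoder = Model(input_img, decoder(encoder(input_img)))
autoencoder.compile(optimizer='adam', loss='mse')
# fit() raises a shape mismatch: the target patches are (400, 400, 1)
# while the decoder output is (384, 384, 1)
autoencoder.fit(x_train, x_train, epochs=50, batch_size=16)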

You can increase the padding in the intermediate Conv2D layers to make up the missing 16 pixels. See this documentation for how 2D padding is controlled.

You can also use an explicit padding layer, as sketched below.
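For example, here is a minimal sketch of that second option: a ZeroPadding2D layer at the end of the decoder pads the 384x384 feature map by 8 pixels on each side, restoring 400x400 (placing it just before the final convolution is an assumption):

from keras.layers import ZeroPadding2D

def decoder(conv8):
    # ... same layers as above, up to conv18 = BatchNormalization()(conv18)
    padded = ZeroPadding2D(padding=8)(conv18)   # (384, 384, 4) -> (400, 400, 4)
    decoded = Conv2D(1, (3, 3), activation='relu', padding='same')(padded)
    return decoded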
