I built a fully connected neural network, a denoising autoencoder, with TensorFlow 2.0. The architecture is 128-64-32-64-128 (neuron counts): 1000 input values pass through the five layers and 1000 values come out.
My code is as follows (using the functional Model API):
# Denoising Autoencoder
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

input_size = 1000
hidden_size = 128
hidden_size_2 = 64
code_size = 32
output_size = 1000

x = Input(shape=(input_size,))
hidden_1 = Dense(hidden_size, activation='relu')(x)           # 128
hidden_2 = Dense(hidden_size_2, activation='relu')(hidden_1)  # 64
hidden_3 = Dense(code_size, activation='relu')(hidden_2)      # 32
hidden_4 = Dense(hidden_size_2, activation='relu')(hidden_3)  # 64
hidden_5 = Dense(hidden_size, activation='relu')(hidden_4)    # 128
r = Dense(output_size, activation='sigmoid')(hidden_5)

autoencoder = Model(inputs=x, outputs=r)
autoencoder.compile(optimizer='adam', loss='mse', metrics=['mse'])
and the corresponding model.summary() output is:
Model: "model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 1000) 0
_________________________________________________________________
dense_1 (Dense) (None, 128) 128128
_________________________________________________________________
dense_2 (Dense) (None, 64) 8256
_________________________________________________________________
dense_3 (Dense) (None, 32) 2080
_________________________________________________________________
dense_4 (Dense) (None, 64) 2112
_________________________________________________________________
dense_5 (Dense) (None, 128) 8320
_________________________________________________________________
dense_6 (Dense) (None, 1000) 129000
=================================================================
Total params: 277,896
Trainable params: 277,896
Non-trainable params: 0
I want to rewrite this network with the Sequential() API (the Python library I need requires a Sequential model). My code is as follows:
input_size_1000 = 1000
hidden_size_128 = 128
hidden_size_64 = 64
code_size_32 = 32
output_size_1000 = 1000
# using Sequential()
import tensorflow as tf

model = tf.keras.models.Sequential()
# model.add(tf.keras.layers.Input(input_size_1000,))  # first attempt, commented out
model.add(tf.keras.layers.Dense(hidden_size_128, input_dim=1000, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(hidden_size_64, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(code_size_32, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(hidden_size_64, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(hidden_size_128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(output_size_1000, activation=tf.nn.sigmoid))
model.compile(optimizer='adam', loss='mse', metrics=['mse'])
and the corresponding model.summary() output is:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 128) 128128
_________________________________________________________________
dense_1 (Dense) (None, 64) 8256
_________________________________________________________________
dense_2 (Dense) (None, 32) 2080
_________________________________________________________________
dense_3 (Dense) (None, 64) 2112
_________________________________________________________________
dense_4 (Dense) (None, 128) 8320
_________________________________________________________________
dense_5 (Dense) (None, 1000) 129000
=================================================================
Total params: 277,896
Trainable params: 277,896
Non-trainable params: 0
_________________________________________________________________
None
I would like to know why the rewritten model shows no input layer (I added input_dim=1000). At first I added model.add(tf.keras.layers.Input(input_size_1000,)), but it did not work. Then I commented out that line and added input_dim=1000, and there is still no input layer in the summary. By the way, is there anything wrong with my rewrite? The library I want to use has no input layer but provides a Flatten layer; does removing the input layer have any effect? And what should I do if I use a Flatten layer instead of an input layer, given that my data is a 1-D list? (In the library's MNIST example, the images are flattened before the Dense layers.) Thanks.
You must specify the input dimension of the first Dense layer; in your Sequential model that is what input_dim=1000 does. If you run print(model.inputs), you will see the input tensor, so the input exists even though model.summary() does not print a separate InputLayer row for it.
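A minimal sketch of this point (the layer sizes match your model, but the two-layer network here is just illustrative):

```python
import tensorflow as tf

# Sequential model: the first Dense layer declares its input size via
# input_dim=1000 (equivalent to input_shape=(1000,)).
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(128, input_dim=1000, activation='relu'))
model.add(tf.keras.layers.Dense(1000, activation='sigmoid'))

# The input tensor exists even though summary() shows no InputLayer row:
print(model.inputs)       # a list with one tensor of shape (None, 1000)
print(model.input_shape)  # (None, 1000)
```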
If you use a Flatten layer instead, you must give the Flatten layer an input_shape so that the input dimension of the next (Dense) layer is known, for example: tf.keras.layers.Flatten(input_shape=[24, 24])
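A short sketch of Flatten as the first layer (the 24x24 shape is just an example, as above; for data that is already 1-D, Flatten(input_shape=(1000,)) is a no-op that still tells the next Dense layer its input size):

```python
import tensorflow as tf

# Flatten as the first layer: for 2-D samples (e.g. 24x24 images)
# it reshapes each sample to a flat vector of 24*24 = 576 values,
# and at the same time fixes the model's input shape.
flat_model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(24, 24)),
    tf.keras.layers.Dense(128, activation='relu'),
])
print(flat_model.layers[0].output_shape)  # (None, 576)
```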