I am building a deep multimodal autoencoder that takes two inputs and produces two outputs (the reconstructed inputs). The two inputs have shapes (1000, 50) and (1000, 60) respectively, the model has 3 hidden layers, and the aim is to concatenate the two latent layers of input 1 and input 2.
Here is the full code of the model:
from tensorflow import keras
from tensorflow.keras.layers import Input, Dense, concatenate
from tensorflow.keras.models import Model

input_X = Input(shape=X[0].shape)
dense_X = Dense(40, activation='relu')(input_X)
dense1_X = Dense(20, activation='relu')(dense_X)
latent_X = Dense(2, activation='relu')(dense1_X)

input_X1 = Input(shape=X1[0].shape)
dense_X1 = Dense(40, activation='relu')(input_X1)
dense1_X1 = Dense(20, activation='relu')(dense_X1)
latent_X1 = Dense(2, activation='relu')(dense1_X1)

Concat_X_X1 = concatenate([latent_X, latent_X1])

decoding_X = Dense(20, activation='relu')(Concat_X_X1)
decoding1_X = Dense(40, activation='relu')(decoding_X)
output_X = Dense(X[0].shape[0], activation='sigmoid')(decoding1_X)

decoding_X1 = Dense(20, activation='relu')(Concat_X_X1)
decoding1_X1 = Dense(40, activation='relu')(decoding_X1)
output_X1 = Dense(X1[0].shape[0], activation='sigmoid')(decoding1_X1)

multi_modal_autoencoder = Model([input_X, input_X1], [output_X, output_X1], name='multi_modal_autoencoder')
encoder = Model([input_X, input_X1], Concat_X_X1)
encoder.save('encoder.h5')

multi_modal_autoencoder.compile(optimizer=keras.optimizers.Adam(lr=0.001), loss='mse')
model = multi_modal_autoencoder.fit([X, X1], [X, X1], epochs=70, batch_size=150)
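As a quick sanity check on the latent width: each branch ends in a Dense(2) layer, and concatenating the two gives 4 features per sample, hence the (1000, 4) target shape. With numpy stand-ins for the Keras tensors (hypothetical random arrays, not the real data):

```python
import numpy as np

# Hypothetical stand-ins for the two 2-unit latent outputs over 1000 samples.
latent_X = np.random.rand(1000, 2)
latent_X1 = np.random.rand(1000, 2)

# Keras' concatenate() joins on the last axis by default, like this:
Concat_X_X1 = np.concatenate([latent_X, latent_X1], axis=-1)
print(Concat_X_X1.shape)  # (1000, 4)
```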
I want to get the latent representation back from the encoder, which should be a numpy array of shape (1000, 4), and then use it as the input to another model. I hope someone can help me achieve this. To that end, I tried the following:
import h5py

file = h5py.File('encoder.h5', 'r')
keys = list(file.keys())  # returns the model weights as keys
value = file.get('model_weights')  # <HDF5 group "/model_weights" (9 members)>

The 9 members are ['concatenate_1', 'dense_1', 'dense_2', 'dense_3', 'dense_4', 'dense_5', 'dense_6', 'input_1', 'input_2'], and file['/model_weights/concatenate_1'] returns <HDF5 group "/model_weights/concatenate_1" (0 members)>. I then tried:

value = file['/model_weights/concatenate_1'][:]
But it returns an error:
AttributeError Traceback (most recent call last)
<ipython-input-18-7bc6cbac9468> in <module>
----> 1 value = file['/model_weights/concatenate_1'][:]
h5py\_objects.pyx in h5py._objects.with_phil.wrapper()
h5py\_objects.pyx in h5py._objects.with_phil.wrapper()
~\Anaconda3\envs\tensorflow\lib\site-packages\h5py\_hl\group.py in __getitem__(self, name)
260 raise ValueError("Invalid HDF5 object reference")
261 else:
--> 262 oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
263
264 otype = h5i.get_type(oid)
~\Anaconda3\envs\tensorflow\lib\site-packages\h5py\_hl\base.py in _e(self, name, lcpl)
135 else:
136 try:
--> 137 name = name.encode('ascii')
138 coding = h5t.CSET_ASCII
139 except UnicodeEncodeError:
AttributeError: 'slice' object has no attribute 'encode'
I assume X[0].shape[0] and X1[0].shape[0] are 50 and 60 (the per-sample feature sizes), since those are the widths the final Dense layers must reproduce. You have already made it through training, but I should point out that the return value of Model.fit is a History object holding the losses recorded during training. So the object you named model is not actually a model.
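To see this concretely, here is a minimal sketch (a tiny stand-in model, not your autoencoder) showing that fit() returns a History object while the trained model stays in the variable you compiled:

```python
import numpy as np
from tensorflow import keras

# Tiny stand-in model, just to inspect what fit() returns.
inp = keras.Input(shape=(4,))
out = keras.layers.Dense(4, activation='sigmoid')(inp)
m = keras.Model(inp, out)
m.compile(optimizer='adam', loss='mse')

data = np.random.rand(16, 4)
history = m.fit(data, data, epochs=2, verbose=0)

# fit() returns a History object; the trained model is still `m`.
print(type(history).__name__)        # History
print(list(history.history.keys()))  # ['loss']
```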
To predict values with this trained model, you need to call Model.predict(), which in your case would be:

multi_modal_autoencoder.predict([D1,D2])

Model.predict() returns numpy arrays of predictions; since the model has two outputs, you will get two arrays back and may need to reshape them after retrieving them. You can then use this output as the input to the next network.
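For the latent representation specifically: since you already built encoder = Model([input_X, input_X1], Concat_X_X1), you can call predict() on that encoder (it shares its layers, and therefore its trained weights, with the full autoencoder) to get the (1000, 4) array directly, without reading the .h5 file with h5py. A condensed sketch, with one Dense layer per branch for brevity and random data in place of yours:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras.layers import Input, Dense, concatenate

# Random data standing in for the real X and X1.
X = np.random.rand(1000, 50)
X1 = np.random.rand(1000, 60)

# One Dense layer per branch for brevity; your model has three.
input_X = Input(shape=(50,))
latent_X = Dense(2, activation='relu')(input_X)
input_X1 = Input(shape=(60,))
latent_X1 = Dense(2, activation='relu')(input_X1)
Concat_X_X1 = concatenate([latent_X, latent_X1])

# This encoder shares layers with the full autoencoder, so calling
# predict() after training the full model yields the trained latent codes.
encoder = keras.Model([input_X, input_X1], Concat_X_X1)
latents = encoder.predict([X, X1], verbose=0)
print(latents.shape)  # (1000, 4)
```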
I strongly encourage you to read the documentation.