I am trying to develop an image captioning model. I am referring to this GitHub repository. I have three methods that do the following:
- Generate the image model
- Generate the caption model
- Concatenate the image and caption models together
Since the code is quite long, I have created a Gist showing these methods.
Below are the summaries of my image model and caption model.
But when I run the code, I get this error:
Traceback (most recent call last):
File "trainer.py", line 99, in <module>
model.fit([images, encoded_captions], one_hot_captions, batch_size = 1, epochs = 5)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/engine/training.py", line 950, in fit
batch_size=batch_size)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/engine/training.py", line 671, in _standardize_user_data
self._set_inputs(x)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/engine/training.py", line 575, in _set_inputs
assert len(inputs) == 1
AssertionError
Since the error comes from inside the Keras library, I don't know how to debug it. Something seems to go wrong when I try to concatenate the two models together.
I would like to know if I am missing something here.
You need to use the output attribute to get each model's output tensor, then connect them with the Keras functional API (via the Concatenate layer or its equivalent functional interface concatenate) and build the final model:
from keras.models import Model
from keras.layers import LSTM, Dense, Activation, concatenate

image_model = get_image_model()
language_model = get_language_model(vocab_size)

# Merge the output tensors of the two models, not the model objects themselves
merged = concatenate([image_model.output, language_model.output])
x = LSTM(256, return_sequences=False)(merged)
x = Dense(vocab_size)(x)
out = Activation('softmax')(x)

# The final model takes both inputs and predicts the next-word distribution
model = Model([image_model.input, language_model.input], out)
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
model.fit([images, encoded_captions], one_hot_captions, ...)
As you already do in your code, you can also wrap the model-building logic in a function:
def get_concatenated_model(image_model, language_model, vocab_size):
    merged = concatenate([image_model.output, language_model.output])
    x = LSTM(256, return_sequences=False)(merged)
    x = Dense(vocab_size)(x)
    out = Activation('softmax')(x)
    model = Model([image_model.input, language_model.input], out)
    return model
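With that helper, training would look roughly like the sketch below. This assumes your own get_image_model and get_language_model functions and the images, encoded_captions, and one_hot_captions arrays from your Gist; the batch_size and epochs values are the ones shown in your traceback.

# Minimal usage sketch (get_image_model, get_language_model and the training
# arrays come from the asker's own code and are assumed here)
image_model = get_image_model()
language_model = get_language_model(vocab_size)

model = get_concatenated_model(image_model, language_model, vocab_size)
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
model.fit([images, encoded_captions], one_hot_captions, batch_size=1, epochs=5)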