Keras: passing an array through a shared LSTM layer?



I want to build a network for time-series data, and I am scaling up an earlier instance of the problem.

# input = { sequence: list of int, time: int, score: float }
from keras.layers import Input, Embedding, LSTM, Dense, Concatenate
from keras.models import Model

embed = Embedding(output_dim=100, input_dim=self.sequence_range + 1, mask_zero=True, name='sequence_embedding')
sq_inpt = Input(shape=(self.MAX_SEQ_LEN, 1), name='sq_inpt')
sq_embed = embed(sq_inpt)
lstm_embed = LSTM(200, go_backwards=False)(sq_embed)
time_inpt = Input(shape=(1,), name='time_inpt')
score_inpt = Input(shape=(1,), name='score_inpt')
state_embed = Concatenate()([lstm_embed, time_inpt, score_inpt])
state_embed = Dense(300, activation='elu', name='state_embed_1')(state_embed)
state_embed = Dense(300, activation='elu', name='state_embed_2')(state_embed)
output = Dense(1, name='output')(state_embed)
model = Model(inputs=[sq_inpt, time_inpt, score_inpt], outputs=output)

My previous network has an embedding layer on the input, which is fed into an LSTM. The LSTM output, together with two additional numeric inputs, is then fed through two Dense layers into a single-unit output.

In the new version, I want to pass multiple sequences as input. Each sequence should go through the LSTM layer independently, and the results should then be concatenated before being added to state_embed.

# input = { sequences:list of(list of int), time: int, score: float}
embed = Embedding(output_dim=100, input_dim=self.sequence_range + 1, mask_zero=True, name='sequence_embedding')
sq_inpt = Input(shape=(self.MAX_SEQ_LEN, 1), name='sq_inpt')
sq_embed = embed(sq_inpt)
##CHANGE NEEDED HERE#######
lstm_layer = LSTM(200, go_backwards=False)
lstm_embed = []
for sequence in sq_embed:
    lstm_embed.append(lstm_layer(sequence))
###########################
.
.
.
state_embed = Concatenate()(lstm_embed + [time_inpt, score_inpt])
.
.
.
model = Model(inputs=[sq_inpt, time_inpt, score_inpt], outputs=output)
I know the code above doesn't work, but it is the clearest way I could think of to express what I want. I don't want a separate LSTM layer per sequence, because I want to share the weights.

How can I achieve this in Keras?

As you say, you want your network to process multiple sequences through LSTMs with shared weights, but in the code you provided only a single sequence is fed into the network. Here is my proposal for two sequences, for example:

sq_inpt1 = Input(shape=(self.MAX_SEQ_LEN, 1), name='sq_inpt1')
sq_inpt2 = Input(shape=(self.MAX_SEQ_LEN, 1), name='sq_inpt2')
sq_embd_1 = embed(sq_inpt1)
sq_embd_2 = embed(sq_inpt2)
lstm_layer = LSTM(200, go_backwards=False)
embd_list = [sq_embd_1, sq_embd_2]
lstm_embd = []
for embd in embd_list:
    lstm_embd.append(lstm_layer(embd))
time_inpt = Input(shape=(1,), name='time_inpt')
score_inpt = Input(shape=(1,), name='score_inpt')
state_embed = Concatenate()(lstm_embd + [time_inpt, score_inpt])
### Then continue the rest of the network
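To make the weight-sharing concrete, here is a minimal, self-contained sketch of the two-input approach (assuming TensorFlow 2.x Keras; the small dimensions and vocabulary size are illustrative, not from the question, and the sequence inputs are given shape `(MAX_SEQ_LEN,)` so the Embedding layer receives integer indices directly). Because the same `LSTM` instance is called on both branches, it appears only once in `model.layers` and holds a single weight set:

```python
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense, Concatenate
from tensorflow.keras.models import Model

MAX_SEQ_LEN = 6   # illustrative values, not from the question
VOCAB = 50

embed = Embedding(output_dim=16, input_dim=VOCAB, mask_zero=True)
lstm_layer = LSTM(8)  # one instance -> one shared weight set

sq_inpt1 = Input(shape=(MAX_SEQ_LEN,), name='sq_inpt1')
sq_inpt2 = Input(shape=(MAX_SEQ_LEN,), name='sq_inpt2')
# Calling the same layer object on both inputs reuses its weights.
lstm_embd = [lstm_layer(embed(x)) for x in (sq_inpt1, sq_inpt2)]

time_inpt = Input(shape=(1,), name='time_inpt')
score_inpt = Input(shape=(1,), name='score_inpt')
state_embed = Concatenate()(lstm_embd + [time_inpt, score_inpt])
output = Dense(1, name='output')(state_embed)
model = Model(inputs=[sq_inpt1, sq_inpt2, time_inpt, score_inpt],
              outputs=output)

# The LSTM is applied twice but exists once in the layer graph.
lstm_count = sum(isinstance(l, LSTM) for l in model.layers)
print(lstm_count)  # 1
```

The `Concatenate` layer takes one flat list of tensors, which is why the two LSTM outputs and the scalar inputs are joined with `+` rather than nested in a sub-list.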

Alternatively, you can use a single input array with an extra dimension (e.g., the first dimension indexes the sequence number); the script would then be modified as follows:

sq_list = Input(shape=(self.num_seq, self.MAX_SEQ_LEN, 1), name='sq_list')
embd_list = []
for i in range(self.num_seq):
    # slice out sequence i while keeping the batch axis
    embd_list.append(embed(sq_list[:, i]))
lstm_layer = LSTM(200, go_backwards=False)
lstm_embd = []
for embd in embd_list:
    lstm_embd.append(lstm_layer(embd))
time_inpt = Input(shape=(1,), name='time_inpt')
score_inpt = Input(shape=(1,), name='score_inpt')
state_embed = Concatenate()(lstm_embd + [time_inpt, score_inpt])
### Then continue the rest of the network
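A runnable end-to-end sketch of this stacked-input variant (again assuming TensorFlow 2.x Keras, with small illustrative dimensions and an explicit `Lambda` for the slice so the slicing works across Keras versions; the trailing size-1 axis is dropped so the Embedding receives integer indices):

```python
import numpy as np
from tensorflow.keras.layers import (Input, Embedding, LSTM, Dense,
                                     Concatenate, Lambda)
from tensorflow.keras.models import Model

NUM_SEQ, MAX_SEQ_LEN, VOCAB = 3, 6, 50  # illustrative values

embed = Embedding(output_dim=16, input_dim=VOCAB, mask_zero=True)
lstm_layer = LSTM(8)  # shared across all sequences

sq_list = Input(shape=(NUM_SEQ, MAX_SEQ_LEN), name='sq_list')
lstm_embd = []
for i in range(NUM_SEQ):
    # Slice out sequence i along axis 1, keeping the batch axis,
    # then run it through the shared embedding + LSTM.
    seq_i = Lambda(lambda t, i=i: t[:, i])(sq_list)
    lstm_embd.append(lstm_layer(embed(seq_i)))

time_inpt = Input(shape=(1,), name='time_inpt')
score_inpt = Input(shape=(1,), name='score_inpt')
state_embed = Concatenate()(lstm_embd + [time_inpt, score_inpt])
output = Dense(1, name='output')(state_embed)
model = Model(inputs=[sq_list, time_inpt, score_inpt], outputs=output)

batch = 4
pred = model.predict([
    np.random.randint(1, VOCAB, size=(batch, NUM_SEQ, MAX_SEQ_LEN)),
    np.random.rand(batch, 1),
    np.random.rand(batch, 1),
], verbose=0)
print(pred.shape)  # (4, 1)
```

Note the default-argument trick `i=i` in the `Lambda`: without it, every closure would capture the loop variable by reference and all slices would use the final value of `i`.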
