Stateful autoencoder in Keras



I am trying to create a stateful autoencoder model. The goal is to make the autoencoder stateful with respect to each time series. The data consists of 10 time series, each of length 567.

timeseries#1: 451, 318, 404, 199, 225, 158, 357, 298, 339, 155, 135, 239, 306, ....
timeseries#2: 304, 274, 150, 143, 391, 357, 278, 557, 98, 106, 305, 288, 325, ....
...
timeseries#10: 208, 138, 201, 342, 280, 282, 280, 140, 124, 261, 193, .....

My lookback is 28, so I generated the following sequences of 28 timesteps each (a sketch of this windowing is shown after the examples below):

[451, 318, 404, 199, 225, 158, 357, 298, 339, 155, 135, 239, 306, .... ]
[318, 404, 199, 225, 158, 357, 298, 339, 155, 135, 239, 306, 56, ....]
[404, 199, 225, 158, 357, 298, 339, 155, 135, 239, 306, 56, 890, ....]
...
[304, 274, 150, 143, 391, 357, 278, 557, 98, 106, 305, 288, 325, ....]
[274, 150, 143, 391, 357, 278, 557, 98, 106, 305, 288, 325, 127, ....]
[150, 143, 391, 357, 278, 557, 98, 106, 305, 288, 325, 127, 798, ....]
...
[208, 138, 201, 342, 280, 282, 280, 140, 124, 261, 193, .....]
[138, 201, 342, 280, 282, 280, 140, 124, 261, 193, 854, .....]
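
Roughly, the windowing looks like this (a minimal sketch; raw_series and make_windows are illustrative names, the real preprocessing code is not shown here):

import numpy as np

def make_windows(series, lookback=28):
    # slide a window of `lookback` steps over one time series
    return np.array([series[i:i + lookback]
                     for i in range(len(series) - lookback)])

# 567 - 28 = 539 windows per series, 10 * 539 = 5390 samples in total
# data = np.concatenate([make_windows(s) for s in raw_series])[..., np.newaxis]
# data.shape == (5390, 28, 1)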

This gives me 539 sequences per time series. What I need to do is make the LSTM stateful with respect to each time series, and reset the state after it has seen all the sequences of one time series. Here is my code:

batch_size = 35  # total number of samples is 5390, which is divisible by 35
timesteps = 28
n_features = 1
hunits = 14
# RepeatVector factor = timesteps / hunits = 2
epochs = 1000

from keras.layers import Input, LSTM, RepeatVector, Reshape
from keras.models import Model

inputEncoder = Input(batch_shape=(35, 28, 1), name='inputEncoder')
# return_state=True returns the output followed by the hidden state h and the cell state c
outEncoder, h, c = LSTM(14, stateful=True, return_state=True, name='outputEncoder')(inputEncoder)
encoder_model = Model(inputEncoder, outEncoder)
context = RepeatVector(2, name='inputDecoder')(outEncoder)
context_reshaped = Reshape((28, 1), name='ReshapeLayer')(context)
outDecoder = LSTM(1, return_sequences=True, stateful=True, name='decoderLSTM')(context_reshaped)
autoencoder = Model(inputEncoder, outDecoder)
autoencoder.compile(loss='mse', optimizer='rmsprop')

for i in range(epochs):
    history = autoencoder.fit(data, data,
                              validation_split=config['validation_split_ratio'],
                              shuffle=False,
                              batch_size=35,
                              epochs=1)
    autoencoder.reset_states()

Two questions:

1 - I get this error after the first epoch finishes, and I would like to know how that can happen:

ValueError: Cannot feed value of shape (6, 28, 1) for Tensor u'inputEncoder:0', which has shape '(35, 28, 1)'

2 - I don't think that model works the way I want. Here it will reset the states after all batches (i.e. after one epoch), which means after all the time series have been processed. How should I change it so that it is stateful per time series?

The problem was coming from the validation_split rate! It was set to 0.33, and when the split happens it tries to train on 3611 data samples, which is not divisible by my batch_size=35. Based on this post I could find the proper number to use; copied from that post:

def quantize_validation_split(validation_split, sample_count, batch_size):
    batch_count = sample_count // batch_size  # whole batches only
    return float(int(batch_count * validation_split)) / batch_count

Then you can call model.fit(..., validation_split=quantize_validation_split(0.05, len(X), batch_size)). But it would be cool if Keras did this for you inside fit().
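
For the numbers in this question (5390 samples, batch_size=35, requested split 0.33), here is a quick sanity check of the quantized ratio (my own illustration):

samples = 5390
batch_size = 35
batch_count = samples // batch_size           # 154 batches in total

# requested split of 0.33 -> Keras would train on int(5390 * 0.67) = 3611 samples,
# which is not a multiple of 35, hence the shape error above
vs = quantize_validation_split(0.33, samples, batch_size)
val_batches = round(batch_count * vs)         # 50 whole batches for validation
train_batches = batch_count - val_batches     # 104 whole batches for training
print(vs, train_batches * batch_size, val_batches * batch_size)  # ~0.3247, 3640, 1750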

Also, regarding making the autoencoder stateful in the way I need: there should not be a reset_states() at the end of each epoch; the state should be reset after each individual time series instead (see the sketch below).
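
A minimal sketch of that loop, assuming the windows are kept grouped and ordered per time series in an array data_by_series of shape (10, 539, 28, 1) (a hypothetical layout, not the code above), and that the model's Input batch_shape is changed to a batch size that divides 539 (e.g. 49):

series_batch = 49  # must divide 539 and match batch_shape=(49, 28, 1) in the model

for epoch in range(epochs):
    for series in data_by_series:              # one time series at a time
        autoencoder.fit(series, series,
                        batch_size=series_batch,
                        epochs=1,
                        shuffle=False)         # keep the windows in order
        autoencoder.reset_states()             # forget the state between time series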
