Seq2Seq in TensorFlow without embeddings



I am trying to create a very basic multivariate time series autoencoder. I want to be able to reconstruct the two signals I pass in.

Most of the references I am looking at either use older versions of the API or use embeddings. I am trying to use the latest higher-level API, but it is not obvious how the pieces fit together.

import tensorflow as tf

class Seq2SeqConfig():
    def __init__(self):
        self.sequence_length = 15  # num of time steps
        self.hidden_units = 64     # ?
        self.num_features = 2
        self.batch_size = 10

config = Seq2SeqConfig()

# Expect input as batch major.
encoder_inputs = tf.placeholder(shape=(None, config.sequence_length, config.num_features), dtype=tf.float32)
decoder_inputs = tf.placeholder(shape=(None, config.sequence_length, config.num_features), dtype=tf.float32)
decoder_targets = tf.placeholder(shape=(None, config.sequence_length, config.num_features), dtype=tf.float32)

# Convert inputs to time major
encoder_inputs_tm = tf.transpose(encoder_inputs, [1, 0, 2])
decoder_inputs_tm = tf.transpose(decoder_inputs, [1, 0, 2])
decoder_targets_tm = tf.transpose(decoder_targets, [1, 0, 2])

# setup encoder
encoder_cell = tf.contrib.rnn.LSTMCell(config.hidden_units)
encoder_outputs, encoder_final_state = tf.nn.dynamic_rnn(
    cell=encoder_cell,
    inputs=encoder_inputs_tm,
    time_major=True,
    dtype=tf.float32)

# setup decoder
decoder_cell = tf.contrib.rnn.LSTMCell(config.hidden_units)

# The sequence length is mandatory. Not sure what the expectation is here?
helper = tf.contrib.seq2seq.TrainingHelper(
    decoder_inputs_tm,
    sequence_length=tf.constant(config.sequence_length, dtype=tf.int32, shape=[config.batch_size]),
    time_major=True)

decoder = tf.contrib.seq2seq.BasicDecoder(decoder_cell, helper, encoder_final_state)
decoder_outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder, output_time_major=True)

# loss calculation
loss_op = tf.reduce_mean(tf.square(decoder_outputs.rnn_output - decoder_targets_tm))

The loss op fails because the shapes differ: decoder_targets is (?, 15, 2) while decoder_outputs.rnn_output is (?, ?, 64).

Question 1: Am I missing an operation somewhere that reshapes the decoder output? I loosely followed this TensorFlow tutorial: https://www.tensorflow.org/tutorials/seq2seq

There is a projection_layer operation passed into the BasicDecoder. Is that what it is for?

projection_layer = layers_core.Dense(tgt_vocab_size, use_bias=False)

I don't see the layers_core.Dense() function defined anywhere. I assume it is deprecated or internal.

Question 2: Which helper do you use for inference when not using embeddings?

Question 3: What is the ideal size for the hidden units? I assume that, since we want to reduce the dimensionality of the latent space, it needs to be smaller than the size of the input. How does that translate when your input has sequence length = 15 and number of features = 2?

Should the number of hidden units be < 15, < 2, or < 15 * 2?

Figured out the answer to Question 1:

from tensorflow.python.layers.core import Dense

# Project the decoder output from hidden_units down to num_features at each step.
output_layer = Dense(config.num_features)
decoder = tf.contrib.seq2seq.BasicDecoder(decoder_cell, helper, encoder_final_state, output_layer)

Reference: https://github.com/udacity/deep-learning/blob/master/seq2seq/sequence_to_sequence_implementation.ipynb
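
With output_layer in place, the Dense projection is applied at every decode step, so decoder_outputs.rnn_output ends up with num_features in its last dimension and the loss against the targets lines up. As a side note, tensorflow.python.layers.core is an internal module path; in TF 1.x the same layer is also exposed publicly, so an equivalent wiring (a sketch reusing the names from the code above) could look like this:

import tensorflow as tf

# Same projection as above, but through the public TF 1.x API instead of
# the internal tensorflow.python.layers.core module.
output_layer = tf.layers.Dense(config.num_features)
decoder = tf.contrib.seq2seq.BasicDecoder(
    decoder_cell, helper, encoder_final_state, output_layer=output_layer)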

The other two questions still stand.

Regarding Question 3: I would suggest that you run several training and validation cycles with different hyperparameters to find what works best for your data and requirements. You can have a look at my implementation here (https://github.com/fritzfitzpatrick/tensorflow-seq2seq-generic-example), where I built a very basic training and validation loop that stops once the validation loss has not dropped for a number of cycles, to prevent overfitting.
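
As an illustration of that kind of search (not the linked repository's code), here is a minimal sketch that tries a few hidden-unit sizes and stops each run once the validation loss has not improved for a set number of epochs. build_graph, train_batches, and val_batches are hypothetical stand-ins for your own graph construction and data pipeline:

import numpy as np
import tensorflow as tf

def run_trial(hidden_units, train_batches, val_batches, patience=5, max_epochs=100):
    """Train one model and return its best validation loss (early stopping)."""
    tf.reset_default_graph()
    # build_graph is assumed to build the seq2seq graph for the given hidden
    # size and return the placeholders, the training op, and the loss tensor.
    enc_in, dec_in, dec_tgt, train_op, loss_op = build_graph(hidden_units)

    best_val, epochs_without_improvement = np.inf, 0
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(max_epochs):
            for enc, dec, tgt in train_batches():
                sess.run(train_op, {enc_in: enc, dec_in: dec, dec_tgt: tgt})
            val_losses = [sess.run(loss_op, {enc_in: enc, dec_in: dec, dec_tgt: tgt})
                          for enc, dec, tgt in val_batches()]
            val_loss = float(np.mean(val_losses))
            if val_loss < best_val:
                best_val, epochs_without_improvement = val_loss, 0
            else:
                epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # validation loss stopped improving
    return best_val

# Try a few latent sizes and keep the one with the lowest validation loss.
# results = {h: run_trial(h, train_batches, val_batches) for h in (4, 8, 16, 32)}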

Regarding Question 2: I am still working on a custom helper implementation, and it looks like it is going somewhere. You can find the full sample code here (https://github.com/fritzfitzpatrick/tensorflow-seq2seq-generic-example/blob/master/tensorflow_custom_helper.ipynb).

import tensorflow as tf

batch_size = 5
features_dec_inp = 2  # number of features in target sequence
go_token = 2.0        # float, since the decoder consumes float feature vectors
end_token = 3

sess = tf.InteractiveSession()

def initialize_fn():
    finished = tf.tile([False], [batch_size])
    start_inputs = tf.fill([batch_size, features_dec_inp], go_token)
    return (finished, start_inputs)

def next_inputs_fn(time, outputs, state, sample_ids):
    del time, sample_ids
    # finished needs to update after the last step.
    # One could use conditional logic based on sequence length
    # if the sequence length is known in advance.
    finished = tf.tile([False], [batch_size])
    # Next inputs should be the output of the dense layer,
    # unless the above finished logic returns [True],
    # in which case next inputs can be anything in the right shape.
    next_inputs = tf.fill([batch_size, features_dec_inp], 0.5)
    return (finished, next_inputs, state)

helper = tf.contrib.seq2seq.CustomHelper(
    initialize_fn=initialize_fn,
    sample_fn=tf.identity,
    next_inputs_fn=next_inputs_fn)

print(helper)
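
As a rough sketch of how a helper like this could then be wired into inference decoding (assuming a TF version whose CustomHelper accepts sample_ids_shape/sample_ids_dtype, and reusing decoder_cell, encoder_final_state, output_layer, and config from the question's graph; the helper's batch_size must also match the batch fed to the encoder):

# sample_fn = tf.identity emits float vectors rather than token ids, so the
# helper has to declare their shape and dtype explicitly.
inference_helper = tf.contrib.seq2seq.CustomHelper(
    initialize_fn=initialize_fn,
    sample_fn=tf.identity,
    next_inputs_fn=next_inputs_fn,
    sample_ids_shape=[features_dec_inp],
    sample_ids_dtype=tf.float32)

inference_decoder = tf.contrib.seq2seq.BasicDecoder(
    decoder_cell, inference_helper, encoder_final_state,
    output_layer=output_layer)

# maximum_iterations caps the unrolling, because next_inputs_fn above always
# returns finished = False.
inference_outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(
    inference_decoder,
    output_time_major=True,
    maximum_iterations=config.sequence_length)

reconstruction = inference_outputs.rnn_output  # [time, batch, num_features]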

Regarding Question 1: This is the code I use to reduce the dimensionality of the decoder output to the number of features in my target sequence:

train_output_dense = tf.layers.dense(
    train_dec_out_logits,     # [batch_size x seq_length x num_units]
    features_dec_exp_out)     # [batch_size x seq_length x num_target_features]
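
After that projection the shapes line up with the target sequence, so the reconstruction loss from the question can be applied directly. A minimal sketch, assuming train_targets is a float32 tensor with the same [batch_size x seq_length x num_target_features] layout:

# Mean squared error between the projected decoder output and the targets.
loss_op = tf.reduce_mean(tf.square(train_output_dense - train_targets))
train_op = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(loss_op)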
