How to use TensorFlow's Dataset API iterator as the input of a (recurrent) neural network?



When using TensorFlow's Dataset API iterator, my goal is to define an RNN that operates on the iterator's get_next() tensor as its input (see (1) in the code).


However, simply defining the dynamic_rnn with get_next() as its input leads to an error: ValueError: Initializer for variable rnn/basic_lstm_cell/kernel/ is from inside a control-flow construct, such as a loop or conditional. When creating a variable inside a loop or conditional, use a lambda as the initializer.

Now, I know one workaround is to simply create a placeholder for next_batch, then eval() that tensor (since you cannot feed a tensor directly) and pass the result via feed_dict (see the placeholder X at (2) in the code). However, if I understand it correctly, this is not an efficient solution, as we first evaluate the tensor and then feed it back into the graph, so the data makes an extra round trip through Python.

Is there a way to either:

  1. define the dynamic_rnn directly on top of the iterator's output;

or:

  2. somehow pass the existing get_next() tensor into the placeholder that is the input of the dynamic_rnn?

Full working example below; the (1) version is what I would like, but it does not work, while (2) is the workaround that does.

import tensorflow as tf
from tensorflow.contrib.rnn import BasicLSTMCell
from tensorflow.python.data import Iterator
data = [ [[1], [2], [3]], [[4], [5], [6]], [[1], [2], [3]] ]
dataset = tf.data.Dataset.from_tensor_slices(data)
dataset = dataset.batch(2)
iterator = Iterator.from_structure(dataset.output_types,
                                   dataset.output_shapes)
next_batch = iterator.get_next()
iterator_init = iterator.make_initializer(dataset)
# (2): placeholder for the feed_dict workaround
X = tf.placeholder(tf.float32, shape=(None, 3, 1))
cell = BasicLSTMCell(num_units=8)
# (1): what I would like: feed the iterator output directly into dynamic_rnn (raises the ValueError above)
# outputs, states = lstm_outputs, lstm_states = tf.nn.dynamic_rnn(cell, next_batch, dtype=tf.float32)
# (2): workaround: run dynamic_rnn on the placeholder instead
outputs, states = lstm_outputs, lstm_states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    sess.run(iterator_init)
    # (1):
    # o, s = sess.run([outputs, states])
    # o, s = sess.run([outputs, states])
    # (2):
    o, s = sess.run([outputs, states], feed_dict={X: next_batch.eval()})
    o, s = sess.run([outputs, states], feed_dict={X: next_batch.eval()})

(Using TensorFlow 1.4.0 and Python 3.6.)

Thanks a lot :)

UPDATE: It turns out that the mysterious error is likely a bug in TensorFlow; see https://github.com/tensorflow/tensorflow/issues/14729. More specifically, the error actually comes from feeding data of the wrong type (in my example above, the data array contains int32 values, but it should contain floats).

Instead of the ValueError: Initializer for variable rnn/basic_lstm_cell/kernel/ is from inside a control-flow construct error, TensorFlow should have returned: TypeError: Tensors in list passed to 'values' of 'ConcatV2' Op have types [int32, float32] that don't all match. (see 1).
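For context, the ConcatV2 in that message presumably comes from inside BasicLSTMCell, which concatenates the input batch with its float32 hidden state; a minimal, hypothetical reproduction of the same type mismatch outside of the RNN (not part of the original example) would be:

import tensorflow as tf
# int32 input batch, as produced by the original Python int data list
x_int = tf.constant([[1, 2, 3]], dtype=tf.int32)
# float32 hidden state, like the one BasicLSTMCell creates internally
h_float = tf.zeros([1, 8], dtype=tf.float32)
# Raises: TypeError: Tensors in list passed to 'values' of 'ConcatV2' Op
# have types [int32, float32] that don't all match.
bad = tf.concat([x_int, h_float], axis=1)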

To fix this, simply change

data = [ [[1], [2], [3]], [[4], [5], [6]], [[1], [2], [3]] ]

to

data = np.array([[ [1], [2], [3]], [[4], [5], [6]], [[1], [2], [3]] ], dtype=np.float32)

and the following code should then work as expected:

import tensorflow as tf
import numpy as np
from tensorflow.contrib.rnn import BasicLSTMCell
from tensorflow.python.data import Iterator
data = np.array([[ [1], [2], [3]], [[4], [5], [6]], [[1], [2], [3]] ], dtype=np.float32)
dataset = tf.data.Dataset.from_tensor_slices(data)
dataset = dataset.batch(2)
iterator = Iterator.from_structure(dataset.output_types,
                                   dataset.output_shapes)
next_batch = iterator.get_next()
iterator_init = iterator.make_initializer(dataset)
# (2):
# X = tf.placeholder(tf.float32, shape=(None, 3, 1))
cell = BasicLSTMCell(num_units=8)
# (1):
outputs, states = lstm_outputs, lstm_states = tf.nn.dynamic_rnn(cell, next_batch, dtype=tf.float32)
# (2):
# outputs, states = lstm_outputs, lstm_states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    sess.run(iterator_init)
    # (1):
    o, s = sess.run([outputs, states])
    o, s = sess.run([outputs, states])
    # (2):
    # o, s = sess.run([outputs, states], feed_dict={X: next_batch.eval()})
    # o, s = sess.run([outputs, states], feed_dict={X: next_batch.eval()})
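As an aside, an alternative that should be equivalent (a sketch I have not verified under TensorFlow 1.4.0, and not part of the original fix) is to keep the int-valued data list and cast inside the input pipeline with Dataset.map, so that the iterator already yields float32 batches:

import tensorflow as tf
from tensorflow.contrib.rnn import BasicLSTMCell
from tensorflow.python.data import Iterator
data = [ [[1], [2], [3]], [[4], [5], [6]], [[1], [2], [3]] ]  # int values, unchanged
dataset = tf.data.Dataset.from_tensor_slices(data)
dataset = dataset.map(lambda x: tf.cast(x, tf.float32))  # cast inside the pipeline
dataset = dataset.batch(2)
iterator = Iterator.from_structure(dataset.output_types,
                                   dataset.output_shapes)
next_batch = iterator.get_next()
iterator_init = iterator.make_initializer(dataset)
cell = BasicLSTMCell(num_units=8)
outputs, states = tf.nn.dynamic_rnn(cell, next_batch, dtype=tf.float32)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    sess.run(iterator_init)
    o, s = sess.run([outputs, states])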
