In my code I am trying to practice using the tf.train.batch function. The sess.run([optimizer]) line never returns anything; it just freezes. Can you spot my mistake?
import tensorflow as tf  # TF 1.x

# x_train and the constants (BATCH_SIZE, COLUMN, UNITS_OF_HIDDEN_LAYER,
# LEARNING_RATE, TRAINING_EPOCHS) are defined earlier in my script.
tensors = tf.convert_to_tensor(x_train, dtype=tf.float32)
tensors = tf.reshape(tensors, shape=x_train.shape)
batch = tf.train.batch([tensors], batch_size=BATCH_SIZE, enqueue_many=True)
# Weights and biases to hidden layer
Wh = tf.Variable(tf.random_normal([COLUMN-2, UNITS_OF_HIDDEN_LAYER], mean=0.0, stddev=0.05))
bh = tf.Variable(tf.zeros([UNITS_OF_HIDDEN_LAYER]))
h = tf.nn.tanh(tf.matmul(batch, Wh) + bh)
# Weights and biases to output layer
Wo = tf.transpose(Wh) # tied weights
bo = tf.Variable(tf.zeros([COLUMN-2]))
y = tf.nn.tanh(tf.matmul(h, Wo) + bo)
# Objective functions
mean_sqr = tf.reduce_mean(tf.pow(batch - y, 2))
optimizer = tf.train.AdamOptimizer(LEARNING_RATE).minimize(mean_sqr)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for j in range(TRAINING_EPOCHS):
    sess.run([optimizer])
    print("optimizer: ")
tf.train.batch is a queue, so you need to start the queue runners in your session with tf.train.start_queue_runners. You can read more about this in TensorFlow's Threading and Queues guide.

Make the following changes:
with tf.Session() as sess:
    sess.run(init)
    # Create a coordinator and launch the queue runner threads
    # that fill the tf.train.batch queue.
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        # Training loop
        for j in range(TRAINING_EPOCHS):
            if coord.should_stop():
                break
            sess.run([optimizer])
            print("optimizer: ")
    except Exception as e:
        # When done, ask the threads to stop.
        coord.request_stop(e)
    finally:
        coord.request_stop()
        # Wait for threads to finish.
        coord.join(threads)
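
For reference, here is a minimal self-contained sketch of the same pattern (TF 1.x API, with made-up toy data and a hypothetical batch size, not your actual values). Without the start_queue_runners call, sess.run(batch) blocks forever, which is exactly the freeze you are seeing:

import numpy as np
import tensorflow as tf  # TF 1.x

# Hypothetical toy data: 100 examples with 8 features each.
x_train = np.random.rand(100, 8).astype(np.float32)
BATCH_SIZE = 10

tensors = tf.convert_to_tensor(x_train, dtype=tf.float32)
batch = tf.train.batch([tensors], batch_size=BATCH_SIZE, enqueue_many=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    coord = tf.train.Coordinator()
    # Comment out the next line and sess.run(batch) hangs forever,
    # because nothing is feeding the queue that batch dequeues from.
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    print(sess.run(batch).shape)  # prints (10, 8)
    coord.request_stop()
    coord.join(threads)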