I am running TensorFlow 1.3.0 on Ubuntu 16.04. The first time I run the code, everything works fine. On the second run, however, I get this error:
InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [?,784]
[[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[?,784], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Here is the traceback:
InvalidArgumentError                     Traceback (most recent call last)
<ipython-input-26-149c9b9d8878> in <module>()
     11     sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys})
     12     avg_cost += sess.run(cost_function, feed_dict={x: batch_xs, y: batch_ys})/total_batch
---> 13     summary_str = sess.run(merged_summary_op, feed_dict={x: batch_xs, y: batch_ys})
     14     summary_writer.add_summary(summary_str, iteration*total_batch + i)
     15     if iteration % display_step == 0:

/home/niraj/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in run(self, fetches, feed_dict, options, run_metadata)
    893     try:
    894       result = self._run(None, fetches, feed_dict, options_ptr,
--> 895                          run_metadata_ptr)
    896     if run_metadata:
    897       proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
/home/niraj/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in _run(self, handle, fetches, feed_dict, options, run_metadata)
   1122     if final_fetches or final_targets or (handle and feed_dict_tensor):
   1123       results = self._do_run(handle, final_targets, final_fetches,
-> 1124                              feed_dict_tensor, options, run_metadata)
   1125     else:
   1126       results = []

/home/niraj/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1319     if handle is None:
   1320       return self._do_call(_run_fn, self._session, feeds, fetches, targets,
-> 1321                            options, run_metadata)
   1322     else:
   1323       return self._do_call(_prun_fn, self._session, handle, feeds, fetches)

/home/niraj/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in _do_call(self, fn, *args)
   1338     except KeyError:
   1339       pass
-> 1340     raise type(e)(node_def, op, message)
   1341
   1342   def _extend_graph(self):
Here is the code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("/home/niraj/Documents/artificial intelligence/projects/tensorboard", one_hot=True)

learning_rate = 0.01
training_iteration = 200
batch_size = 100
display_step = 2

# TF graph input
x = tf.placeholder('float32', [None, 784])  # mnist data image of shape 28*28=784
y = tf.placeholder('float32', [None, 10])   # 0-9 digits recognition => 10 classes

W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

with tf.name_scope("Wx_b") as scope:
    model = tf.nn.softmax(tf.matmul(x, W) + b)  # Softmax

w_h = tf.summary.histogram("weights", W)
b_h = tf.summary.histogram("biases", b)

with tf.name_scope("cost_function") as scope:
    cost_function = -tf.reduce_sum(y*tf.log(model))
    tf.summary.scalar("cost_function", cost_function)

with tf.name_scope("train") as scope:
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_function)

init = tf.global_variables_initializer()
merged_summary_op = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(init)
    summary_writer = tf.summary.FileWriter('/home/niraj/Documents/artificial intelligence/projects/tensorboard', graph=sess.graph)
    for iteration in range(training_iteration):
        avg_cost = 0
        total_batch = int(mnist.train.num_examples/batch_size)
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys})
            avg_cost += sess.run(cost_function, feed_dict={x: batch_xs, y: batch_ys})/total_batch
            summary_str = sess.run(merged_summary_op, feed_dict={x: batch_xs, y: batch_ys})
            summary_writer.add_summary(summary_str, iteration*total_batch + i)
        if iteration % display_step == 0:
            print "Iteration:", '%04d' % (iteration + 1), "cost=", "{:.9f}".format(avg_cost)
    print "Tuning completed!"
    predictions = tf.equal(tf.argmax(model, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(predictions, "float"))
    print "Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels})
As a reminder: the code works perfectly the first time I run it; the error only appears on the second run. When I close my laptop and my Jupyter terminal, then reopen everything and run it again, it once more runs without any error, and then fails with the error above on the second run.
I ran into the same problem. So far the error does not occur when I remove the summary operations. I will update this if I find a way to keep the summaries.

UPDATE: I fixed this by following the suggestion here: "Error with feed values for placeholders when running the merged summary op". I replaced tf.summary.merge_all with tf.summary.merge([summary_var1, summary_var2]).
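Applied to the code in the question, that means keeping a handle on every summary op the cell creates and merging only those, so a second run of the cell cannot sweep in stale summary ops that are still attached to placeholders from the previous graph. A minimal sketch, reusing the names from the code above (storing the scalar summary in a variable, here cost_h, is my addition):

# Keep an explicit handle on each summary op this cell creates.
w_h = tf.summary.histogram("weights", W)
b_h = tf.summary.histogram("biases", b)
cost_h = tf.summary.scalar("cost_function", cost_function)

# Merge only these ops instead of calling tf.summary.merge_all(), which
# would also collect summaries left in the default graph by earlier runs.
merged_summary_op = tf.summary.merge([w_h, b_h, cost_h])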
A simpler way to fix this is to call tf.reset_default_graph() at the end of your loop, before the training starts again.
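For the code in the question, that amounts to wiping the default graph once a run has finished, so the next run of the same Jupyter cell builds its placeholders, variables, and summaries in a clean graph. A minimal sketch of where the call could go, under my reading of the suggestion:

import tensorflow as tf

# ... build the graph and run the training loop exactly as in the question ...

# Clear the default graph after training, so re-running this cell starts
# from an empty graph and tf.summary.merge_all() cannot pick up summary
# ops bound to placeholders from the previous run.
tf.reset_default_graph()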