Missing placeholder value crash during global variable initialization?



On TensorFlow 1.4 I'm getting a "you need a value for placeholder tensor..." error. The thing is, I am feeding this tensor:

feats = np.reshape(feats, (-1, var1, feat_dim, 1))
_, outlogits = sess.run([train_step, logits], feed_dict={inp_layer: feats,
                                                         targs: targets,
                                                         eta: 1e-4})

(Normally I'd do the reshape inside the graph, but I pulled it out for debugging purposes.)

The placeholder:

inp_layer = tf.placeholder(tf.float32, shape=[None, var1, feat_dim, 1])

The error says: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [?,66,200,1]

This error happens when I run sess.run(tf.global_variables_initializer()), so it hasn't even reached the part where placeholders should come into play, yet it's complaining about them?!

I think it may have to do with the fact that one of my layer sizes depends on a placeholder. (I do have validate_shape=False on the weights, though.) Will add more code.

Edit: example code that fails, marking where I think the problem comes from (remember, the code fails on global variable init):

edit2: yes, the problem is that line. So the question becomes: how can I have a graph in which the dimension of one of the weights (and hence of the output) is dynamic?

def init_weight(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.01), validate_shape=False)

def conv_layer(x, w, b, stride, padding, act=True):
    # striding over the features
    if act:
        return tf.nn.elu(tf.nn.conv2d(x, w, [1, stride, 1, 1], padding) + b)
    return tf.nn.conv2d(x, w, [1, stride, 1, 1], padding) + b

train_feats = '..'
train_json = '..'
feat_dim = 200
var1 = 20
batch_size = 64

inp_layer = tf.placeholder(tf.float32, shape=[None, var1, feat_dim, 1])
targs = tf.placeholder(tf.int64, shape=[None])
eta = tf.placeholder(tf.float32)

chunk_size = 3
w1 = init_weight([chunk_size, feat_dim, 1, 32])
b1 = tf.zeros([32])
a1 = conv_layer(inp_layer, w1, b1, stride=3, padding='VALID')

chunk_size = tf.shape(a1)[1]  # <==== ! THIS IS THE PROBLEM !
w5 = init_weight([chunk_size, 1, 32, 12])
b5 = tf.zeros([12])
a5 = conv_layer(a1, w5, b5, stride=1, padding='VALID', act=False)

logits_ = tf.reshape(a5, [-1, 12])
softmax = tf.nn.softmax(logits_)
cross_ent = tf.reduce_sum(tf.nn.sparse_softmax_cross_entropy_with_logits(labels=targs,
                                                                         logits=logits_))
train_step = tf.train.AdamOptimizer(eta).minimize(cross_ent)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for feats, targets in batch_gen(train_feats, train_json, var1, feat_dim):
        feats = np.reshape(feats, (-1, var1, feat_dim, 1))
        sess.run(train_step, feed_dict={inp_layer: feats,
                                        targs: targets,
                                        eta: 1e-4})

chunk_size = tf.shape(a1)[1]

tf.shape extracts the runtime shape of a1, not the static shape known at graph-definition time. Since a1 is the result of the convolution between inp_layer and w1, referencing a1 also requires resolving inp_layer. Because inp_layer is a placeholder, you get the error.

Since you're interested in the second dimension of a1, which is already known at graph-definition time, just use:

chunk_size = a1.shape[1].value

to extract the correct dimension value.
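As a sanity check on that static value: for VALID padding, the output length along a strided axis is floor((input − filter) / stride) + 1, so it is fully determined at graph-construction time from the placeholder's declared shape alone. A minimal sketch with the numbers from the posted snippet (the helper name valid_conv_out_len is mine, not a TensorFlow API):

```python
# Static output length of a VALID-padded convolution along a strided axis.
# This mirrors what a1.shape[1].value reports at graph-construction time,
# without ever running the graph or feeding the placeholder.
def valid_conv_out_len(in_len, filter_len, stride):
    return (in_len - filter_len) // stride + 1

# Numbers from the posted snippet: var1 = 20, chunk_size = 3, stride = 3.
print(valid_conv_out_len(20, 3, 3))  # -> 6
```

Because the result is a plain Python int, passing it to init_weight gives w5 a fully static shape, and global variable initialization no longer needs any placeholder input.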
