I'm trying to build an image classifier with TensorFlow (using TensorLayer), and I've run into several problems. One of them is that I can't get the value of a tensor for debugging, because of the dropout layers in the network.
The network looks like this:
sess = tf.InteractiveSession()
x = tf.placeholder(tf.float32, shape=[None, 100, 100, 3], name='x')
y_ = tf.placeholder(tf.float32, shape=[None, ], name='y_')
w_init = tf.truncated_normal_initializer(stddev=0.01)
b_init = tf.constant_initializer(value=0.0)
inputs = tl.layers.InputLayer(x, name='inputs')
conv1 = tl.layers.Conv2d(inputs, 64, (5, 5), act=tf.nn.relu, padding='SAME', W_init=w_init, b_init=b_init, name='conv1')
#(?, 96, 96, 64)
pool1 = tl.layers.MaxPool2d(conv1, (2, 2), padding='SAME', name='pool1')
#(?, 48, 48, 64)
conv2 = tl.layers.Conv2d(pool1, 128, (5, 5), act=tf.nn.relu, padding='SAME', W_init=w_init, b_init=b_init, name='conv2')
#(?, 44, 44, 128)
pool2 = tl.layers.MaxPool2d(conv2, (2, 2), padding='SAME', name='pool2')
#(?, 22, 22, 128)
conv3 = tl.layers.Conv2d(pool2, 256, (5, 5), act=tf.nn.relu, padding='SAME', W_init=w_init, b_init=b_init, name='conv3')
#(?, 18, 18, 256)
pool3 = tl.layers.MaxPool2d(conv3, (2, 2), padding='SAME', name='pool3')
#(?, 9, 9, 256)
flatten = tl.layers.FlattenLayer(pool3, name='flatten_layer')
#(?, 20736)
drop1 = tl.layers.DropoutLayer(flatten, keep=0.8, name='drop1')
#(?, 20736)
relu2 = tl.layers.DenseLayer(drop1, n_units=512, act = tf.nn.relu, name='relu2')
#(?, 512)
drop3 = tl.layers.DropoutLayer(relu2, keep=0.5, name='drop3')
#(?, 512)
network = tl.layers.DenseLayer(drop3, n_units=1, act = tf.identity, name='output_layer')
#(?, 1)
y = network.outputs
correct_prediction = tf.equal(tf.reduce_sum(y, 1), y_)
I wanted to find out what correct_prediction was, so I ran the session, feeding my data into the network:
print(sess.run(correct_prediction, feed_dict={
x: np.asarray(X_train[0:10],dtype=np.float32),
y_: np.asarray(Y_train[0:10],dtype=np.float32)
}))
But then I got this error:
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder' with dtype float
Tracing back through the stack, I found that the dropout layers caused the problem:
......
Caused by op 'Placeholder', defined at:
File "run_linear_output.py", line 33, in <module>
drop1 = tl.layers.DropoutLayer(flatten, keep=0.8, name='drop1')
File "/home/neoclub/wanghong/env/lib/python3.5/site-packages/tensorlayer/layers.py", line 963, in __init__
set_keep[name] = tf.placeholder(tf.float32)
File "/home/neoclub/wanghong/env/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 1507, in placeholder
name=name)
......
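The traceback makes the cause concrete: each tl.layers.DropoutLayer creates its own tf.placeholder for the keep probability and stores it in a module-level set_keep dict, so session.run fails while that placeholder is unfed. A minimal pure-Python model of that mechanism (no TensorFlow; all names below are stand-ins):

```python
# Stand-in model of TensorLayer's hidden keep-prob placeholder (no TensorFlow;
# every name here is illustrative). DropoutLayer registers a placeholder in a
# module-level dict, and session.run fails if any placeholder goes unfed.
set_keep = {}

def make_dropout_placeholder(name):
    set_keep[name] = object()            # stands in for tf.placeholder(tf.float32)
    return set_keep[name]

def run(fetch, feed_dict):
    # mimics session.run: every registered placeholder must appear in the feed
    for name, ph in set_keep.items():
        if ph not in feed_dict:
            raise ValueError("You must feed a value for placeholder "
                             "tensor '%s' with dtype float" % name)
    return fetch

keep_ph = make_dropout_placeholder('drop1')
try:
    run('correct_prediction', feed_dict={})      # hidden placeholder unfed
except ValueError as err:
    print(err)
print(run('correct_prediction', feed_dict={keep_ph: 0.8}))  # feeding it works
```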
After removing those layers from my network, I could get the result successfully:
......
[False False False False False False False False False False]
......
Obviously this is wrong (edit: because I forgot to round the output here), but that's another question.
After that, I noticed set_keep[name] = tf.placeholder(tf.float32) in the error message, so I changed my feed like this:
print(sess.run(correct_prediction, feed_dict={
x: np.asarray(X_train[0:10],dtype=np.float32),
y_: np.asarray(Y_train[0:10],dtype=np.float32),
drop1:0.8,
drop3:0.5
}))
But I got another error:
Traceback (most recent call last):
File "run_linear_output.py", line 79, in <module>
print(sess.run(correct_prediction, feed_dict={x: np.asarray(X_train[0:10],dtype=np.float32), y_: np.asarray(Y_train[0:10],dtype=np.float32), drop1:0.8, drop3:0.5}))
File "/home/neoclub/wanghong/env/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 778, in run
run_metadata_ptr)
File "/home/neoclub/wanghong/env/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 933, in _run
+ e.args[0])
TypeError: Cannot interpret feed_dict key as Tensor: Can not convert a DropoutLayer into a Tensor.
I'm completely confused here. How do I use the dropout layers correctly? What should I feed to make the session runnable?
Feeds are used to put data into the graph, usually through placeholders. In your feed dict you are trying to feed drop1 the value 0.8, but drop1 is your dropout layer itself. I think what you want to do is create two placeholders that will carry your dropout keep probabilities, like this:
drop_keep_1=tf.placeholder(tf.float32,None,name='drop_keep_1')
drop1=tl.layers.DropoutLayer(flatten, keep=drop_keep_1, name='drop1')
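The TypeError itself is worth unpacking: session.run feed keys must be (convertible to) tensors, and a DropoutLayer only wraps its output tensor in its .outputs attribute; the layer object is not a tensor. A stand-in sketch of that check (no TensorFlow; the classes here are hypothetical mock-ups):

```python
# Hypothetical stand-ins for tf.Tensor and tl.layers.DropoutLayer, just to
# show why a layer object is rejected as a feed_dict key.
class Tensor:
    pass

class DropoutLayer:
    def __init__(self):
        self.outputs = Tensor()   # the tensor the layer produces

def check_feed_key(key):
    # session.run does roughly this for every feed_dict key
    if not isinstance(key, Tensor):
        raise TypeError("Cannot interpret feed_dict key as Tensor: "
                        "Can not convert a %s into a Tensor."
                        % type(key).__name__)

drop1 = DropoutLayer()
check_feed_key(drop1.outputs)     # a tensor: accepted
try:
    check_feed_key(drop1)         # the layer itself: rejected
except TypeError as err:
    print(err)
```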
I notice you are not using the TensorFlow Layers API but tl.layers (which I had never heard of), but I think the problem is the same.
There are two ways to use DropoutLayer in TensorLayer:
1) Use the internal keep-probability placeholder that TensorLayer creates; see tutorial_mlp_dropout1.py
2) Instead of using a placeholder to control the keep probability, build two graphs, one for training and one for testing; see tutorial_mlp_dropout2.py
network = tl.layers.DropoutLayer(network, keep = 0.5, is_fix = True, name = 'drop1')
Adding is_fix=True is the key point.
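For way 1, the usual pattern (in TensorLayer 1.x, as far as I know) is to keep feeding x and y_ as before and merge in network.all_drop, the dict mapping each internal keep-prob placeholder to its rate, when training, or tl.utils.dict_to_one(network.all_drop) when evaluating. A plain-dict sketch of that pattern (string keys stand in for the real placeholder tensors):

```python
# Plain dicts stand in for TensorLayer's real objects: in TL 1.x,
# network.all_drop maps each internal keep-prob placeholder to its rate,
# and tl.utils.dict_to_one forces every rate to 1.0 for evaluation.
all_drop = {'drop1/keep': 0.8, 'drop3/keep': 0.5}

def dict_to_one(dp_dict):
    # mirrors tl.utils.dict_to_one: disable dropout by forcing keep = 1.0
    return {key: 1.0 for key in dp_dict}

# Training feed: dropout enabled at the configured rates.
feed_train = {'x': 'X_batch', 'y_': 'Y_batch'}
feed_train.update(all_drop)

# Evaluation feed: the same placeholders, but keep probability 1.0.
feed_eval = {'x': 'X_batch', 'y_': 'Y_batch'}
feed_eval.update(dict_to_one(all_drop))

print(feed_train['drop1/keep'], feed_eval['drop1/keep'])
```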