Can't run prediction because of a problem with tf.placeholder



Sorry, I'm new to TensorFlow. I'm working on a simple one-layer perceptron script that just takes initialization parameters and trains a neural network with TensorFlow:

When I run it, TensorFlow complains:

You must feed a value for placeholder tensor 'input' with dtype float

The error points to this line:

input_tensor = tf.placeholder(tf.float32, [None, n_input], name="input")

Here is what I have done so far:

1) I initialize my input values

n_input = 10  # Number of input neurons
n_hidden_1 = 10  # Number of neurons in the hidden layer
n_classes = 3  # Number of output classes
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_hidden_1, n_classes]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

2) I initialize my placeholders:

input_tensor = tf.placeholder(tf.float32, [None, n_input], name="input")
output_tensor = tf.placeholder(tf.float32, [None, n_classes], name="output")

3) I train the neural network

# Construct model
prediction = onelayer_perceptron(input_tensor, weights, biases)
init = tf.global_variables_initializer() 

4) This is my onelayer_perceptron function, which just does the typical NN computation: it multiplies the input by the layer weights, adds the biases, and applies a sigmoid activation

def onelayer_perceptron(input_tensor, weights, biases):
    layer_1_multiplication = tf.matmul(input_tensor, weights['h1'])
    layer_1_addition = tf.add(layer_1_multiplication, biases['b1'])
    layer_1_activation = tf.nn.sigmoid(layer_1_addition)
    out_layer_multiplication = tf.matmul(layer_1_activation, weights['out'])
    out_layer_addition = out_layer_multiplication + biases['out']
    return out_layer_addition

5) I run my script

with tf.Session() as sess:
    sess.run(init)
    i = sess.run(input_tensor)
    print(i)

You are not feeding any input to your placeholder; you do that with a feed_dict.

You should do something like this:

out = sess.run(tensors_you_want_to_evaluate,
               feed_dict={input_tensor: <array of shape [batch_size, n_input]>,
                          output_tensor: <array of shape [batch_size, n_classes]>})
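
For example, here is a minimal, runnable sketch of that idea using random NumPy arrays in place of a real dataset (the batch size of 5 and the dummy one-hot labels are only assumptions for illustration):

import numpy as np

batch_x = np.random.rand(5, n_input).astype(np.float32)   # 5 samples, n_input features each
batch_y = np.zeros((5, n_classes), dtype=np.float32)
batch_y[np.arange(5), np.random.randint(0, n_classes, 5)] = 1.0  # random one-hot labels

with tf.Session() as sess:
    sess.run(init)
    # Evaluating `prediction` only requires feeding input_tensor;
    # output_tensor is only needed once you define a loss that uses it.
    pred = sess.run(prediction, feed_dict={input_tensor: batch_x})
    print(pred)

This also explains why sess.run(input_tensor) in your step 5 fails: a placeholder has no value of its own, it only holds whatever you feed it at run time.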
