TensorFlow neural network is wired wrong, but what is missing?



I created a very simple TensorFlow neural network, but clearly I must have skipped a step somewhere, or mixed up sample code from different tutorials, because the results are nonsensical and the training error only increases with every epoch.

Here is a fully self-contained example (MVCE) that tries to train the network to compute the square function:

import tensorflow as tf
import numpy as np

# hard-coded input and labels for demonstration
training_x = np.array([[1.], [2.], [3.], [4.], [5.]]).T
labels_training = np.array([[1.], [4.], [9.], [16.], [25.]]).T

# Hyperparameters
num_epochs = 1000
learning_rate = 0.001
LAYERS = 3

# set up the neural network
INPUT = len(training_x)
OUTPUT = len(labels_training)
X = tf.placeholder(tf.float32, shape=[INPUT, None])
Y = tf.placeholder(tf.float32, shape=[OUTPUT, None])
parameters = {
    'W1': tf.Variable(np.random.randn(LAYERS, INPUT), dtype=tf.float32),
    'b1': tf.Variable(np.zeros([LAYERS, 1]), dtype=tf.float32),
    'W2': tf.Variable(np.random.randn(OUTPUT, LAYERS), dtype=tf.float32),
    'b2': tf.Variable(np.zeros([OUTPUT, 1]), dtype=tf.float32)
}
Z1 = tf.add(tf.matmul(parameters['W1'], X), parameters['b1'])  # W1*X + b1
A2 = tf.nn.relu(Z1)
Z2 = tf.add(tf.matmul(parameters['W2'], A2), parameters['b2'])
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=Z2, labels=Y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(num_epochs):
        _, c = sess.run([optimizer, cost], feed_dict={X: training_x, Y: labels_training})
        if epoch % 200 == 0:
            print("Cost after epoch %i: %f" % (epoch, c))
    # Test predictions by computing the output using the training set as input
    output = sess.run(Z2, feed_dict={X: training_x})
    print(np.array2string(output, precision=3))

Sample output (YMMV because of the randomly initialized vectors):

Cost after epoch 0: 158.512558
Cost after epoch 200: 227.178513
Cost after epoch 400: 319.617218
Cost after epoch 600: 436.471069
Cost after epoch 800: 577.651733
[[23.437 38.291 53.145 67.999 82.852]]

I tried your code, and I think you should change the cost function. If I change it to cost = tf.reduce_mean(tf.losses.mean_squared_error(labels=Y, predictions=Z2)), it works much better.
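One way to see why softmax cross-entropy cannot work here: with Z2 of shape (1, 5), the softmax is taken along the last axis, i.e. across the five training examples as if they were classes, and the labels row [1, 4, 9, 16, 25] is nowhere near a probability distribution. A small NumPy sketch (values taken from the labels above; the axis choice mirrors the TF default, as I understand it):

```python
import numpy as np

# Z2 has shape (1, 5): one output unit, five training examples.
# Softmax along the last axis normalizes across the five *examples*
# as if they were classes, and expects the labels to sum to 1.
logits = np.array([[1., 4., 9., 16., 25.]])
labels = np.array([[1., 4., 9., 16., 25.]])  # sums to 55, not 1

shifted = logits - logits.max()
softmax = np.exp(shifted) / np.exp(shifted).sum()
cross_entropy = -(labels * np.log(softmax)).sum()

print(labels.sum())        # 55.0 -- not a valid target distribution
print(cross_entropy > 100) # True -- huge "loss" on a perfect prediction
```

Even when the prediction exactly equals the labels, the cross-entropy is enormous, so minimizing it pushes the network somewhere meaningless.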

Edit: when I do not transpose your input and output data, it drives the cost down to 0 within 200 epochs.
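The suggested MSE fix can be sketched in plain NumPy (my own minimal training loop and seed, using the row convention x @ W; hyperparameters are illustrative, not tuned):

```python
import numpy as np

# Minimal NumPy re-implementation of the network with MSE loss,
# to illustrate that the cost now decreases instead of diverging.
rng = np.random.default_rng(42)
X = np.array([[1.], [2.], [3.], [4.], [5.]])  # (batch, features)
Y = X ** 2

W1 = rng.standard_normal((1, 3)); b1 = np.zeros((1, 3))
W2 = rng.standard_normal((3, 1)); b2 = np.zeros((1, 1))
lr = 0.001

losses = []
for epoch in range(1000):
    # forward pass
    Z1 = X @ W1 + b1
    A1 = np.maximum(Z1, 0.0)              # ReLU
    Z2 = A1 @ W2 + b2
    diff = Z2 - Y
    losses.append((diff ** 2).mean())     # mean squared error
    # backprop for MSE, plain gradient descent
    dZ2 = 2 * diff / len(X)
    dW2 = A1.T @ dZ2; db2 = dZ2.sum(axis=0, keepdims=True)
    dA1 = dZ2 @ W2.T
    dZ1 = dA1 * (Z1 > 0)
    dW1 = X.T @ dZ1; db1 = dZ1.sum(axis=0, keepdims=True)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(losses[0] > losses[-1])  # loss goes down, unlike the original
```

This uses plain gradient descent rather than Adam, so convergence is slower than the 200 epochs mentioned above, but the direction of the loss is the point.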

I think this is because

Z1 = tf.add(tf.matmul(parameters['W1'], X), parameters['b1'])

should be

Z1 = tf.add(tf.matmul(X, parameters['W1']), parameters['b1'])

and likewise for Z2.
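Both matmul orders are mathematically valid; they just correspond to opposite data layouts, so the weights and data have to agree. A NumPy sketch of the two shape conventions (names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Column convention (as in the question): X is (features, batch)
X_cols = np.array([[1., 2., 3., 4., 5.]])  # (1, 5)
W1 = rng.standard_normal((3, 1))           # (layers, features)
b1 = np.zeros((3, 1))
Z1 = W1 @ X_cols + b1                      # (3, 5)

# Row convention (as in the answer): X is (batch, features)
X_rows = X_cols.T                          # (5, 1)
W1_row = W1.T                              # (features, layers)
Z1_row = X_rows @ W1_row + b1.T            # (5, 3)

# Same numbers either way, just transposed.
print(np.allclose(Z1, Z1_row.T))  # True
```

The bug only appears when the data is transposed to one convention while the matmul order assumes the other.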

An explanation can be found in this SO post.
