TensorFlow GradientDescentOptimizer doesn't update variables



I am new to machine learning. I started with the simplest example: classifying MNIST handwritten digit images with softmax and gradient descent. By referring to some other examples, I came up with the logistic regression below:

import tensorflow as tf
import numpy as np

# Load MNIST and scale the pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = np.float32(x_train / 255.0)
x_test = np.float32(x_test / 255.0)

# Placeholders for a batch of images and their integer labels
X = tf.placeholder(tf.float32, [None, 28, 28])
Y = tf.placeholder(tf.uint8, [100])
XX = tf.reshape(X, [-1, 784])

# Model parameters, both initialized to zero
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

def err(x, y):
    predictions = tf.matmul(x, W) + b
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=tf.reshape(y, [-1, 1]), logits=predictions))
    # value = tf.reduce_mean(y * tf.log(predictions))
    # loss = -tf.reduce_mean(tf.one_hot(y, 10) * tf.log(predictions)) * 100.
    return loss

# cost = err(np.reshape(x_train[:100], (-1, 784)), y_train[:100])
cost = err(tf.reshape(X, (-1, 784)), Y)
optimizer = tf.train.GradientDescentOptimizer(0.005).minimize(cost)

init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

# Cost on the first 100 training examples before training
# temp = sess.run(tf.matmul(XX, W) + b, feed_dict={X: x_train[:100]})
temp = sess.run(cost, feed_dict={X: x_train[:100], Y: y_train[:100]})
print(temp)
# print(temp.dtype)
# print(type(temp))

# Train on 100 mini-batches of 100 images each
for i in range(100):
    sess.run(optimizer, feed_dict={X: x_train[i * 100: 100 * (i + 1)], Y: y_train[i * 100: 100 * (i + 1)]})
    # sess.run(optimizer, feed_dict={X: x_train[: 100], Y: y_train[:100]})

# Cost on the same examples after training
temp = sess.run(cost, feed_dict={X: x_train[:100], Y: y_train[:100]})
print(temp)

sess.close()

I try to run the optimizer for some iterations, feeding it the training image data and labels. As I understand it, the variables W and b should be updated while the optimizer runs, so that the model produces different results before and after training. But with this code, the printed costs of the model before and after the optimizer runs are the same. What is wrong here?
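A quick way to test that assumption would be to read W back before and after the training loop and compare (a hypothetical check, not part of the code above):

w_before = sess.run(W)
for i in range(100):
    sess.run(optimizer, feed_dict={X: x_train[i * 100: 100 * (i + 1)], Y: y_train[i * 100: 100 * (i + 1)]})
w_after = sess.run(W)
# 0.0 here would mean the weights were never updated at all
print(np.abs(w_after - w_before).max())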

You are initializing the weight matrix W with zeros, so all parameters receive the same gradient value at every weight update. For weight initialization, use tf.truncated_normal(), tf.random_normal(), tf.contrib.layers.xavier_initializer(), or something similar, rather than zeros.
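For example, a minimal change along these lines (the stddev of 0.1 is an arbitrary choice for illustration):

# Draw initial weights from a truncated normal distribution instead of zeros
W = tf.Variable(tf.truncated_normal([784, 10], stddev=0.1))
b = tf.Variable(tf.zeros([10]))  # the bias can stay zero-initialized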

Here is a similar question.
