TensorFlow: incorrect computations



I am new to TensorFlow and am trying to understand how the computation graph works. I am working through the very basic linear regression example from the TensorFlow website. I have the following code:

import numpy as np
import tensorflow as tf

def manual_loss(_w, _b, _x, _y):
    _loss = 0.0
    n = len(_x)
    for j in range(n):
       _loss += (_w * _x[j] + _b - _y[j]) ** 2
    return _loss

def manual_grads(_w, _b, _x, _y):
     n = len(_x)
     g_w = 0.0
     g_b = 0.0
     for j in range(n):
         g_w += 2.0 * (_w * _x[j] + _b - _y[j]) * _x[j]
         g_b += 2.0 * (_w * _x[j] + _b - _y[j])
     return g_w, g_b

# Model parameters
W = tf.Variable([0.3], dtype=tf.float32)
b = tf.Variable([-0.3], dtype=tf.float32)
_W = 0.3
_b = -0.3
# Model input and output
x = tf.placeholder(tf.float32)
linear_model = W * x + b
y = tf.placeholder(tf.float32)
# loss
loss = tf.reduce_sum(tf.square(linear_model - y))  # sum of the squares
grads = tf.gradients(loss, [W, b])
# training data
x_train = [1, 2, 3, 4]
y_train = [0, -1, -2, -3]
# training loop
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)  
lr = 0.001
for i in range(1000):
    results = sess.run([loss, W, b, grads], {x: x_train, y: y_train})
    loss_value = results[0]
    W_value = results[1]
    b_value = results[2]
    grad_W = results[3][0]
    grad_b = results[3][1]
    manual_loss_value = manual_loss(_w=_W, _b=_b, _x=x_train, _y=y_train)
    manual_grad_W, manual_grad_b = manual_grads(_w=_W, _b=_b, _x=x_train, _y=y_train)
    new_W_value = W_value - lr * grad_W
    new_b_value = b_value - lr * grad_b
    W = tf.assign(W, value=new_W_value)
    b = tf.assign(b, value=new_b_value)
    print("***********************")
    print("loss={0}".format(loss_value))
    print("manual_loss_value={0}".format(manual_loss_value))
    print("W={0}".format(W_value))
    print("b={0}".format(b_value))
    print("manual_W={0}".format(_W))
    print("manual_b={0}".format(_b))
    print("grad_W={0}".format(grad_W))
    print("grad_b={0}".format(grad_b))
    print("manual_grad_W={0}".format(manual_grad_W))
    print("manual_grad_b={0}".format(manual_grad_b))
    print("***********************")
    _W -= lr * manual_grad_W
    _b -= lr * manual_grad_b

I am simply trying to apply gradient descent to the simple (W*x + b - y)^2 model. I am deliberately not using TensorFlow's own optimizers, because I want to understand the underlying graph-update mechanism. To check that the system computes the correct gradients, I also implemented my own loss and gradient computations for linear regression. Unfortunately, TensorFlow does not seem to compute the loss function and the gradients correctly. Here is the output I get:

 ***********************
 loss=23.65999984741211
 manual_loss_value=23.659999999999997
 W=[ 0.30000001]
 b=[-0.30000001]
 manual_W=0.3
 manual_b=-0.3
 grad_W=[ 52.]
 grad_b=[ 15.59999943]
 manual_grad_W=52.0
 manual_grad_b=15.599999999999998
 ***********************
 ***********************
 loss=23.65999984741211
 manual_loss_value=20.81095744
 W=[ 0.24800001]
 b=[-0.31560001]
 manual_W=0.248
 manual_b=-0.3156
 grad_W=[ 52.]
 grad_b=[ 15.59999943]
 manual_grad_W=48.568
 manual_grad_b=14.4352
 ***********************

As you can see, in the second iteration TensorFlow computes incorrect loss values and gradients for W and b; they are in fact identical to those of the first iteration. In some trials it only starts to deviate from the expected values at the third or fourth iteration, not always the second. Am I doing something wrong here? Once I have the values of W and b and their gradients, I update them in the training loop with tf.assign(). Is the problem there? Is this the wrong way to update variables in TensorFlow?

It is really discouraging to run into this kind of problem.

I think the problem is with the use of tf.assign. The tf.assign command creates an assign node, which must be run for the assignment to take effect. You should change your code to something like:

assign_W_placeholder = tf.placeholder(tf.float32)
assign_b_placeholder = tf.placeholder(tf.float32)
assign_W_node = tf.assign(W, assign_W_placeholder)
assign_b_node = tf.assign(b, assign_b_placeholder)

Then, inside the for loop, add something like:

sess.run(assign_W_node, feed_dict={assign_W_placeholder: new_W_value})
sess.run(assign_b_node, feed_dict={assign_b_placeholder: new_b_value})

After that, the TensorFlow and the manual computations give the same results.
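To see this behaviour in isolation, here is a minimal standalone sketch (my addition, assuming TensorFlow 1.x as in the question) showing that tf.assign merely builds an operation, and the variable only changes once that operation is run:

import tensorflow as tf

v = tf.Variable(1.0)
assign_node = tf.assign(v, 2.0)  # builds an assign op; v is not changed yet

sess = tf.Session()
sess.run(tf.global_variables_initializer())
print(sess.run(v))     # 1.0 -- the assign op has not been run
sess.run(assign_node)  # running the op performs the assignment
print(sess.run(v))     # 2.0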

The full code:

import numpy as np
import tensorflow as tf

def manual_loss(_w, _b, _x, _y):
    _loss = 0.0
    n = len(_x)
    for j in range(n):
       _loss += (_w * _x[j] + _b - _y[j]) ** 2
    return _loss

def manual_grads(_w, _b, _x, _y):
     n = len(_x)
     g_w = 0.0
     g_b = 0.0
     for j in range(n):
         g_w += 2.0 * (_w * _x[j] + _b - _y[j]) * _x[j]
         g_b += 2.0 * (_w * _x[j] + _b - _y[j])
     return g_w, g_b

# Model parameters
W = tf.Variable([0.3], dtype=tf.float32)
b = tf.Variable([-0.3], dtype=tf.float32)
_W = 0.3
_b = -0.3
# Model input and output
x = tf.placeholder(tf.float32)
linear_model = W * x + b
y = tf.placeholder(tf.float32)
assign_W_placeholder = tf.placeholder(tf.float32)
assign_b_placeholder = tf.placeholder(tf.float32)
assign_W_node = tf.assign(W, assign_W_placeholder)
assign_b_node = tf.assign(b, assign_b_placeholder)
# loss
loss = tf.reduce_sum(tf.square(linear_model - y))  # sum of the squares
grads = tf.gradients(loss, [W, b])
# training data
x_train = [1, 2, 3, 4]
y_train = [0, -1, -2, -3]
# training loop
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)  
lr = 0.001
for i in range(1000):
    results = sess.run([loss, W, b, grads], {x: x_train, y: y_train})
    loss_value = results[0]
    W_value = results[1]
    b_value = results[2]
    grad_W = results[3][0]
    grad_b = results[3][1]
    manual_loss_value = manual_loss(_w=_W, _b=_b, _x=x_train, _y=y_train)
    manual_grad_W, manual_grad_b = manual_grads(_w=_W, _b=_b, _x=x_train, _y=y_train)
    new_W_value = W_value - lr * grad_W
    new_b_value = b_value - lr * grad_b
    sess.run([assign_W_node, assign_b_node], 
             feed_dict={assign_W_placeholder: new_W_value, assign_b_placeholder: new_b_value})
    print("***********************")
    print("loss={0}".format(loss_value))
    print("manual_loss_value={0}".format(manual_loss_value))
    print("W={0}".format(W_value))
    print("b={0}".format(b_value))
    print("manual_W={0}".format(_W))
    print("manual_b={0}".format(_b))
    print("grad_W={0}".format(grad_W))
    print("grad_b={0}".format(grad_b))
    print("manual_grad_W={0}".format(manual_grad_W))
    print("manual_grad_b={0}".format(manual_grad_b))
    print("***********************")
    _W -= lr * manual_grad_W
    _b -= lr * manual_grad_b
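As a side note, the update can also be expressed entirely in the graph, built once before the loop, instead of feeding the new values back in through placeholders on every iteration. A minimal sketch of that variant (my addition, using tf.assign_sub; not part of the code above), reusing W, b, loss, grads, lr and sess from the full code:

# built once, before the training loop
train_step = [tf.assign_sub(W, lr * grads[0]),
              tf.assign_sub(b, lr * grads[1])]

# the loop body then shrinks to a single call
for i in range(1000):
    loss_value, _ = sess.run([loss, train_step], {x: x_train, y: y_train})

This roughly mirrors what tf.train.GradientDescentOptimizer(lr).minimize(loss) builds under the hood.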

I think you have a numerical precision problem. NumPy uses double-precision floats (64-bit) by default, but you declared your tensors as tf.float32. Try changing them to tf.float64.
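To see the size of the effect, here is a small sketch (my addition) that repeats the manual grad_b accumulation from the question at both precisions:

import numpy as np

def grad_b(dtype):
    # same accumulation order as manual_grads above, with an explicit dtype
    w, b = dtype(0.3), dtype(-0.3)
    g = dtype(0.0)
    for xj, yj in zip([1, 2, 3, 4], [0, -1, -2, -3]):
        g += dtype(2.0) * (w * dtype(xj) + b - dtype(yj))
    return g

print(grad_b(np.float32))  # ~15.599999, float32 rounding is visible
print(grad_b(np.float64))  # 15.599999999999998, the manual value above

The manual functions in the question run in Python floats (64-bit) while the graph runs in float32, so the low digits disagree from the very first step.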

Edit: I think the remaining difference is due to the power operation in the loss function. Try replacing it with an explicit multiplication, like this:

_loss += (_w * _x[j] + _b - _y[j]) * (_w * _x[j] + _b - _y[j])

import numpy as np
import tensorflow as tf

def manual_loss(_w, _b, _x, _y):
    _loss = 0.0
    n = len(_x)
    for j in range(n):
       diff = (_w * _x[j] + _b - _y[j])
       _loss += diff * diff
    return _loss

def manual_grads(_w, _b, _x, _y):
     n = len(_x)
     g_w = 0.0
     g_b = 0.0
     for j in range(n):
         g_w += 2.0 * (_w * _x[j] + _b - _y[j]) * _x[j]
         g_b += 2.0 * (_w * _x[j] + _b - _y[j])
     return g_w, g_b

# Model parameters
W = tf.Variable([0.3], dtype=tf.float64)
b = tf.Variable([-0.3], dtype=tf.float64)
_W = 0.3
_b = -0.3
# Model input and output
x = tf.placeholder(tf.float64)
linear_model = W * x + b
y = tf.placeholder(tf.float64)
# loss
loss = tf.reduce_sum(tf.square(linear_model - y))  # sum of the squares
grads = tf.gradients(loss, [W, b])
# training data
x_train = [1, 2, 3, 4]
y_train = [0, -1, -2, -3]
# training loop
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)  
lr = 0.001
for i in range(10):
    with tf.device('cpu:0'):
        results = sess.run([loss, grads], {x: x_train, y: y_train})
    loss_value = results[0]
    grad_W = results[1][0]
    grad_b = results[1][1]
    manual_loss_value = manual_loss(_w=_W, _b=_b, _x=x_train, _y=y_train)
    manual_grad_W, manual_grad_b = manual_grads(_w=_W, _b=_b, _x=x_train, _y=y_train)
    new_W_value = (W - lr * grad_W).eval(session = sess)
    new_b_value = (b - lr * grad_b).eval(session = sess)
    tf.assign(W, value=new_W_value).eval(session = sess)
    tf.assign(b, value=new_b_value).eval(session = sess)
    print("***********************")
    print("loss={0}".format(loss_value))
    print("manual_loss_value={0}".format(manual_loss_value))
    print("W={0}".format(W.eval(session = sess)))
    print("b={0}".format(b.eval(session = sess)))
    print("manual_W={0}".format(_W))
    print("manual_b={0}".format(_b))
    print("grad_W={0}".format(grad_W))
    print("grad_b={0}".format(grad_b))
    print("manual_grad_W={0}".format(manual_grad_W))
    print("manual_grad_b={0}".format(manual_grad_b))
    print("***********************")
    _W -= lr * manual_grad_W
    _b -= lr * manual_grad_b
