`tape` is required when a `Tensor` loss is passed

A few questions about TensorFlow.

import numpy as np
import tensorflow as tf
from tensorflow import keras
x_train = [1,2,3]
y_train = [1,2,3]
W = tf.Variable(tf.random.normal([1]), name = 'weight')
b = tf.Variable(tf.random.normal([1]), name = 'bias')
hypothesis = W*x_train+b
optimizer = tf.optimizers.SGD (learning_rate=0.01)
train = tf.keras.optimizers.Adam().minimize(cost, var_list=[W, b])

When I run the last line of code, I get the following error.

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-52-cd6e22f66d09> in <module>()
----> 1 train = tf.keras.optimizers.Adam().minimize(cost, var_list=[W, b])
1 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py in _compute_gradients(self, loss, var_list, grad_loss, tape)
530     # TODO(josh11b): Test that we handle weight decay in a reasonable way.
531     if not callable(loss) and tape is None:
--> 532       raise ValueError("`tape` is required when a `Tensor` loss is passed.")
533     tape = tape if tape is not None else backprop.GradientTape()
534 
ValueError: `tape` is required when a `Tensor` loss is passed.

I know this is related to TensorFlow version 2, but I don't want to fall back to version 1.

I'd like a TensorFlow v2 solution. Thanks.

Since you did not provide a `cost` function, I added one. In TF2, `Optimizer.minimize` requires either a callable loss or an explicit `tape`; passing a plain `Tensor` as the loss raises exactly the error you saw, so wrapping the cost computation in a function fixes it. Here is the code:

import numpy as np
import tensorflow as tf
from tensorflow import keras

x_train = [1, 2, 3]
y_train = [1, 2, 3]
W = tf.Variable(tf.random.normal([1]), name='weight')
b = tf.Variable(tf.random.normal([1]), name='bias')
hypothesis = W * x_train + b

@tf.function
def cost():
    y_model = W * x_train + b
    error = tf.reduce_mean(tf.square(y_train - y_model))
    return error

optimizer = tf.optimizers.SGD(learning_rate=0.01)
train = tf.keras.optimizers.Adam().minimize(cost, var_list=[W, b])
tf.print(W)
tf.print(b)
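Alternatively, if you prefer to keep the loss as a `Tensor`, you can compute it inside a `tf.GradientTape` and apply the gradients yourself, which is the idiomatic TF2 training loop. A minimal sketch (the step count and learning rate are my own choices, not from the question):

```python
import tensorflow as tf

x_train = tf.constant([1.0, 2.0, 3.0])
y_train = tf.constant([1.0, 2.0, 3.0])

W = tf.Variable(tf.random.normal([1]), name='weight')
b = tf.Variable(tf.random.normal([1]), name='bias')

optimizer = tf.optimizers.SGD(learning_rate=0.01)

for step in range(2000):
    # Record the forward pass so gradients can be computed from the Tensor loss.
    with tf.GradientTape() as tape:
        hypothesis = W * x_train + b
        cost = tf.reduce_mean(tf.square(y_train - hypothesis))
    # Differentiate the recorded loss w.r.t. the variables and take an SGD step.
    grads = tape.gradient(cost, [W, b])
    optimizer.apply_gradients(zip(grads, [W, b]))

tf.print(W)  # should approach [1]
tf.print(b)  # should approach [0]
```

This sidesteps the error entirely: `minimize` is never called with a `Tensor`, because the tape you would otherwise have to pass is created explicitly.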
