TensorFlow/keras "An op outside of the function building code is being passed a 'Graph' Tensor"



I'm new to TensorFlow/Keras, and I've been following the book *Hands-On Machine Learning with Scikit-Learn and TensorFlow*. Chapter 12 covers customizing TensorFlow, and in the accompanying notebook (here) I found the following custom model:

class ReconstructingRegressor(keras.models.Model):
    def __init__(self, output_dim, **kwargs):
        super().__init__(**kwargs)
        self.hidden = [keras.layers.Dense(30, activation="selu",
                                          kernel_initializer="lecun_normal")
                       for _ in range(5)]
        self.out = keras.layers.Dense(output_dim)

    def build(self, batch_input_shape):
        n_inputs = batch_input_shape[-1]
        self.reconstruct = keras.layers.Dense(n_inputs)
        super().build(batch_input_shape)

    def call(self, inputs, training=None):
        Z = inputs
        for layer in self.hidden:
            Z = layer(Z)
        reconstruction = self.reconstruct(Z)
        recon_loss = tf.reduce_mean(tf.square(reconstruction - inputs))
        self.add_loss(0.05 * recon_loss)
        return self.out(Z)

When I train with this model, I get the following error:

TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
@tf.function
def has_init_scope():
    my_constant = tf.constant(1.)
    with tf.init_scope():
        added = my_constant * 2
The graph tensor has name: mul:0

The problem is the line self.add_loss(0.05 * recon_loss); after commenting it out, everything works fine. Presumably recon_loss is the "Graph" tensor and self.add_loss() is the op outside of the function building code, but if that applies to add_loss(), I don't see how one is supposed to add losses from within call().

Full disclosure: I'm using TensorFlow 2.3, while the book was written with 2.1 in mind, so I'm not following its instructions exactly. That said, I'm genuinely curious how to fix this, and at my current level of knowledge I feel basically stuck. It seems like this should work; otherwise, how would you ever add to the loss function? Any help would be greatly appreciated.

Full example:

import tensorflow as tf
from tensorflow import keras
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

housing = fetch_california_housing()
X_train_full, X_test, y_train_full, y_test = train_test_split(
    housing.data, housing.target.reshape(-1, 1), random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(
    X_train_full, y_train_full, random_state=42)

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_valid_scaled = scaler.transform(X_valid)
X_test_scaled = scaler.transform(X_test)

class ReconstructingRegressor(keras.models.Model):
    def __init__(self, output_dim, **kwargs):
        super().__init__(**kwargs)
        self.hidden = [keras.layers.Dense(30, activation="selu",
                                          kernel_initializer="lecun_normal")
                       for _ in range(5)]
        self.out = keras.layers.Dense(output_dim)

    def build(self, batch_input_shape):
        n_inputs = batch_input_shape[-1]
        self.reconstruct = keras.layers.Dense(n_inputs)
        super().build(batch_input_shape)

    def call(self, inputs, training=None):
        Z = inputs
        for layer in self.hidden:
            Z = layer(Z)
        reconstruction = self.reconstruct(Z)
        recon_loss = tf.reduce_mean(tf.square(reconstruction - inputs))
        self.add_loss(0.05 * recon_loss)
        return self.out(Z)

model = ReconstructingRegressor(1, dynamic=True)
model.compile(loss="mse", optimizer="nadam")
history = model.fit(X_train_scaled, y_train, epochs=2)

Even though I suspect it's too late to answer this question... let me show you what I tried.

First, I removed the custom model's build method:

def build(self, batch_input_shape):
    n_inputs = batch_input_shape[-1]
    self.reconstruct = keras.layers.Dense(n_inputs)
    super().build(batch_input_shape)

Computing the custom loss then works either when compiling with run_eagerly=True OR when using a custom layer. For example, here is a custom layer:

class ReconLoss(keras.layers.Layer):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    def call(self, inputs):
        x, reconstruction = inputs
        recon_loss = tf.reduce_mean(tf.square(reconstruction - x))
        self.add_loss(0.05 * recon_loss)
        return x  # pass-through; the layer exists only to register the loss

Then assign an instance of it in the custom model's __init__ and insert self.ReconLoss([x, reconstruction]) into the custom model's call method.
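Putting those pieces together, the modified model might be sketched as follows. Since build() was removed, I'm assuming here that the number of input features is passed to the constructor so self.reconstruct can be created in __init__; the attribute name self.ReconLoss simply matches the description above.

```python
import tensorflow as tf
from tensorflow import keras

class ReconLoss(keras.layers.Layer):
    def call(self, inputs):
        x, reconstruction = inputs
        # Register the auxiliary reconstruction loss on this layer.
        self.add_loss(0.05 * tf.reduce_mean(tf.square(reconstruction - x)))
        return x  # pass-through; only the side effect matters

class ReconstructingRegressor(keras.models.Model):
    def __init__(self, output_dim, n_inputs, **kwargs):  # n_inputs replaces build()
        super().__init__(**kwargs)
        self.hidden = [keras.layers.Dense(30, activation="selu",
                                          kernel_initializer="lecun_normal")
                       for _ in range(5)]
        self.reconstruct = keras.layers.Dense(n_inputs)
        self.ReconLoss = ReconLoss()
        self.out = keras.layers.Dense(output_dim)

    def call(self, inputs, training=None):
        Z = inputs
        for layer in self.hidden:
            Z = layer(Z)
        reconstruction = self.reconstruct(Z)
        self.ReconLoss([inputs, reconstruction])  # adds the loss as a side effect
        return self.out(Z)
```

Because the loss is now added by a layer during the forward pass, it ends up in model.losses, where Keras picks it up during training.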

Edited Colab code: https://colab.research.google.com/drive/1Hwi6auz2meKvD0ogdDSywb2E4_J1F9S_?usp=sharing

I still don't understand why the error occurs, but this works for me.
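For completeness, the run_eagerly route mentioned above can be sketched with a tiny self-contained stand-in (TinyRecon is a made-up name; the point is only that the original add_loss-in-call pattern trains once compile() is given run_eagerly=True, at the cost of losing graph-mode speed):

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Minimal stand-in model using the same add_loss-in-call pattern as the question.
class TinyRecon(keras.models.Model):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.hidden = keras.layers.Dense(4, activation="relu")
        self.reconstruct = keras.layers.Dense(3)
        self.out = keras.layers.Dense(1)

    def call(self, inputs, training=None):
        Z = self.hidden(inputs)
        reconstruction = self.reconstruct(Z)
        # Same auxiliary reconstruction loss as in the question's model.
        self.add_loss(0.05 * tf.reduce_mean(tf.square(reconstruction - inputs)))
        return self.out(Z)

model = TinyRecon()
model.compile(loss="mse", optimizer="nadam", run_eagerly=True)  # no graph tracing
X = np.random.rand(16, 3).astype("float32")
y = np.random.rand(16, 1).astype("float32")
history = model.fit(X, y, epochs=1, verbose=0)
```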


References:
  • Tensorflow 2.1.0 - An op outside of the function building code is being passed a "Graph" tensor
  • Deep learning Python code no longer works: "TypeError: An op outside of the function building code is being passed a 'Graph' tensor"
  • https://github.com/tensorflow/addons/issues/711
