Lambda layer: An op outside of the function building code is being passed a "Graph" tensor

I have a problem similar to this post: Tensorflow 2.1.0 - An op outside of the function building code is being passed a "Graph" tensor

My model consists of a TimeDistributed wrapper around a CNN, whose input is first normalized by a lambda function:

# ENCODER
# video input
frames = 10
video_input = tf.keras.Input(shape=(frames, 256, 256, 3))
# cnn 1 frame input
frame_in = Input(shape=(256,256,3), name='frame_input')
#normalisation layer
cnn = Lambda(lambda inputs: frame_in / 255)(frame_in)
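# NOTE: the lambda above ignores its `inputs` argument and instead closes over
# the outer symbolic tensor frame_in; this closure is what leaks the "Graph" tensor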
# add cnn layers + activations
cnn = Conv2D(filters=num_filters[0], kernel_size=(4,4), strides=(4,4), padding='same', activation='relu', data_format='channels_last', name='Layer_1')(cnn)
cnn = Conv2D(filters=num_filters[1], kernel_size=(4,4), strides=(4,4), padding='same', activation='relu', name='Layer_2')(cnn)
cnn = Conv2D(filters=num_filters[2], kernel_size=(4,4), strides=(2,2), padding='same', activation='relu', name='Layer_3')(cnn)
cnn = Conv2D(filters=num_filters[3], kernel_size=(4,4), strides=(2,2), padding='same', activation='relu', name='Layer_4')(cnn)
cnn = Conv2D(filters=num_filters[4], kernel_size=(4,4), strides=(1,1), padding='valid', activation='relu', name='Layer_5')(cnn)
# flatten
cnn = Flatten()(cnn)
#time distributed
time_dist_cnn = Model(inputs=frame_in, outputs=cnn, name='Time_Distributed_CNN')
time_dist_0 = TimeDistributed(time_dist_cnn)(video_input)

The error is the same:

TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
@tf.function
def has_init_scope():
  my_constant = tf.constant(1.)
  with tf.init_scope():
    added = my_constant * 2
The graph tensor has name: frame_input:0

I understand the problem is nearly identical and stems from the use of the lambda expression, but I am not sure how to fix it in this case.

When running in eager mode, TensorFlow ops check that their inputs are of type tensorflow.python.framework.ops.EagerTensor, whereas Keras ops are implemented as a DAG of symbolic tensors. So if an op running in eager mode receives an input of type tensorflow.python.framework.ops.Tensor (a symbolic graph tensor), this error is thrown.
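
As a minimal sketch of that distinction (assuming a TF 2.x session with eager execution enabled), an eagerly created constant is an EagerTensor, while a Keras Input produces a symbolic graph tensor:

import tensorflow as tf

eager_t = tf.constant(1.0)               # created eagerly -> EagerTensor
symbolic_t = tf.keras.Input(shape=(4,))  # part of the Keras DAG -> symbolic "Graph" tensor

print(type(eager_t))     # tensorflow.python.framework.ops.EagerTensor
print(type(symbolic_t))  # a symbolic tensor type, not an EagerTensor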

You can explicitly tell TensorFlow to run Keras in eager mode with tf.config.experimental_run_functions_eagerly(True), which changes the input type to EagerTensor. Adding this statement should fix your problem.

Alternatively, disabling eager execution by adding tf.compat.v1.disable_eager_execution() at the top of the program will also make it run successfully.
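
A minimal sketch of where these two switches go; they are alternatives, not meant to be combined (note that later TF releases rename the experimental function to tf.config.run_functions_eagerly):

import tensorflow as tf

# Option A: force tf.functions (including Keras training loops) to run eagerly,
# so inputs become EagerTensors
tf.config.experimental_run_functions_eagerly(True)

# Option B (alternative): disable eager execution entirely and fall back to
# TF1-style graph mode
# tf.compat.v1.disable_eager_execution()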

For example, the following program throws the error you are facing.

Code to reproduce the error:

import numpy as np
import tensorflow as tf
print(tf.__version__)
from tensorflow.keras import layers, losses, models
def get_loss_fcn(w):
    def loss_fcn(y_true, y_pred):
        loss = w * losses.mse(y_true, y_pred)
        return loss
    return loss_fcn
data_x = np.random.rand(5, 4, 1)
data_w = np.random.rand(5, 4)
data_y = np.random.rand(5, 4, 1)
x = layers.Input([4, 1])
w = layers.Input([4])
y = layers.Activation('tanh')(x)
model = models.Model(inputs=[x, w], outputs=y)
loss = get_loss_fcn(model.input[1])
model.compile(loss=loss)
model.fit((data_x, data_w), data_y)

Output:

2.2.0
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
59     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 60                                         inputs, attrs, num_outputs)
61   except core._NotOkStatusException as e:
TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
@tf.function
def has_init_scope():
  my_constant = tf.constant(1.)
  with tf.init_scope():
    added = my_constant * 2
The graph tensor has name: input_8:0
During handling of the above exception, another exception occurred:
_SymbolicException                        Traceback (most recent call last)
8 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
72       raise core._SymbolicException(
73           "Inputs to eager execution function cannot be Keras symbolic "
---> 74           "tensors, but found {}".format(keras_symbolic_tensors))
75     raise e
76   # pylint: enable=protected-access
_SymbolicException: Inputs to eager execution function cannot be Keras symbolic tensors, but found [<tf.Tensor 'input_8:0' shape=(None, 4) dtype=float32>]

Solution 1 - Adding tf.config.experimental_run_functions_eagerly(True) at the top of the program makes it run successfully. Alternatively, adding tf.compat.v1.disable_eager_execution() at the top of the program to disable eager execution also makes it run successfully.

Fixed code:

import numpy as np
import tensorflow as tf
print(tf.__version__)
from tensorflow.keras import layers, losses, models
tf.config.experimental_run_functions_eagerly(True)
def get_loss_fcn(w):
    def loss_fcn(y_true, y_pred):
        loss = w * losses.mse(y_true, y_pred)
        return loss
    return loss_fcn
data_x = np.random.rand(5, 4, 1)
data_w = np.random.rand(5, 4)
data_y = np.random.rand(5, 4, 1)
x = layers.Input([4, 1])
w = layers.Input([4])
y = layers.Activation('tanh')(x)
model = models.Model(inputs=[x, w], outputs=y)
loss = get_loss_fcn(model.input[1])
model.compile(loss=loss)
model.fit((data_x, data_w), data_y)
print('Done.')

Output:

2.2.0
1/1 [==============================] - 0s 1ms/step - loss: 0.0000e+00
Done.

Solution 2 - Alternatively, try modifying the Lambda layer as follows and see if that works:

1) Create a normalize_input function as follows:

def normalize_input(inputs):
    norm_input = inputs / 255
    return norm_input

2) Modify the Lambda layer as follows:

cnn = tf.keras.layers.Lambda(function=normalize_input)(frame_in)
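
Applied to the encoder from the question, the fix would look roughly like this (a sketch only: the num_filters values are placeholders, since the question does not give them, and the remaining Conv2D layers are elided):

import tensorflow as tf
from tensorflow.keras.layers import Conv2D, Flatten, Input, Lambda, TimeDistributed
from tensorflow.keras.models import Model

def normalize_input(inputs):
    # operates on the layer's own input, so no outer graph tensor is captured
    return inputs / 255

num_filters = [32, 64, 128, 256, 512]  # placeholder values, not given in the question

frame_in = Input(shape=(256, 256, 3), name='frame_input')
cnn = Lambda(normalize_input, name='normalisation')(frame_in)
cnn = Conv2D(filters=num_filters[0], kernel_size=(4,4), strides=(4,4), padding='same', activation='relu', name='Layer_1')(cnn)
# ... remaining Conv2D layers as in the question ...
cnn = Flatten()(cnn)
time_dist_cnn = Model(inputs=frame_in, outputs=cnn, name='Time_Distributed_CNN')

video_input = tf.keras.Input(shape=(10, 256, 256, 3))
time_dist_0 = TimeDistributed(time_dist_cnn)(video_input)

The only substantive change from the question's code is that the normalization function operates on its own argument rather than closing over the outer frame_in tensor.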

Hope this answers your question. Happy learning.
