I have a working model, built with:
model = tf.keras.Model(inputs=input_layers, outputs=outputs)
If I try to add a simple constant to the outputs, I get an error message. Ex:
outputs = outputs + [tf.constant(['label1', 'label2'], dtype=tf.string)]
model = tf.keras.Model(inputs=input_layers, outputs=outputs)
Error message: AttributeError: Tensor.op is meaningless when eager execution is enabled.
Is there a way to add it to the model, even after training or at save() time?
The idea is to have the constant as an output during serving.
Full network example that produces the error:
import tensorflow as tf
import tensorflow.keras as keras
input = keras.layers.Input(shape=(2,))
hidden = keras.layers.Dense(10)(input)
output = keras.layers.Dense(3, activation='sigmoid')(hidden)
model = keras.models.Model(inputs=input, outputs=[output, tf.constant(['out1','out2','out3'], dtype=tf.string)])
Error:
in <module>
5 hidden = keras.layers.Dense(10)(input)
6 output = keras.layers.Dense(3, activation='sigmoid')(input)
----> 7 model = keras.models.Model(inputs=input, outputs=[output, tf.constant(['out1','out2','out3'], dtype=tf.string)])
/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py in __init__(self, *args, **kwargs)
144
145 def __init__(self, *args, **kwargs):
--> 146 super(Model, self).__init__(*args, **kwargs)
147 _keras_api_gauge.get_cell('model').set(True)
148 # initializing _distribution_strategy here since it is possible to call
/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py in __init__(self, *args, **kwargs)
165 'inputs' in kwargs and 'outputs' in kwargs):
166 # Graph network
--> 167 self._init_graph_network(*args, **kwargs)
168 else:
169 # Subclassed network
/lib/python3.6/site-packages/tensorflow_core/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
455 self._self_setattr_tracking = False # pylint: disable=protected-access
456 try:
--> 457 result = method(self, *args, **kwargs)
458 finally:
459 self._self_setattr_tracking = previous_value # pylint: disable=protected-access
/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py in _init_graph_network(self, inputs, outputs, name, **kwargs)
268
269 if any(not hasattr(tensor, '_keras_history') for tensor in self.outputs):
--> 270 base_layer_utils.create_keras_history(self._nested_outputs)
271
272 self._base_init(name=name, **kwargs)
/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer_utils.py in create_keras_history(tensors)
182 keras_tensors: The Tensors found that came from a Keras Layer.
183 """
--> 184 _, created_layers = _create_keras_history_helper(tensors, set(), [])
185 return created_layers
186
/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer_utils.py in _create_keras_history_helper(tensors, processed_ops, created_layers)
208 if getattr(tensor, '_keras_history', None) is not None:
209 continue
--> 210 op = tensor.op # The Op that created this Tensor.
211 if op not in processed_ops:
212 # Recursively set `_keras_history`.
/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py in op(self)
1078 def op(self):
1079 raise AttributeError(
-> 1080 "Tensor.op is meaningless when eager execution is enabled.")
1081
1082 @property
AttributeError: Tensor.op is meaningless when eager execution is enabled.
Using Python 3.6 and TensorFlow 2.0.
Put the constant in a Lambda layer. Keras does some extra bookkeeping, so you need more than just tf operations for this to work. Using a Lambda layer will do that for you.
Edit to give an example of how this works: your last example translates into the following code:
import tensorflow as tf
import tensorflow.keras as keras
inputs = keras.layers.Input(shape=(2,))
hidden = keras.layers.Dense(10)(inputs)
output1 = keras.layers.Dense(3, activation='sigmoid')(hidden)
@tf.function
def const(tensor):
    # Broadcast the string constant along the batch dimension of the input tensor.
    batch_size = tf.shape(tensor)[0]
    constant = tf.constant(['out1', 'out2', 'out3'], dtype=tf.string)
    constant = tf.expand_dims(constant, axis=0)
    return tf.broadcast_to(constant, shape=(batch_size, 3))
output2 = keras.layers.Lambda(const)(inputs)
model = keras.models.Model(inputs=inputs, outputs=[output1, output2])
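As a quick sanity check (a hypothetical snippet, assuming TF 2.x eager execution; the zero-valued inputs are just placeholders), calling the model shows the string constant broadcast across the batch:

preds, labels = model(tf.zeros((4, 2)))
print(preds.shape)     # (4, 3)
print(labels.numpy())  # 4 rows of [b'out1' b'out2' b'out3']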
Edit: this reminds me of a project a while back where I had to use a lot of constants in a Keras model. Back then I wrote a layer for it:
import copy

import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras import backend as K

class ConstantOnBatch(keras.layers.Layer):
    def __init__(self, constant, *args, **kwargs):
        # Keep the raw value around so the layer can be serialized in get_config().
        self._initial_constant = copy.deepcopy(constant)
        self.constant = K.constant(constant)
        self.out_shape = self.constant.shape.as_list()
        # Add a leading axis of size 1 so the constant can be broadcast over the batch.
        self.constant = tf.reshape(self.constant, [1] + self.out_shape)
        super().__init__(*args, **kwargs)

    def build(self, input_shape):
        super().build(input_shape)

    def call(self, inputs):
        # Broadcast the constant to match the batch size of the incoming tensor.
        batch_size = tf.shape(inputs)[0]
        output_shape = [batch_size] + self.out_shape
        return tf.broadcast_to(self.constant, output_shape)

    def compute_output_shape(self, input_shape):
        input_shape = input_shape.as_list()
        return [input_shape[0]] + self.out_shape

    def get_config(self):
        base_config = super().get_config()
        base_config['constant'] = self._initial_constant
        return base_config

    @classmethod
    def from_config(cls, config):
        return cls(**config)
It probably needs some updating for tf2, and the code could certainly be written in a nicer way, but if you need a lot of constants it could provide the basis for a more elegant solution than using loads of Lambda layers.
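For illustration, a minimal usage sketch (hypothetical, assuming the layer above works as intended after any tf2 updates). As written, the layer builds its constant with K.constant, which may default to a float dtype, so this sketch uses numeric values; the string case from the question would need the dtype adjusted:

inputs = keras.layers.Input(shape=(2,))
hidden = keras.layers.Dense(10)(inputs)
output1 = keras.layers.Dense(3, activation='sigmoid')(hidden)
# ConstantOnBatch only uses the batch size of `inputs`, not its values.
output2 = ConstantOnBatch([0.1, 0.2, 0.3])(inputs)
model = keras.models.Model(inputs=inputs, outputs=[output1, output2])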