"尝试在不构建函数的情况下捕获 EagerTensor"错误:构建联合平均过程时



I am getting the "Attempting to capture an EagerTensor without building a function" error while trying to build my federated averaging process. I have already tried all the remedies for v1/v2 compatibility given in other similar Stack Overflow questions, i.e. using tf.compat.v1.enable_eager_execution(), tf.compat.v1.disable_v2_behavior(), etc. However, nothing has worked. An excerpt of my code is given below. My full code in a Python notebook is available here: https://gist.github.com/aksingh2411/60796ee58c88e0c3f074c8909b17b5a1.

# Making a TensorFlow model
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_federated as tff
from tensorflow import keras

def create_keras_model():
  return tf.keras.models.Sequential([
      hub.KerasLayer(encoder, input_shape=[], dtype=tf.string, trainable=True),
      keras.layers.Dense(32, activation='relu'),
      keras.layers.Dense(16, activation='relu'),
      keras.layers.Dense(1, activation='sigmoid'),
  ])

def model_fn():
  # We _must_ create a new model here, and _not_ capture it from an external
  # scope. TFF will call this within different graph contexts.
  keras_model = create_keras_model()
  return tff.learning.from_keras_model(
      keras_model,
      input_spec=preprocessed_example_dataset.element_spec,
      loss=tf.keras.losses.BinaryCrossentropy(),
      metrics=[tf.keras.metrics.Accuracy()])

# Building the federated averaging process
iterative_process = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02),
    server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0))
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-23-68fa27e65b7e> in <module>()
3     model_fn,
4     client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02),
-->5     server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/impl/api.py in 
wrapper(*args, **kwargs)
263       except Exception as e:  # pylint:disable=broad-except
264         if hasattr(e, 'ag_error_metadata'):
--> 265           raise e.ag_error_metadata.to_exception(e)
266         else:
267           raise
RuntimeError: in user code:
/usr/local/lib/python3.6/dist-packages/tensorflow_hub/keras_layer.py:222 call  *
result = f()
/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/load.py:486 _call_attribute  **
return instance.__call__(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py:580 __call__
result = self._call(*args, **kwds)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py:618 _call
results = self._stateful_fn(*args, **kwds)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py:2420 __call__
return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py:1665 _filtered_call
self.captured_inputs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py:1760 _call_flat
flat_outputs = forward_function.call(ctx, args_with_tangents)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py:627 call
executor_type=executor_type)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/functional_ops.py:1148 partitioned_call
args = [ops.convert_to_tensor(x) for x in args]
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/functional_ops.py:1148 <listcomp>
args = [ops.convert_to_tensor(x) for x in args]
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py:1307 convert_to_tensor
raise RuntimeError("Attempting to capture an EagerTensor without "
RuntimeError: Attempting to capture an EagerTensor without building a function.

This looks like a tensor is being created outside of model_fn and then captured by it. The comment inside model_fn() is relevant here:

# We _must_ create a new model here, and _not_ capture it from an external scope. TFF 
# will call this within different graph contexts.

TensorFlow does not allow referencing tensors created in a different graph (or tf.function), so everything that model_fn() references must be constructed inside model_fn() itself (or inside create_keras_model(), which it calls).
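As an aside, the capture-vs-construct distinction TFF relies on can be illustrated with plain Python closures, independent of TensorFlow. A factory that closes over an object built in the enclosing scope hands the *same* object to every caller, while a factory that constructs its own object yields a fresh one per call. This is a minimal sketch of the principle only, not TFF code:

```python
# A factory that *captures* an object built in the outer scope: every call
# returns the very same object, analogous to model_fn reusing tensors from
# the context where they were first created.
shared_model = {"weights": [0.0] * 4}

def capturing_fn():
    return shared_model  # closed over from the enclosing scope

# A factory that *constructs* its object on every call: each invocation
# yields a fresh instance, which is what TFF needs from model_fn.
def constructing_fn():
    return {"weights": [0.0] * 4}

assert capturing_fn() is capturing_fn()            # same object every time
assert constructing_fn() is not constructing_fn()  # fresh object per call
```

TFF invokes model_fn in several different graph contexts; only the constructing style guarantees every context gets tensors built inside that context.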

To find where the offending tensor is created, it can be useful to inspect the stack trace. Its first frame seems to point at tensorflow_hub:

/usr/local/lib/python3.6/dist-packages/tensorflow_hub/keras_layer.py:222 call  *
result = f()

The place in the source code that immediately uses TF Hub is the first layer of the tf.keras.Sequential construction:

def create_keras_model():
  return tf.keras.models.Sequential([
      hub.KerasLayer(encoder, input_shape=[], dtype=tf.string, trainable=True),
      …

It seems that this function may be "closing over" or "capturing" the value of encoder, which in turn may hold tensors created in a different context. Can you move the construction of encoder inside create_keras_model()?

Latest update