Do I need to add tf.compat.v1.disable_eager_execution() to export_inference_graph.py?



I found a question about this error (in a different scenario), as well as many GitHub issues and articles, but it always seems to involve people upgrading from TF 1.x to TF 2.x. I'm not doing that.

Here are my versions:

  • tensorflow 2.5.0
  • tensorflow-addons 0.13.0
  • tensorflow-datasets 4.3.0
  • tensorflow-estimator 2.5.0
  • tensorflow-gpu 2.5.0

I'm trying to use TF Object Detection to convert a model trained in TF 2.5 into a tensorflow.js-compatible model via Python, and I asked a question about it. The answer given was to start by running:

python export_inference_graph.py \
--input_type image_tensor \
--pipeline_config_path path/to/ssd_inception_v2.config \
--trained_checkpoint_prefix path/to/model.ckpt \
--output_directory path/to/exported_model_directory

So my command ended up as:

py Tensorflow\models\research\object_detection\export_inference_graph.py
--input_type image_tensor
--pipeline_config_path Tensorflow\workspace\models\my_ssd_mobnet\pipeline.config
--trained_checkpoint_prefix Tensorflow\workspace\pre-trained-models\ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8\checkpoint\ckpt-0.data-00000-of-00001
--output_directory Tensorflow\workspace\models\my_ssd_mobnet\export

Which results in the error:

RuntimeError: tf.placeholder() is not compatible with eager execution

I do see the usual cause of this error in the log, and I know where it comes from:

File "C:\Users\jonat\AppData\Local\Programs\Python\Python38\lib\site-packages\object_detection-0.1-py3.8.egg\object_detection\exporter.py", line 186, in _image_tensor_input_placeholder
input_tensor = tf.placeholder(

But I don't understand what to do about it, since I didn't write any of the TensorFlow modules involved. I'm just trying to do something basic with existing ones, namely convert a tf.train.Checkpoint to a SavedModel.

Usually the answer seems to be to call tf.compat.v1.disable_eager_execution(), but the odd thing is that this isn't my code, and I don't know what I might break in the conversion script by disabling a feature. I don't know the TensorFlow API well enough yet to really understand that script.
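For reference, the commonly suggested workaround is a single call placed before any graph-building code runs. A minimal sketch of what it does (not a claim that it is safe to add to export_inference_graph.py):

```python
import tensorflow as tf

# Switch TF 2.x back to TF 1.x-style graph mode, so that
# placeholder-based code (like the exporter's tf.placeholder call)
# no longer raises a RuntimeError under eager execution.
tf.compat.v1.disable_eager_execution()

print(tf.executing_eagerly())  # False once eager execution is disabled
```

The call is global and irreversible for the process, which is exactly why it is hard to predict what else it changes in a script you didn't write.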

Full log and traceback:

2021-07-15 09:40:24.482953: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudart64_110.dll
2021-07-15 09:40:26.835151: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library nvcuda.dll
2021-07-15 09:40:26.856379: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: NVIDIA GeForce RTX 2080 SUPER computeCapability: 7.5
coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s
2021-07-15 09:40:26.856487: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudart64_110.dll
2021-07-15 09:40:26.861810: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublas64_11.dll
2021-07-15 09:40:26.861891: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublasLt64_11.dll
2021-07-15 09:40:26.864685: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cufft64_10.dll
2021-07-15 09:40:26.865561: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library curand64_10.dll
2021-07-15 09:40:26.872246: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cusolver64_11.dll
2021-07-15 09:40:26.874465: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cusparse64_11.dll
2021-07-15 09:40:26.874979: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudnn64_8.dll
2021-07-15 09:40:26.875238: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2021-07-15 09:40:26.876220: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-07-15 09:40:26.877353: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: NVIDIA GeForce RTX 2080 SUPER computeCapability: 7.5
coreClock: 1.845GHz coreCount: 48 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 462.00GiB/s
2021-07-15 09:40:26.877556: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2021-07-15 09:40:27.285985: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-07-15 09:40:27.286153: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264]      0
2021-07-15 09:40:27.286917: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0:   N
2021-07-15 09:40:27.287164: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5957 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 SUPER, pci bus id: 0000:01:00.0, compute capability: 7.5)
Traceback (most recent call last):
File "Tensorflow\models\research\object_detection\export_inference_graph.py", line 206, in <module>
tf.app.run()
File "C:\Users\jonat\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\platform\app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "C:\Users\jonat\AppData\Local\Programs\Python\Python38\lib\site-packages\absl\app.py", line 303, in run
_run_main(main, args)
File "C:\Users\jonat\AppData\Local\Programs\Python\Python38\lib\site-packages\absl\app.py", line 251, in _run_main
sys.exit(main(argv))
File "Tensorflow\models\research\object_detection\export_inference_graph.py", line 194, in main
exporter.export_inference_graph(
File "C:\Users\jonat\AppData\Local\Programs\Python\Python38\lib\site-packages\object_detection-0.1-py3.8.egg\object_detection\exporter.py", line 611, in export_inference_graph
_export_inference_graph(
File "C:\Users\jonat\AppData\Local\Programs\Python\Python38\lib\site-packages\object_detection-0.1-py3.8.egg\object_detection\exporter.py", line 503, in _export_inference_graph
outputs, placeholder_tensor_dict = build_detection_graph(
File "C:\Users\jonat\AppData\Local\Programs\Python\Python38\lib\site-packages\object_detection-0.1-py3.8.egg\object_detection\exporter.py", line 457, in build_detection_graph
placeholder_tensor, input_tensors = input_placeholder_fn_map[input_type](
File "C:\Users\jonat\AppData\Local\Programs\Python\Python38\lib\site-packages\object_detection-0.1-py3.8.egg\object_detection\exporter.py", line 186, in _image_tensor_input_placeholder
input_tensor = tf.placeholder(
File "C:\Users\jonat\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\ops\array_ops.py", line 3268, in placeholder
raise RuntimeError("tf.placeholder() is not compatible with "
RuntimeError: tf.placeholder() is not compatible with eager execution.

What am I doing here that causes this error? Did I install the wrong version of the conversion script? I checked, and I have the latest TensorFlow files from the official repo, which is where export_inference_graph.py is found. Does the conversion script simply not work with TensorFlow 2.x? Do I need to modify it with tf.compat.v1.disable_eager_execution()? And since that disables a feature, could it cause other problems in the script?

Edit:

I know some of the models in Object Detection were built for TF 1.x (the model zoo) and others for 2.x (the model garden). I verified that I have a 2.x model, so that's not the cause.

TensorFlow lets you save models in several different formats (checkpoint or SavedModel). A checkpoint saves only the weights of each layer, so when loading the model you must first define the network architecture and then load the weights. A SavedModel saves the complete model: the architecture, the weights, and the training configuration (including the optimizer state). This link contains more details on the various available formats:

https://www.tensorflow.org/tutorials/keras/save_and_load

In your case, since tfjs needs a SavedModel as input, you can save the TensorFlow model directly in the SavedModel format rather than saving it as a checkpoint first and then trying to convert it to a SavedModel.
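A minimal sketch of the difference, using a stand-in Keras model (the model and the `exported_model` path are hypothetical, not your detection model):

```python
import numpy as np
import tensorflow as tf

# Stand-in model; in practice this would be your trained detection model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, input_shape=(4,)),
])

# Saving to a path with no file extension writes the SavedModel format:
# a directory holding the architecture, weights, and training config together.
model.save("exported_model")

# Reloading needs no separate architecture definition, unlike a checkpoint,
# and the restored model produces the same outputs as the original.
restored = tf.keras.models.load_model("exported_model")
```

A SavedModel directory like this is the kind of input the tensorflowjs converter's tf_saved_model input format expects.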
