TFLiteConverter segmentation fault when running integer quantization



I am using tensorflow==1.15.3 and hit a segmentation fault when attempting int8 post-training quantization. The documentation for the 1.15 version of TFLiteConverter can be found here.

I found a similar issue on GitHub, but their suggested fix of passing --add_postprocessing_op=true did not resolve the segfault.

I debugged it with pdb and found the exact place where it crashes. It never reaches my representative_dataset function. It faults while running CreateWrapperCPPFromBuffer(model_content):

> .../python3.6/site-packages/tensorflow_core/lite/python/optimize/calibrator.py(51)__init__()
-> .CreateWrapperCPPFromBuffer(model_content))
(Pdb) s
Fatal Python error: Segmentation fault
Current thread 0x00007ff40ee9f740 (most recent call first):
File ".../python3.6/site-packages/tensorflow_core/lite/python/optimize/calibrator.py", line 51 in __init__
File ".../python3.6/site-packages/tensorflow_core/lite/python/lite.py", line 236 in _calibrate_quantize_model
File ".../python3.6/site-packages/tensorflow_core/lite/python/lite.py", line 993 in convert
File ".../convert_model_to_tflite_int8.py", line 97 in <module>
File "<string>", line 1 in <module>
File "/usr/lib/python3.6/bdb.py", line 434 in run
File "/usr/lib/python3.6/pdb.py", line 1548 in _runscript
File "/usr/lib/python3.6/pdb.py", line 1667 in main
File "/usr/lib/python3.6/pdb.py", line 1694 in <module>
File "/usr/lib/python3.6/runpy.py", line 85 in _run_code
File "/usr/lib/python3.6/runpy.py", line 193 in _run_module_as_main
[1]    17668 segmentation fault (core dumped)  python -m pdb convert_model_to_tflite_int8.py  --add_postprocessing_op=true
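For anyone reproducing this: the same native-crash traceback can also be obtained without stepping through pdb by enabling Python's built-in faulthandler module (a general debugging aid, not part of my original script):

import faulthandler
faulthandler.enable()  # dump each thread's Python stack if the process receives SIGSEGV

# ... the rest of convert_model_to_tflite_int8.py runs unchanged ...

Equivalently, the script can be launched with python -X faulthandler convert_model_to_tflite_int8.py.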

Here is my conversion code:

import pdb

import numpy as np
import tensorflow as tf  # tensorflow==1.15.3

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file=pb_model_path,
    input_arrays=["device_0/input_node_name:1"],
    output_arrays=["device_0/output_node_name"],
    input_shapes={"device_0/input_node_name:1": [100, 16384]}
)
converter.allow_custom_ops = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

def test():
    # Calibration generator: the converter should call this during quantization.
    pdb.set_trace()
    print(' ! ! ! representative_dataset_gen ! ! ! ')
    zeros = np.zeros(shape=(1, 100, 16384), dtype='int8')
    ds = tf.data.Dataset.from_tensor_slices((zeros)).batch(1)
    for input_value in ds.take(1):
        yield [input_value]

converter.representative_dataset = test

pdb.set_trace()
tflite_model = converter.convert()
tflite_model_size = open(model_name, 'wb').write(tflite_model)
print('TFLite Model is %d bytes' % tflite_model_size)

FWIW, my model converts fine to tf.float16 (but that path does not use a representative_dataset).
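For reference, a minimal sketch of that float16 path, assuming the same frozen graph and tensor names as above (the supported_types line follows the standard 1.15 float16 post-training recipe rather than my exact script):

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file=pb_model_path,
    input_arrays=["device_0/input_node_name:1"],
    output_arrays=["device_0/output_node_name"],
    input_shapes={"device_0/input_node_name:1": [100, 16384]}
)
converter.allow_custom_ops = True
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Store weights as float16; no calibration data (representative_dataset) is required.
converter.target_spec.supported_types = [tf.lite.constants.FLOAT16]
tflite_fp16_model = converter.convert()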

Upgrading my tf version to 2.3 solved the segmentation fault. My model code is not compatible with tf==2.x yet, but luckily the conversion code is independent of it, so the upgrade went smoothly.
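Under tf==2.3, frozen-graph conversion goes through the v1 compatibility API (tf.lite.TFLiteConverter in 2.x no longer exposes from_frozen_graph); a minimal sketch, assuming the same paths, tensor names, and test generator as above:

import tensorflow as tf  # tensorflow==2.3

converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file=pb_model_path,
    input_arrays=["device_0/input_node_name:1"],
    output_arrays=["device_0/output_node_name"],
    input_shapes={"device_0/input_node_name:1": [100, 16384]}
)
converter.allow_custom_ops = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
converter.representative_dataset = test  # same calibration generator as before
tflite_model = converter.convert()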
