Saving a model for TensorFlow Serving while using TensorFlow distributed training



We are planning to implement distributed training in TensorFlow. For this we are using TensorFlow Distributed (https://www.tensorflow.org/deploy/distributed). We were able to implement distributed training using asynchronous "between-graph replication" training. Below is the code snippet.


import tensorflow as tf
from keras import backend as K

# .....
ps_hosts = FLAGS.ps_hosts.split(",")
worker_hosts = FLAGS.worker_hosts.split(",")

# Create a cluster from the parameter server and worker hosts.
cluster = tf.train.ClusterSpec({"ps": ps_hosts, "worker": worker_hosts})

# Create and start a server for the local task.
server = tf.train.Server(cluster,
                         job_name=FLAGS.job_name,
                         task_index=FLAGS.task_index)

img_width, img_height = 124, 124

if FLAGS.job_name == "ps":
    server.join()
elif FLAGS.job_name == "worker":
    # Assign ops to the local worker by default.
    with tf.device(tf.train.replica_device_setter(
            worker_device="/job:worker/task:%d" % FLAGS.task_index,
            cluster=cluster)):
        # Set the Keras learning phase to train.
        K.set_learning_phase(1)
        # Do not initialize variables on the fly.
        K.manual_variable_initialization(True)

        if K.image_data_format() == 'channels_first':
            input_shape = (3, img_width, img_height)
        else:
            input_shape = (img_width, img_height, 3)

        X = tf.placeholder(tf.float32, shape=[None, img_width, img_height, 3], name="X")
        Y = tf.placeholder(tf.float32, shape=[None, n_classes], name="Y")

        print("Building Keras model")
        # ....
        # Defining our total loss
        # Defining our TF optimizer and passing hyperparameters
        # .......
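For orientation, here is a rough sketch of the elided portion, sitting inside the same replica_device_setter scope. It is not the original model code: model(X), the softmax cross-entropy loss, and the Adam optimizer are placeholders; only the names (preds, cross_entropy, train_op, global_step, init_op, saver, summary_op) are taken from the Supervisor and training-loop snippets used later in this post.

    # Sketch only: stand-ins for the elided model/loss/optimizer, producing the
    # ops that the Supervisor and the training loop below expect.
    logits = model(X)                      # hypothetical Keras model output
    preds = tf.nn.softmax(logits, name="prediction")
    cross_entropy = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=Y, logits=logits))
    global_step = tf.train.get_or_create_global_step()
    train_op = tf.train.AdamOptimizer(1e-4).minimize(
        cross_entropy, global_step=global_step)
    init_op = tf.global_variables_initializer()
    saver = tf.train.Saver()
    summary_op = tf.summary.merge_all()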

We have defined our training Supervisor as shown below.

sv = tf.train.Supervisor(is_chief=(FLAGS.task_index == 0),
                         logdir=logdir,
                         init_op=init_op,
                         saver=saver,
                         summary_op=summary_op,
                         global_step=global_step)

We also use the following snippet to create the Supervisor-managed session.

with sv.prepare_or_wait_for_session(server.target) as sess:
    # ... the training loop shown further below runs inside this session ...

Then we feed the different batches during training. Everything works fine up to this point. However, when we try to save/export the model for TensorFlow Serving, it does not generate the correct set of checkpoint files that we can use in production. When we host the checkpoint files through tensorflow_model_server, we get the error below.


Loading servable: {name: default version: 2} failed: Invalid argument: Cannot assign a device for operation 'init_all_tables': Operation was explicitly assigned to /job:worker/task:0 but available devices are [ /job:localhost/replica:0/task:0/cpu:0 ]. Make sure the device specification refers to a valid device.
[[Node: init_all_tables = NoOp[_device="/job:worker/task:0"]()]]
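For context, the /job:worker/task:0 in this error is a device placement that was written into the exported graph but does not exist inside tensorflow_model_server. One possible way to strip such placements from an already written checkpoint before re-exporting it is to re-import its meta graph with clear_devices=True. A minimal sketch, with hypothetical paths:

    import tensorflow as tf

    # Sketch: re-import the checkpoint's meta graph with clear_devices=True so
    # hard-coded placements like /job:worker/task:0 are dropped, restore the
    # variables, and re-export for serving from this device-free graph.
    # "/tmp/train_logs/model.ckpt" is a hypothetical checkpoint prefix.
    with tf.Graph().as_default() as graph:
        restorer = tf.train.import_meta_graph("/tmp/train_logs/model.ckpt.meta",
                                              clear_devices=True)
        with tf.Session(graph=graph) as sess:
            restorer.restore(sess, "/tmp/train_logs/model.ckpt")
            # ... re-export with tf.saved_model.builder.SavedModelBuilder here ...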

Just to note, we have also tried the following approaches to save the trained graph.

i) SavedModelBuilder

builder = saved_model_builder.SavedModelBuilder(export_path)

ii) Model exporter

export_path = "/saved_graph/"
model_exporter.export(export_path, sess)

iii) tf.train.Saver function

tf.train.Saver

However, none of the above approaches succeeded.

We could not find any article that shows a complete example or explains this in detail. We have gone through the following references.

https://github.com/tensorflow/tensorflow/issues/5439
https://github.com/tensorflow/tensorflow/issues/5110
Running distributed TensorFlow with InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float

Any suggestion or reference would be of great help.

Thanks.

-----------------------------------------------------------------------------------------------------------

As suggested, we tried using clear_devices=True while exporting the model, but it did not help. Below is the code snippet we used.

    for epoch in range(training_epochs):
        epoch_num = 0
        batch_count = int(num_img / batch_size)
        count = 0
        for i in range(batch_count):
            epoch_num = 0
            # This will create batches out of our training dataset and it will
            # be passed to the feed_dict.
            batch_x, batch_y = next_batch(batch_size, epoch_num, train_data,
                                          train_labels, num_img)
            # Perform the operations we defined earlier on the batch.
            _, cost, step = sess.run([train_op, cross_entropy, global_step],
                                     feed_dict={X: batch_x, Y: batch_y})
            sess.run(tf.global_variables_initializer())
            builder = tf.saved_model.builder.SavedModelBuilder(path)
            builder.add_meta_graph_and_variables(
                sess,
                [tf.saved_model.tag_constants.SERVING],
                signature_def_map={
                    "magic_model":
                        tf.saved_model.signature_def_utils.predict_signature_def(
                            inputs={"image": X},
                            outputs={"prediction": preds})
                },
                clear_devices=True)
            builder.save()
            sv.stop()
            print("Done!!")

When we use clear_devices=True, we get the error below.

Error: Traceback (most recent call last):
  File "insulator_classifier.py", line 370, in <module>
    tf.app.run()
  File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "insulator_classifier.py", line 283, in main
    }, clear_devices=False)
  File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/saved_model/builder_impl.py", line 364, in add_meta_graph_and_variables
    allow_empty=True)
  File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1140, in __init__
    self.build()
  File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1172, in build
    filename=self._filename)
  File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 677, in build
    filename_tensor = constant_op.constant(filename or "model")
  File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 106, in constant
    attrs={"value": tensor_value, "dtype": dtype_value}, name=name).outputs[0]
  File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2582, in create_op
    self._check_not_finalized()
  File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2290, in _check_not_finalized
    raise RuntimeError("Graph is finalized and cannot be modified.")
RuntimeError: Graph is finalized and cannot be modified.

Are we missing anything here?

-----------------------------------------------------------------------------------------------------------

Further update:

We got it working with suggestion 2) from @Tianjin Gu; see the code snippet below.

    X = tf.placeholder(tf.float32, shape=[None, img_width, img_height, 3], name="X")
    Y = tf.placeholder(tf.float32, shape=[None, n_classes], name="Y")
    ....
    ....
    model_exporter = exporter.Exporter(saver)
    model_exporter.init(
        tf.get_default_graph().as_graph_def(),
        named_graph_signatures={
            'inputs': exporter.generic_signature({'input': X}),
            'outputs': exporter.generic_signature({'output': Y})},
        clear_devices=True)
    export_path = "/export_path"

When we export, we see this warning:

    WARNING:tensorflow:From test_classifier.py:283: Exporter.export (from tensorflow.contrib.session_bundle.exporter) is deprecated and will be removed after 2017-06-30.

So ideally we should be using tf.saved_model.builder.SavedModelBuilder, but for some reason that is not working.

Any other suggestions?

Thanks.

A note on clear_devices=True:

  1. If you use SavedModelBuilder, set clear_devices=True when calling add_meta_graph() / add_meta_graph_and_variables().

  2. If you use exporter, set clear_devices=True when constructing exporter.Exporter.

Update:

  1. For the SavedModelBuilder issue: there is no need to create a SavedModelBuilder every epoch, so you should move that line before the loop, and there is no need to save the model every epoch either, so you can move builder.save() after the loop (a small sketch for loading the export back as a quick check follows after point 2). The code would then look like this:

    builder = tf.saved_model.builder.SavedModelBuilder(path)
    builder.add_meta_graph_and_variables(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            "magic_model":
                tf.saved_model.signature_def_utils.predict_signature_def(
                    inputs={"image": X},
                    outputs={"prediction": preds})
        },
        clear_devices=True)
    for epoch in range(training_epochs):
        epoch_num = 0
        batch_count = int(num_img / batch_size)
        count = 0
        for i in range(batch_count):
            epoch_num = 0
            # This will create batches out of our training dataset and it will
            # be passed to the feed_dict.
            batch_x, batch_y = next_batch(batch_size, epoch_num, train_data,
                                          train_labels, num_img)
            # Perform the operations we defined earlier on the batch.
            _, cost, step = sess.run([train_op, cross_entropy, global_step],
                                     feed_dict={X: batch_x, Y: batch_y})
            sess.run(tf.global_variables_initializer())
    builder.save()
    sv.stop()
    print("Done!!")
    
  2. For the exporter.Exporter usage, the warning does not matter; you can still load the exported files with TensorFlow Serving.
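For the SavedModelBuilder route from point 1, one quick way to confirm the export is loadable before pointing tensorflow_model_server at it is to load it back into a fresh graph. A minimal sketch, assuming path is the same (hypothetical) export directory used above:

    import tensorflow as tf

    # Sketch: load the SavedModel back and confirm the SERVING tag and the
    # "magic_model" signature are present. `path` is the export directory
    # that was passed to SavedModelBuilder above.
    with tf.Session(graph=tf.Graph()) as sess:
        meta_graph_def = tf.saved_model.loader.load(
            sess, [tf.saved_model.tag_constants.SERVING], path)
        print(list(meta_graph_def.signature_def.keys()))  # expect ['magic_model']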

Latest update