Adding the {serve} metagraph to an existing TensorFlow model



Situation:

I have created several models, each of which took days to train, and we are ready to move from local testing to a serving environment.

The models were saved with this function:

def save_graph_to_file(sess, graph, graph_file_name):
    """Saves a graph to file, creating a valid quantized one if necessary."""
    # convert_variables_to_constants freezes the graph: the result is a plain
    # GraphDef protobuf, not a SavedModel
    output_graph_def = graph_util.convert_variables_to_constants(
        sess, graph.as_graph_def(), [final_tensor_name])
    with gfile.FastGFile(graph_file_name, 'wb') as f:
        f.write(output_graph_def.SerializeToString())

Now, when trying to deploy to the serving environment (SageMaker, using the correct directory structure and file-naming conventions), the system returns:

2019-06-04 22:38:53.794056: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2019-06-04 22:38:53.798096: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:259] SavedModel load for tags { serve }; Status: fail. Took 83297 microseconds.
2019-06-04 22:38:53.798132: E tensorflow_serving/util/retrier.cc:37] Loading servable: {name: model version: 1} failed: Not found: Could not find meta graph def matching supplied tags: { serve }. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: `saved_model_cli`
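
As the log itself suggests, you can check which tag-sets a model directory actually contains with the saved_model_cli tool that ships with TensorFlow (the path here is a placeholder):

saved_model_cli show --dir /path/to/model_dir --all

Pointed at a directory holding only a frozen *.pb, it fails to find a SavedModel at all, which is itself a hint of the underlying problem.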

All I have are the *.pb files and their label text files. These have worked fine on several machines in local environments:

def load_graph(model_file):
    """
    Code from v1.6.0 of Tensorflow's label_image.py example
    """
    graph = tf.Graph()
    graph_def = tf.GraphDef()
    with open(model_file, "rb") as f:
        graph_def.ParseFromString(f.read())
    with graph.as_default():
        # import_graph_def prefixes every node name with "import/" by default
        tf.import_graph_def(graph_def)
    return graph

inputLayer = "Mul"
outputLayer = "final_result"
inputName = "import/" + inputLayer
outputName = "import/" + outputLayer
graph = load_graph(modelPath)
inputOperation = graph.get_operation_by_name(inputName)
outputOperation = graph.get_operation_by_name(outputName)
with tf.Session(graph=graph) as sess:
    # ... make a tensor t (see the sketch below)
    results = sess.run(outputOperation.outputs[0], {
        inputOperation.outputs[0]: t
    })
    # lovely functional results here
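
For context, `t` above is just the preprocessed image batch. A minimal sketch of producing one, modeled on the read_tensor_from_image_file helper from the same label_image.py example (the 299×299 size and the mean/std of 128 are values commonly used with retrained Inception v3 graphs; those numbers, and the JPEG input, are assumptions to adjust for your model):

def read_tensor_from_image_file(file_name, input_height=299, input_width=299,
                                input_mean=128, input_std=128):
    # Build a tiny throwaway graph that decodes, resizes, and normalizes
    file_reader = tf.read_file(file_name)
    image = tf.image.decode_jpeg(file_reader, channels=3)
    floats = tf.cast(image, tf.float32)
    batched = tf.expand_dims(floats, 0)
    resized = tf.image.resize_bilinear(batched, [input_height, input_width])
    normalized = tf.divide(tf.subtract(resized, [input_mean]), [input_std])
    with tf.Session() as sess:
        return sess.run(normalized)

t = read_tensor_from_image_file("some_image.jpg")  # hypothetical file name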

What I want to do is simply take these existing files, add the required "serve" tag, and re-save them, but everything I can find seems to be about doing this from scratch.

I tried using a builder to attach the graph to the model, like so:

# Load the graph
graph = load_graph(modelPath)
import os
import shutil
if os.path.exists(exportDir):
    shutil.rmtree(exportDir)
# Add the serving metagraph tag
builder = tf.saved_model.builder.SavedModelBuilder(exportDir)
from tensorflow.saved_model import tag_constants
with tf.Session(graph=graph) as sess:
    builder.add_meta_graph_and_variables(sess, [tag_constants.SERVING, tag_constants.GPU], strip_default_attrs=True)
builder.save()
print("Built a SavedModel")

But I got the same error. (In hindsight that is expected: the code above writes the tag-set { serve, gpu }, and the loader requires an exact match for { serve }; it also attaches no serving signature.)
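
A corrected sketch of that builder approach, reusing load_graph and the layer names from earlier (untested against the original setup; what actually worked for me follows):

graph = load_graph(modelPath)
inputOperation = graph.get_operation_by_name("import/Mul")
outputOperation = graph.get_operation_by_name("import/final_result")
from tensorflow.saved_model import tag_constants
with tf.Session(graph=graph) as sess:
    # Describe what Serving should feed and fetch
    signature = tf.saved_model.signature_def_utils.predict_signature_def(
        inputs={"image": inputOperation.outputs[0]},
        outputs={"prediction": outputOperation.outputs[0]})
    builder = tf.saved_model.builder.SavedModelBuilder(exportDir)
    # Tag with exactly { serve } and attach the signature
    builder.add_meta_graph_and_variables(
        sess, [tag_constants.SERVING],
        signature_def_map={"serving_default": signature},
        strip_default_attrs=True)
    builder.save()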

Finally solved it. The root issue is that a frozen *.pb is a bare GraphDef, while TensorFlow Serving loads a SavedModel whose saved_model.pb holds tagged MetaGraphDefs, so the graph has to be re-exported rather than just re-tagged. The script below contains some S3-specific code and notebook-instance calls (the ! commands), but you should be able to slice it up and run it.

#!python3
"""
Assumes we've defined:
- A directory for our working files to live in, CONTAINER_DIR
- An arbitrary integer VERSION_INT
- Local and S3 paths for our model and its labels as variables,
  particularly `modelLabel` and `modelPath`, plus an S3 destination `bucketPath`
"""
import os
import shutil
import tensorflow as tf
# Create a versioned path for the models to live in
# See https://stackoverflow.com/a/54014480/1877527
exportDir = os.path.join(CONTAINER_DIR, str(VERSION_INT))
if os.path.exists(exportDir):
    shutil.rmtree(exportDir)
# Note: don't pre-create exportDir; the SavedModelBuilder underneath
# simple_save insists on creating the directory itself
def load_graph(model_file, returnElements=None):
    """
    Code from v1.6.0 of Tensorflow's label_image.py example
    """
    graph = tf.Graph()
    graph_def = tf.GraphDef()
    with open(model_file, "rb") as f:
        graph_def.ParseFromString(f.read())
    returns = None
    with graph.as_default():
        returns = tf.import_graph_def(graph_def, return_elements=returnElements)
    if returnElements is None:
        return graph
    return graph, returns
# Add the serving metagraph tag
# We need the inputLayerName; in Inception we're feeding the resized tensor
# corresponding to resized_input_tensor_name
# May be able to get away with auto-determining this if not using Inception,
# but for Inception this is the 11th layer
inputLayerName = "Mul:0"
# Load the graph
if inputLayerName is None:
    graph = load_graph(modelPath)
    inputTensor = None
else:
    graph, returns = load_graph(modelPath, returnElements=[inputLayerName])
    inputTensor = returns[0]
with tf.Session(graph=graph) as sess:
    # Read the layers
    try:
        from tensorflow.compat.v1.saved_model import simple_save
    except (ModuleNotFoundError, ImportError):
        from tensorflow.saved_model import simple_save
    with graph.as_default():
        layers = [n.name for n in graph.as_graph_def().node]
        outName = layers.pop() + ":0"
        if inputLayerName is None:
            inputLayerName = layers.pop(0) + ":0"
    print("Checking outlayer", outName)
    outLayer = tf.get_default_graph().get_tensor_by_name(outName)
    if inputTensor is None:
        print("Checking inlayer", inputLayerName)
        inputTensor = tf.get_default_graph().get_tensor_by_name(inputLayerName)
    inputs = {
        inputLayerName: inputTensor
    }
    outputs = {
        outName: outLayer
    }
    # simple_save writes a SavedModel tagged "serve" with a
    # "serving_default" signature built from these two dicts
    simple_save(sess, exportDir, inputs, outputs)
print("Built a SavedModel")
# Put the model label into the artifact dir
modelLabelDest = os.path.join(exportDir, "saved_model.txt")
!cp {modelLabel} {modelLabelDest}
# Prep for serving
import datetime as dt
modelArtifact = f"livemodel_{dt.datetime.now().timestamp()}.tar.gz"
# Copy the version directory here to package
!cp -R {exportDir} ./
# gzip-tar it, keeping the version number as the top-level directory
!tar -czvf {modelArtifact} {VERSION_INT}
# Shove it back to S3 for serving
!aws s3 cp {modelArtifact} {bucketPath}
shutil.rmtree(str(VERSION_INT))  # Cleanup the local copy
shutil.rmtree(exportDir)  # Cleanup

This model can then be deployed as a SageMaker endpoint (or to any other TensorFlow Serving environment).
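
Before shipping, it's worth sanity-checking that the export really carries the serve tag. A minimal sketch using the TF1-style loader, pointed at the exportDir written above:

import tensorflow as tf
from tensorflow.saved_model import tag_constants

# Raises if no metagraph in the SavedModel matches the { serve } tag-set
with tf.Session(graph=tf.Graph()) as sess:
    metaGraphDef = tf.saved_model.loader.load(
        sess, [tag_constants.SERVING], exportDir)
    # simple_save registers a single "serving_default" signature
    print(list(metaGraphDef.signature_def.keys()))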
