I have a trained TensorFlow model that uses two inputs to make predictions. I have successfully set up and deployed the model on AWS SageMaker.
from sagemaker.tensorflow.model import TensorFlowModel

sagemaker_model = TensorFlowModel(model_data='s3://' + sagemaker_session.default_bucket()
                                             + '/R2-model/R2-model.tar.gz',
                                  role=role,
                                  framework_version='1.12',
                                  py_version='py2',
                                  entry_point='train.py')
predictor = sagemaker_model.deploy(initial_instance_count=1,
                                   instance_type='ml.m4.xlarge')
predictor.predict([data_scaled_1.to_csv(),
                   data_scaled_2.to_csv()])
I always get an error. I could use an AWS Lambda function, but I don't see any documentation on specifying multiple inputs for a deployed model. Does anyone know how to do this?
First, when you deploy the model you need to actually build the correct signature, and you need to deploy it with TensorFlow Serving.
At inference time, you also need to send the request in the proper input format: essentially, the SageMaker Docker server takes the request input and passes it on to TensorFlow Serving, so the input needs to match the TF Serving inputs.
Here is a simple example of deploying a multi-input, multi-output Keras model with TensorFlow Serving on SageMaker, and of how to run inference against it afterwards:
import tarfile
from tensorflow.python.saved_model import builder
from tensorflow.python.saved_model.signature_def_utils import predict_signature_def
from tensorflow.python.saved_model import tag_constants
from keras import backend as K
import sagemaker
from sagemaker import get_execution_role
from sagemaker.tensorflow.serving import Model
def serialize_to_tf_and_dump(model, export_path):
    """
    Serialize a Keras model to a TF SavedModel.
    :param model: compiled Keras model
    :param export_path: str, the export path contains the name and the version of the model
    :return:
    """
    # Build the Protocol Buffer SavedModel at 'export_path'
    save_model_builder = builder.SavedModelBuilder(export_path)
    # Create prediction signature to be used by TensorFlow Serving Predict API
    signature = predict_signature_def(
        inputs={
            "input_type_1": model.input[0],
            "input_type_2": model.input[1],
        },
        outputs={
            "decision_output_1": model.output[0],
            "decision_output_2": model.output[1],
            "decision_output_3": model.output[2]
        }
    )
    with K.get_session() as sess:
        # Save the meta graph and variables
        save_model_builder.add_meta_graph_and_variables(
            sess=sess, tags=[tag_constants.SERVING],
            signature_def_map={"serving_default": signature})
        save_model_builder.save()
# instantiate the multi-input, multi-output Keras model
# (an illustrative sketch of such a model is shown after this listing)
model = ....
# convert it to a TF SavedModel
serialize_to_tf_and_dump(model, 'model_folder/1')
# tar the TF model
with tarfile.open('model.tar.gz', mode='w:gz') as archive:
    archive.add('model_folder', recursive=True)
# upload it to S3
sagemaker_session = sagemaker.Session()
inputs = sagemaker_session.upload_data(path='model.tar.gz')
# convert it to a SageMaker model and deploy
role = get_execution_role()
sagemaker_model = Model(model_data=inputs,
                        name='DummyModel',
                        role=role,
                        framework_version='1.12')
predictor = sagemaker_model.deploy(initial_instance_count=1,
                                   instance_type='ml.t2.medium',
                                   endpoint_name='MultiInputMultiOutputModel')
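For reference, the model = .... placeholder above can be any Keras model with two inputs and three outputs. Here is a purely illustrative functional-API sketch; the layer names, sizes, and losses are hypothetical and not part of the original code:
from keras.layers import Input, Dense, concatenate
from keras.models import Model as KerasModel

# Hypothetical two-input, three-output architecture, for illustration only
input_1 = Input(shape=(10,), name='input_type_1')   # assumed feature size
input_2 = Input(shape=(5,), name='input_type_2')    # assumed feature size
merged = concatenate([input_1, input_2])
hidden = Dense(32, activation='relu')(merged)
out_1 = Dense(1, name='decision_output_1')(hidden)
out_2 = Dense(1, name='decision_output_2')(hidden)
out_3 = Dense(1, name='decision_output_3')(hidden)
model = KerasModel(inputs=[input_1, input_2], outputs=[out_1, out_2, out_3])
model.compile(optimizer='adam', loss='mse')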
At inference time, here is how to request predictions:
import json
import boto3

x_inputs = ...  # list with 2 np arrays of size (batch_size, ...)
data = {
    'inputs': {
        "input_type_1": x_inputs[0].tolist(),
        "input_type_2": x_inputs[1].tolist()
    }
}
endpoint_name = 'MultiInputMultiOutputModel'
client = boto3.client('runtime.sagemaker')
response = client.invoke_endpoint(EndpointName=endpoint_name,
                                  Body=json.dumps(data),
                                  ContentType='application/json')
predictions = json.loads(response['Body'].read())
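The 'inputs' key above follows the columnar format of the TensorFlow Serving REST API, which the serving container forwards as-is. The same request can also be written in the row-oriented 'instances' format; a sketch, assuming the same x_inputs as above (this variant is not from the original answer):
# Row-oriented TF Serving request body: one dict per sample, keyed by input name
data = {
    'instances': [
        {'input_type_1': x_inputs[0][i].tolist(),
         'input_type_2': x_inputs[1][i].tolist()}
        for i in range(len(x_inputs[0]))
    ]
}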
You may need to customize the inference functions loaded in the endpoint. In the SageMaker TF SDK documentation you will find that SageMaker TensorFlow deployment offers two options:
- the Python endpoint (the default): check whether customizing input_fn can accommodate your inference scenario
- the TF Serving endpoint
You can diagnose errors in CloudWatch (accessible from the SageMaker endpoint UI), pick whichever of the two serving architectures above fits best, and customize the inference functions if needed.
Only TF Serving endpoints support multiple inputs in a single inference request. You can deploy a TFS endpoint by following the documentation here: https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst
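If you also need custom pre- or post-processing on a TFS endpoint, the documentation linked above describes bundling an inference.py that defines input_handler and output_handler functions. A minimal sketch; the JSON pass-through logic here is illustrative, not from the original answer:
# inference.py -- optional request/response hooks for the TFS endpoint

def input_handler(data, context):
    """Pre-process the request before it is forwarded to TensorFlow Serving."""
    if context.request_content_type == 'application/json':
        # Pass the JSON body through unchanged
        return data.read().decode('utf-8')
    raise ValueError('Unsupported content type: {}'.format(context.request_content_type))

def output_handler(data, context):
    """Post-process the TensorFlow Serving response before returning it to the client."""
    if data.status_code != 200:
        raise ValueError(data.content.decode('utf-8'))
    return data.content, context.accept_header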