How to create a score.py file for Azure ML



I am new to Azure ML and am trying to deploy my model to Azure. The model I trained is a classification model in which the text data is first preprocessed, then encoded with a BERT model, and finally trained with CatBoost. I have already registered my model; however, I am a bit confused about the score.py script. This is what I am using, but it does not work:

import json
import joblib
import numpy as np
import os

# Called when the service is loaded
def init():
    global model
    # Get the path to the registered model file and load it
    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'nlp_cla.pkl')
    model = joblib.load(model_path)

# Called when a request is received
def run(raw_data):
    # Get the input data as a numpy array
    data = np.array(json.loads(raw_data)['data'])
    # Get a prediction from the model
    predictions = model.predict(data)
    # Return the predictions as any JSON serializable format
    return predictions.tolist()

How do I configure this script so that I can deploy it on Azure?

You can start by debugging locally in Visual Studio Code, as illustrated, and use the score.py deployment sample below.

%%writefile source_directory/x/y/score.py
import joblib
import json
import numpy as np
import os
from inference_schema.schema_decorators import input_schema, output_schema
from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType


def init():
    global model
    # AZUREML_MODEL_DIR is an environment variable created during deployment. Join this path with the filename of the model file.
    # It holds the path to the directory that contains the deployed model (./azureml-models/$MODEL_NAME/$VERSION)
    # If there are multiple models, this value is the path to the directory containing all deployed models (./azureml-models)
    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')
    # Deserialize the model file back into a sklearn model.
    model = joblib.load(model_path)

    global name
    # Note here, the entire source directory from inference config gets added into image.
    # Below is an example of how you can use any extra files in image.
    with open('./source_directory/extradata.json') as json_file:
        data = json.load(json_file)
        name = data["people"][0]["name"]


input_sample = np.array([[10.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]])
output_sample = np.array([3726.995])


@input_schema('data', NumpyParameterType(input_sample))
@output_schema(NumpyParameterType(output_sample))
def run(data):
    try:
        result = model.predict(data)
        # You can return any JSON-serializable object.
        return "Hello " + name + " here is your result = " + str(result)
    except Exception as e:
        error = str(e)
        return error
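For your text classifier, the run() function has to accept raw text rather than a numeric array, and the encoding step has to be reproduced inside score.py. The following is only a minimal sketch of that idea: it assumes nlp_cla.pkl is the CatBoost classifier saved with joblib, and it uses the sentence-transformers package with the placeholder model name 'all-MiniLM-L6-v2' to stand in for whatever BERT encoder you trained with. If your pickle already wraps preprocessing, encoding and the classifier in one pipeline object that accepts text, you can keep your original run() and simply send text in the "data" field.

import json
import os
import joblib
import numpy as np
# Assumption: the BERT encoding at training time was done with sentence-transformers
from sentence_transformers import SentenceTransformer


def init():
    global model, encoder
    # Load the registered CatBoost classifier (assumption: saved with joblib as nlp_cla.pkl)
    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'nlp_cla.pkl')
    model = joblib.load(model_path)
    # Recreate the BERT encoder used at training time (placeholder model name)
    encoder = SentenceTransformer('all-MiniLM-L6-v2')


def run(raw_data):
    try:
        # Expect a payload such as {"data": ["some text", "another text"]}
        texts = json.loads(raw_data)['data']
        # Encode the raw text into the same embedding space used for training
        embeddings = encoder.encode(texts)
        # Predict with the CatBoost model and return a JSON-serializable result
        predictions = model.predict(np.array(embeddings))
        return predictions.tolist()
    except Exception as e:
        return str(e)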

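Once the entry script works locally, it is wired into the deployment through an InferenceConfig, which is where the source directory and entry script path come in. Below is a minimal sketch using the Azure ML SDK v1 (azureml-core) with an ACI deployment; the environment name, conda file, model name, service name and instance sizing are placeholders, and the conda dependencies must include whatever score.py imports.

import json
from azureml.core import Workspace, Environment
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()

# Environment with the packages score.py needs (placeholder name and conda file)
env = Environment.from_conda_specification(name='nlp-cla-env', file_path='environment.yml')

# Point the inference config at the entry script; the whole source_directory is copied into the image
inference_config = InferenceConfig(
    source_directory='source_directory',
    entry_script='x/y/score.py',
    environment=env,
)

# Small ACI instance for testing (placeholder sizing)
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=2)

# 'nlp-cla' is the name under which the model was registered (placeholder)
model = Model(ws, name='nlp-cla')

service = Model.deploy(
    workspace=ws,
    name='nlp-cla-service',
    models=[model],
    inference_config=inference_config,
    deployment_config=deployment_config,
)
service.wait_for_deployment(show_output=True)

# Quick smoke test against the deployed endpoint
print(service.run(json.dumps({'data': ['example text to classify']})))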