I have followed the documentation outlined here closely.
I set up my Azure Machine Learning environment as follows:
from azureml.core import Workspace
# Connect to the workspace
ws = Workspace.from_config()
from azureml.core import Environment
from azureml.core import ContainerRegistry
myenv = Environment(name = "myenv")
myenv.inferencing_stack_version = "latest" # This will install the inference specific apt packages.
# Docker
myenv.docker.enabled = True
myenv.docker.base_image_registry.address = "myazureregistry.azurecr.io"
myenv.docker.base_image_registry.username = "myusername"
myenv.docker.base_image_registry.password = "mypassword"
myenv.docker.base_image = "4fb3..."
myenv.docker.arguments = None
# Environment variables (I need Python to look for modules in this folder)
myenv.environment_variables = {"PYTHONPATH":"/root"}
# python
myenv.python.user_managed_dependencies = True
myenv.python.interpreter_path = "/opt/miniconda/envs/myenv/bin/python"
from azureml.core.conda_dependencies import CondaDependencies
conda_dep = CondaDependencies()
conda_dep.add_pip_package("azureml-defaults")
myenv.python.conda_dependencies = conda_dep
myenv.register(workspace=ws) # works!
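A side note on the environment_variables setting above: entries in PYTHONPATH are added to the interpreter's sys.path inside the scoring container, which is why pointing it at /root lets score.py import modules placed there. A minimal local sketch of that effect using only the standard library (the child process here is just an illustration, not part of the Azure ML SDK):

```python
import os
import subprocess
import sys

# Launch a child interpreter with PYTHONPATH set, the same way the
# environment_variables mapping is applied to the scoring container.
env = dict(os.environ, PYTHONPATH="/root")
out = subprocess.run(
    [sys.executable, "-c", "import sys; print('/root' in sys.path)"],
    env=env,
    capture_output=True,
    text=True,
)
print(out.stdout.strip())  # → True
```

The path does not need to exist for it to appear on sys.path; imports from it simply fail until the files are there.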
I have a score.py file configured for inference (not related to the problem I'm having)...
I then set up the inference configuration:
from azureml.core.model import InferenceConfig
inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
I set up my compute cluster:
from azureml.core.compute import ComputeTarget, AksCompute
from azureml.exceptions import ComputeTargetException
# Choose a name for your cluster
aks_name = "theclustername"
# Check to see if the cluster already exists
try:
    aks_target = ComputeTarget(workspace=ws, name=aks_name)
    print('Found existing compute target')
except ComputeTargetException:
    print('Creating a new compute target...')
    prov_config = AksCompute.provisioning_configuration(vm_size="Standard_NC6_Promo")
    aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config)
    aks_target.wait_for_completion(show_output=True)
from azureml.core.webservice import AksWebservice
# Example
gpu_aks_config = AksWebservice.deploy_configuration(autoscale_enabled=False,
                                                    num_replicas=3,
                                                    cpu_cores=4,
                                                    memory_gb=10)
That all succeeds; I then try to deploy the model for inference:
from azureml.core.model import Model
model = Model(ws, name="thenameofmymodel")
# Name of the web service that is deployed
aks_service_name = 'tryingtodeply'
# Deploy the model
aks_service = Model.deploy(ws,
                           aks_service_name,
                           models=[model],
                           inference_config=inference_config,
                           deployment_config=gpu_aks_config,
                           deployment_target=aks_target,
                           overwrite=True)
aks_service.wait_for_deployment(show_output=True)
print(aks_service.state)
It fails, saying that it cannot find the environment. More specifically, my environment is at version 11, but the deployment keeps trying to fetch an environment whose version number is one higher than the current one (i.e. version 12):
Failed
ERROR - Service deployment polling reached non-successful terminal state, current service state: Failed
Operation ID: 0f03a025-3407-4dc1-9922-a53cc27267d4
More information can be found here:
Error:
{
"code": "BadRequest",
"statusCode": 400,
"message": "The request is invalid",
"details": [
{
"code": "EnvironmentDetailsFetchFailedUserError",
"message": "Failed to fetch details for Environment with Name: myenv Version: 12."
}
]
}
I have tried manually editing the environment JSON to match the version that azureml is trying to fetch, but nothing works. Can anyone see what is wrong with this code?
Update
Changing the environment name (e.g. to my_inference_env) and passing it to InferenceConfig seems to be the right approach. However, the error has now changed to the following:
Running..........
Failed
ERROR - Service deployment polling reached non-successful terminal state, current service state: Failed
Operation ID: f0dfc13b-6fb6-494b-91a7-de42b9384692
More information can be found here: https://some_long_http_address_that_leads_to_nothing
Error:
{
"code": "DeploymentFailed",
"statusCode": 404,
"message": "Deployment not found"
}
Solution
The answer given by Anders below is indeed correct about the use of Azure ML environments. However, the final error I was getting was because I had set the container image using the digest value (a sha) instead of the image name and tag (e.g. imagename:tag). Note this line of code in the first block:
myenv.docker.base_image = "4fb3..."
I was referencing the digest value, but it should instead have been:
myenv.docker.base_image = "imagename:tag"
Once I made that change, the deployment succeeded! :)
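Since the fix came down to the difference between a digest and a repository:tag reference, a small sanity check can catch the mistake before registering the environment. This helper is hypothetical (the name is my own, not part of any SDK) and uses a simple heuristic: a bare hex string, optionally prefixed with sha256:, is an image ID/digest rather than the imagename:tag form that base_image expects:

```python
import re

def looks_like_image_id(ref: str) -> bool:
    """Heuristic: a bare hex string (or sha256: digest) is an image ID,
    not the 'repository:tag' form expected for a base image reference."""
    if ref.startswith("sha256:"):
        ref = ref[len("sha256:"):]
    return re.fullmatch(r"[0-9a-f]{4,64}", ref) is not None

print(looks_like_image_id("4fb3"))           # → True  (digest-style: would fail)
print(looks_like_image_id("imagename:tag"))  # → False (repository:tag: OK)
```

The length bounds and character class are intentionally loose; the point is only to flag an obviously digest-shaped value before it reaches the deployment.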
A concept that took me a while to grasp is the bifurcation of registering and using an Azure ML Environment. If you have already registered your env, myenv, and none of the environment's details have changed, there is no need to re-register it with myenv.register(). You can simply fetch the already-registered env using Environment.get() like so:
myenv = Environment.get(ws, name='myenv', version=11)
My recommendation would be to give your environment a new name: something like "model_scoring_env". Register it once, then pass it to InferenceConfig.
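The register-versus-get distinction can be pictured with a toy in-memory registry. This is purely a conceptual sketch, not the real azureml API: each register() call appends a new version, while get() only reads back an existing one and never creates anything.

```python
class ToyEnvRegistry:
    """Conceptual mock of register/get semantics (not the real SDK)."""

    def __init__(self):
        self._versions = {}  # name -> list of registered detail dicts

    def register(self, name, details):
        # Each registration appends a new 1-based version, analogous to
        # Azure ML bumping the version when an environment is re-registered.
        self._versions.setdefault(name, []).append(details)
        return len(self._versions[name])

    def get(self, name, version):
        # Fetching never creates a version; it reads an existing one.
        return self._versions[name][version - 1]

reg = ToyEnvRegistry()
print(reg.register("myenv", {"base_image": "imagename:tag"}))   # → 1
print(reg.register("myenv", {"base_image": "imagename:tag2"}))  # → 2
print(reg.get("myenv", 1))  # → {'base_image': 'imagename:tag'}
```

Under this picture, repeatedly calling register() when nothing changed is what ends up with a deployment referencing a version you did not expect, whereas get() pins the deployment to a known version.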