How can I execute a Jupyter notebook from a Lambda function using one of the preinstalled conda kernels instead of the default kernel?
Currently, I use the following snippet to execute a Jupyter notebook from a Lambda function.
import time
import boto3
import logging
import requests
import websocket

def lambda_handler(event, context):
    sm_client = boto3.client('sagemaker')
    notebook_instance_name = 'Test-Instance'

    # Get a presigned URL for the notebook instance.
    url = sm_client.create_presigned_notebook_instance_url(
        NotebookInstanceName=notebook_instance_name)['AuthorizedUrl']

    url_tokens = url.split('/')
    http_proto = url_tokens[0]
    http_hn = url_tokens[2].split('?')[0].split('#')[0]

    # Visit the presigned URL so the session picks up the auth cookies.
    s = requests.Session()
    r = s.get(url)
    cookies = "; ".join(key + "=" + value for key, value in s.cookies.items())

    # Open a websocket to a Jupyter terminal on the notebook instance.
    ws = websocket.create_connection(
        "wss://{}/terminals/websocket/5".format(http_hn),
        cookie=cookies,
        host=http_hn,
        origin=http_proto + "//" + http_hn,
        header=[
            "User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.127 Safari/537.36"
        ]
    )

    # Run nbconvert in the terminal to execute the notebook in place.
    ws.send("""[ "stdin", "jupyter nbconvert --execute --to notebook --inplace /home/ec2-user/SageMaker/test.ipynb --ExecutePreprocessor.kernel_name=python3 --ExecutePreprocessor.timeout=1500\r" ]""")
When I execute the Jupyter notebook from Lambda, it uses the default Python kernel available on the SageMaker instance, which has Python 3.7 installed, but if I execute the notebook from the SageMaker instance itself, it uses conda Python 3.8.
Here is the list of kernels available when executing from the SageMaker instance:
R
Sparkmagic (PySpark)
Sparkmagic (Spark)
Sparkmagic (SparkR)
conda_amazonei_mxnet_p36
conda_amazonei_pytorch_latest_p37
conda_amazonei_tensorflow2_p36
conda_mxnet_p37
conda_python3
conda_pytorch_p38
conda_tensorflow2_p38
When I try to execute it from the Lambda function, the only kernels available are:
python3
ir
Here is what I have tried: I created a custom kernel with conda and tried to use it when executing from the Lambda function, but it did not work. In fact, if I pass anything other than python3 as the kernel name when executing from the Lambda function, the notebook is not executed.
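One way to see which kernels the terminal session actually exposes is to run jupyter kernelspec list over the same websocket connection and read the frames back. This is a minimal sketch, assuming the ws connection from the snippet above is still open; the number of frames to read is arbitrary.

    import json

    # Sketch: list the kernels visible to the terminal session opened above.
    ws.send("""[ "stdin", "jupyter kernelspec list\r" ]""")

    # The Jupyter terminal streams back JSON frames like ["stdout", "..."];
    # read a handful of them to capture the command output.
    for _ in range(10):
        frame = json.loads(ws.recv())
        if frame[0] == "stdout":
            print(frame[1])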
See this GitHub issue.
tl;dr - activate the conda environment you want to execute in, then use the jupyter nbconvert command. jupyter kernelspec list should show the available kernels (kernel_name=python3 should work inside the conda_python3 environment).
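Applied to the snippet in the question, that means prefixing the nbconvert call with an activation of the target environment in the same terminal command. A minimal sketch, assuming the environment name python3 corresponds to the conda_python3 kernel and that the standard SageMaker activation path applies (verify with conda env list on your instance):

    # Sketch: activate the conda_python3 environment, then run nbconvert in it.
    # /home/ec2-user/anaconda3 is the usual conda location on SageMaker
    # notebook instances; adjust the path and environment name if yours differ.
    ws.send("""[ "stdin", "source /home/ec2-user/anaconda3/bin/activate python3 && jupyter nbconvert --execute --to notebook --inplace /home/ec2-user/SageMaker/test.ipynb --ExecutePreprocessor.kernel_name=python3 --ExecutePreprocessor.timeout=1500\r" ]""")

Because both the activation and the nbconvert call run as a single shell command in the terminal, the notebook executes with the interpreter and packages of the activated conda environment rather than the default system Python.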