Airflow: accessing command-line arguments in the DAG definition



I am trying to access, inside the DAG definition, parameters passed to the DAG through the REST API. As shown below, I pass config_path and s3_bucket as parameters in the REST call and want to capture them in a custom SparkLivyOperator, which reads all the parameters and launches a Spark job on EMR. I tried reading the parameters as shown below, but I do not get any values.

Below is my curl command:
curl -X POST \
  http://localhost:8080/api/experimental/dags/spark_launcher/dag_runs \
  -H 'Cache-Control: no-cache' \
  -H 'Content-Type: application/json' \
  -d '{"conf": {"s3_bucket": "--s3_bucket s3://test_bucket/", "config_path": "--config_path this_is_conf"}}'

and this is how I try to read the parameters in the operator:

config_path='{{ dag_run.conf["config_path"] }}',
s3_bucket='{{ dag_run.conf["s3_bucket"] }}'
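
For what it's worth, the same conf payload can also be passed through the Airflow 1.x CLI, which is a quick way to rule out JSON-quoting problems in the curl call (spark_launcher is the DAG id taken from the URL above):

airflow trigger_dag -c '{"s3_bucket": "--s3_bucket s3://test_bucket/", "config_path": "--config_path this_is_conf"}' spark_launcher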

Below is my DAG definition:

import os
from datetime import datetime
from airflow import DAG
from EmrManagerOperator import EmrManagerOperator
from SparkLivyOperator import SparkLivyOperator  # assumed module path; SparkLivyOperator is used below but was never imported
DEFAULT_ARGS = {
    'owner': 'hadoop',
    'depends_on_past': False,
    'start_date': datetime(2021, 8, 1, 0),
    'email_on_failure': True,
    'email_on_retry': False,
    'schedule_interval': None,
    'retries': 2
}
DOCKER_RELEASE_VERSION = '0.1.0-38'  # this python3 emr-5.23.0 release
STAGE = os.environ['STAGE']
ENV = STAGE.lower()
REGION = 'na'
COUNTRIES = 'US'
DAG_UUID = 'TestDag'
# EMR EC2 instance related variables
EMR_RELEASE_VERSION = 'emr-5.29.0'
EC2_INSTANCE_TYPE = 'r4.4xlarge'
EC2_INSTANCE_COUNT = '3'
EC2_INSTANCE_VOLUME_SIZE = '500'
DOCKER_IMAGE = '833176741232.dkr.ecr.us-east-1.amazonaws.com/emr-manager:' + DOCKER_RELEASE_VERSION
BOOTSTRAP_SCRIPT = 's3://bucket/scripts/install-basic-python-aws-cli-libs.sh'

def get_emr_id(context):
    """
    Get EMR cluster ID from context
    :param context: context of instance of task
    :return: emr_id: str  cluster_id of emr cluster
    """
    emr_info = context['task_instance'].xcom_pull(task_ids='emr-create-{dag_id}'.format(dag_id=DAG_UUID))
    print(emr_info)
    return emr_info["cluster_id"]

def get_emr_dns(context):
    """
    Function to get emr dns
    :param context: airflow context
    :return: str: emr dns
    """
    emr_info = context['task_instance'].xcom_pull(task_ids='emr-create-{dag_id}'.format(dag_id=DAG_UUID))
    print(emr_info)
    return emr_info["emr_master_dns"]

with DAG(dag_id=DAG_UUID, default_args=DEFAULT_ARGS, schedule_interval=None, max_active_runs=10) as dag:
    emr_manager_create_task = EmrManagerOperator(
        dag=dag,
        job_name='emr-create-{dag_id}'.format(dag_id=DAG_UUID),
        region=REGION,
        image=DOCKER_IMAGE,
        emr_action='create_emr',
        emr_instance_profile="EMR-InstanceRole",
        emr_cluster_name="emr-on-demand-cluster-na",
        emr_release_label=EMR_RELEASE_VERSION,
        emr_node_instance_type=EC2_INSTANCE_TYPE,
        emr_master_instance_type=EC2_INSTANCE_TYPE,
        emr_bootstrap_script_path=BOOTSTRAP_SCRIPT,
        emr_node_volume_size=EC2_INSTANCE_VOLUME_SIZE,
        emr_node_on_demand_count=EC2_INSTANCE_COUNT,
        project='di2-etl',
        env=ENV)
    segment_release_task = SparkLivyOperator(
        dag=dag,
        jobName=DAG_UUID,
        task_id='livy_operator_task.' + DAG_UUID,
        region=REGION,
        get_emr_dns=get_emr_dns,
        env=ENV,
        stage=STAGE,
        config_path='{{ dag_run.conf["config_path"] }}',
        s3_bucket='{{ dag_run.conf["s3_bucket"] }}'
    )
    emr_manager_delete_task = EmrManagerOperator(
        dag=dag,
        job_name='emr-delete-{dag_id}'.format(dag_id=DAG_UUID),
        region=REGION,
        image=DOCKER_IMAGE,
        emr_action='delete_emr',
        get_emr_id=get_emr_id,
        trigger_rule="all_done",
        env=ENV
    )
    emr_manager_create_task >> segment_release_task >> emr_manager_delete_task

I believe you have not marked the parameters as templated. When defining a custom operator, you can add the fields to the template_fields static field:

template_fields = ['s3_bucket', 'config_path']

Only fields added to template_fields are processed by the Jinja templating engine.
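
For illustration, here is a minimal sketch of what the templated part of the custom operator could look like. The class name and the two parameters mirror the SparkLivyOperator used in the question, but the body is an assumption about its shape, not the actual implementation:

from airflow.models import BaseOperator
from airflow.utils.decorators import apply_defaults

class SparkLivyOperator(BaseOperator):
    # Fields listed here are rendered by Jinja before execute() runs,
    # so '{{ dag_run.conf["config_path"] }}' becomes the value passed
    # in the trigger's conf payload.
    template_fields = ['s3_bucket', 'config_path']

    @apply_defaults
    def __init__(self, s3_bucket=None, config_path=None, *args, **kwargs):
        super(SparkLivyOperator, self).__init__(*args, **kwargs)
        self.s3_bucket = s3_bucket
        self.config_path = config_path

    def execute(self, context):
        # By this point both attributes hold the rendered values.
        print(self.s3_bucket, self.config_path)

Without the template_fields entry, execute() would see the literal string '{{ dag_run.conf["config_path"] }}' rather than the rendered value, which explains why you do not see the values from conf.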
