How to inject environment variables into the driver pod when using spark-on-k8s



I'm writing a Spark application on Kubernetes using the GCP spark-on-k8s operator.

At the moment, I'm unable to get environment variables injected into my containers.

I'm following the docs here.

Manifest:

apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: spark-search-indexer
  namespace: spark-operator
spec:
  type: Scala
  mode: cluster
  image: "gcr.io/spark-operator/spark:v2.4.5"
  imagePullPolicy: Always
  mainClass: com.quid.indexer.news.jobs.ESIndexingJob
  mainApplicationFile: "https://lala.com/baba-0.0.43.jar"
  arguments:
    - "--esSink"
    - "http://something:9200/mo-sn-{yyyy-MM}-v0.0.43/searchable-article"
    - "-streaming"
    - "--kafkaTopics"
    - "annotated_blogs,annotated_ln_news,annotated_news"
    - "--kafkaBrokers"
    - "10.1.1.1:9092"
  sparkVersion: "2.4.5"
  restartPolicy:
    type: Never
  volumes:
    - name: "test-volume"
      hostPath:
        path: "/tmp"
        type: Directory
  driver:
    cores: 1
    coreLimit: "1200m"
    memory: "512m"
    env:
      - name: "DEMOGRAPHICS_ES_URI"
        value: "somevalue"
    labels:
      version: 2.4.5
    volumeMounts:
      - name: "test-volume"
        mountPath: "/tmp"
  executor:
    cores: 1
    instances: 1
    memory: "512m"
    env:
      - name: "DEMOGRAPHICS_ES_URI"
        value: "somevalue"
    labels:
      version: 2.4.5
    volumeMounts:
      - name: "test-volume"
        mountPath: "/tmp"
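Once the application is submitted, the injection can be checked directly in the driver pod. A quick sketch (the driver pod name below is an assumption; the operator generates it, so look it up first):

```shell
# Find the driver pod the operator created for this SparkApplication
kubectl get pods -n spark-operator

# Print the variable from inside the driver pod
# (substitute the actual driver pod name from the listing above)
kubectl exec -n spark-operator spark-search-indexer-driver -- printenv DEMOGRAPHICS_ES_URI
```

If the variable is missing from `printenv` output, it was never injected into the pod spec.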

Environment variables set in the pod:

Environment:
SPARK_DRIVER_BIND_ADDRESS:   (v1:status.podIP)
SPARK_LOCAL_DIRS:           /var/data/spark-1ed8539d-b157-4fab-9aa6-daff5789bfb5
SPARK_CONF_DIR:             /opt/spark/conf

It turns out that to use `env` you must enable webhooks (see the quick start guide for how to set that up).
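With the operator's Helm chart this is a single install-time flag. A sketch, assuming the chart name and flag from the quick start guide for this operator version (newer chart releases have renamed both):

```shell
# Install the operator with the mutating admission webhook enabled;
# without the webhook, pod-level fields such as env, volumes and
# volumeMounts in the SparkApplication spec are silently ignored
helm install incubator/sparkoperator \
  --namespace spark-operator \
  --set enableWebhook=true
```

If the operator is already installed, the same flag can be applied with `helm upgrade`.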

An alternative approach could be to use envVars.

Example:

spec:
  executor:
    envVars:
      DEMOGRAPHICS_ES_URI: "somevalue"
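Note that `envVars` takes a plain key/value map rather than the list of `name`/`value` objects that `env` uses, and it needs to be set on the driver as well if the driver also reads the variable. A sketch under the assumption that the field exists on both pod specs:

```yaml
spec:
  driver:
    envVars:
      DEMOGRAPHICS_ES_URI: "somevalue"
  executor:
    envVars:
      DEMOGRAPHICS_ES_URI: "somevalue"
```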

Reference: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/issues/978
