Retrieve a TLS CRT Kubernetes secret and write it into another pod from a Helm template



I have a Kubernetes cluster with Elasticsearch currently deployed.

The Elasticsearch coordinating-only nodes are reachable behind a ClusterIP Service over HTTPS. They use a self-signed TLS certificate.

I can retrieve the CA value with:

kubectl get secret \
  -n elasticsearch elasticsearch-coordinating-only-crt \
  -o jsonpath="{.data.ca\.crt}" | base64 -d
-----BEGIN CERTIFICATE-----
MIIDIjCCAgqgAwIBAgIRANkAx51S
...
...
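
For reference, a quick way to dump the decoded CA to a local file and sanity-check it (the openssl step is only an optional verification):

kubectl get secret \
  -n elasticsearch elasticsearch-coordinating-only-crt \
  -o jsonpath="{.data.ca\.crt}" | base64 -d > elasticsearch-ca.crt

# confirm it parses as an X.509 certificate and check issuer/validity
openssl x509 -in elasticsearch-ca.crt -noout -subject -issuer -dates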

I need to make this available as ca.crt to other application deployments.

Note: the Elasticsearch deployment lives in the elasticsearch Kubernetes namespace. The new deployments will be in a different namespace.

One example is a kafka deployment that includes a kafka-connect-elasticsearch sink. The sink connector uses a configuration like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "kafka.fullname" . }}-connect
  labels: {{- include "common.labels.standard" . | nindent 4 }}
    app.kubernetes.io/component: connector
data:
  connect-standalone-custom.properties: |-
    bootstrap.servers={{ include "kafka.fullname" . }}-0.{{ include "kafka.fullname" . }}-headless.{{ .Release.Namespace }}.svc.{{ .Values.clusterDomain }}:{{ .Values.service.port }}
    key.converter.schemas.enable=false
    value.converter.schemas.enable=false
    offset.storage.file.filename=/tmp/connect.offsets
    offset.flush.interval.ms=10000
    key.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    plugin.path=/usr/local/share/kafka/plugins
  elasticsearch.properties: |-
    name=elasticsearch-sink
    connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
    tasks.max=4
    topics=syslog,nginx
    key.ignore=true
    schema.ignore=true
    connection.url=https://elasticsearch-coordinating-only.elasticsearch:9200
    type.name=kafka-connect
    connection.username=elastic
    connection.password=xxxxxxxx
    elastic.security.protocol=SSL
    elastic.https.ssl.truststore.location=/etc/ssl/certs/elasticsearch-ca.crt
    elastic.https.ssl.truststore.type=PEM
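
For context, if this chart were installed as a hypothetical release named kafka into the kafka namespace, with the default clusterDomain (cluster.local) and the chart's usual client port of 9092, the templated bootstrap.servers line above would render to roughly:

bootstrap.servers=kafka-0.kafka-headless.kafka.svc.cluster.local:9092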

Note elastic.https.ssl.truststore.location=/etc/ssl/certs/elasticsearch-ca.crt; that is the file I need to get into the kafka-based container.

What is the best way to do this with Helm templates?

Currently I have a fork of https://github.com/bitnami/charts/tree/master/bitnami/kafka. It adds 3 new templates under templates/:

  • kafka-connect-elasticsearch-configmap.yaml
  • kafka-connect-svc.yaml
  • kafka-connect.yaml

The ConfigMap is shown above. The kafka-connect.yaml Deployment looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "kafka.fullname" . }}-connect
  labels: {{- include "common.labels.standard" . | nindent 4 }}
    app.kubernetes.io/component: connector
spec:
  replicas: 1
  selector:
    matchLabels: {{- include "common.labels.matchLabels" . | nindent 6 }}
      app.kubernetes.io/component: connector
  template:
    metadata:
      labels: {{- include "common.labels.standard" . | nindent 8 }}
        app.kubernetes.io/component: connector
    spec:
      containers:
        - name: connect
          image: REDACTED.dkr.ecr.REDACTED.amazonaws.com/kafka-connect-elasticsearch
          imagePullPolicy: Always
          command:
            - /bin/bash
            - -ec
            - bin/connect-standalone.sh custom-config/connect-standalone-custom.properties custom-config/elasticsearch.properties
          ports:
            - name: connector
              containerPort: 8083
          volumeMounts:
            - name: configuration
              mountPath: /opt/bitnami/kafka/custom-config
      imagePullSecrets:
        - name: regcred
      volumes:
        - name: configuration
          configMap:
            name: {{ include "kafka.fullname" . }}-connect

How can I modify these Kafka Helm charts so that they retrieve the value returned by kubectl get secret -n elasticsearch elasticsearch-coordinating-only-crt -o jsonpath="{.data.ca\.crt}" | base64 -d and write its contents to /etc/ssl/certs/elasticsearch-ca.crt?

I got this working and learned a few things along the way:

  • Secret resources live in a namespace, and a Secret can only be referenced by pods in that same namespace (ref). Because of this I switched to a shared namespace for elasticsearch + kafka (alternatively, the Secret could be copied across namespaces; see the sketch after this list)
  • The Secret can then be consumed in a straightforward way, as described in https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets. This is not Helm-specific; it is a core Kubernetes feature
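
If a shared namespace had not been an option, the Secret could also have been copied into the consuming namespace. A rough sketch (the kafka target namespace is only an example; the grep strips namespace-specific metadata before re-applying):

kubectl get secret elasticsearch-coordinating-only-crt -n elasticsearch -o yaml \
  | grep -vE '^\s*(namespace|resourceVersion|uid|creationTimestamp):' \
  | kubectl apply -n kafka -f -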

In my case that looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "kafka.fullname" . }}-connect
  labels: {{- include "common.labels.standard" . | nindent 4 }}
    app.kubernetes.io/component: connector
spec:
  replicas: 1
  selector:
    matchLabels: {{- include "common.labels.matchLabels" . | nindent 6 }}
      app.kubernetes.io/component: connector
  template:
    metadata:
      labels: {{- include "common.labels.standard" . | nindent 8 }}
        app.kubernetes.io/component: connector
    spec:
      containers:
        - name: connect
          image: REDACTED.dkr.ecr.REDACTED.amazonaws.com/kafka-connect-elasticsearch
          imagePullPolicy: Always
          command:
            - /bin/bash
            - -ec
            - bin/connect-standalone.sh custom-config/connect-standalone-custom.properties custom-config/elasticsearch.properties
          ports:
            - name: connector
              containerPort: 8083
          volumeMounts:
            - name: configuration
              mountPath: /opt/bitnami/kafka/custom-config
            - name: ca
              mountPath: /etc/ssl/certs
              readOnly: true
      imagePullSecrets:
        - name: regcred
      volumes:
        - name: configuration
          configMap:
            name: {{ include "kafka.fullname" . }}-connect
        - name: ca
          secret:
            secretName: elasticsearch-coordinating-only-crt
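
Since this runs as a Helm chart, the Secret name could also be made configurable rather than hard-coded. A small sketch, using a hypothetical elasticsearchCaSecret value that is not part of the upstream chart:

# values.yaml (hypothetical)
elasticsearchCaSecret: elasticsearch-coordinating-only-crt

# templates/kafka-connect.yaml, volumes section
      volumes:
        - name: ca
          secret:
            secretName: {{ .Values.elasticsearchCaSecret }}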

With the Deployment above, the kafka-connect pod comes up and runs, and I can verify that the certificates are written there as well:

$ kubectl exec -it -n elasticsearch kafka-connect-c4f4d7dbd-wbxfq \
    -- ls -1 /etc/ssl/certs
ca.crt
tls.crt
tls.key
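
One detail worth noting: the mounted Secret exposes its key as ca.crt, while the connector config above points at /etc/ssl/certs/elasticsearch-ca.crt. To keep that filename instead of editing the properties file, a Secret volume can remap keys to paths; a sketch of the volume definition:

        - name: ca
          secret:
            secretName: elasticsearch-coordinating-only-crt
            items:
              - key: ca.crt
                path: elasticsearch-ca.crt

With items set, only the listed keys are projected into the mount, so tls.crt and tls.key would no longer appear there.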
