Running a local development Kafka on Kubernetes with Kind and persistent volumes

I'm running a Linux dev machine and setting up a local Kafka on Kubernetes with Kind for development (moving over from docker-compose, for learning and practice purposes). Everything works, but I'm now trying to map the Kafka and Zookeeper volumes to the host machine, and only the Kafka volume maps correctly. For Zookeeper I configured and mapped the data and log paths to a volume, but the inner directories are not exposed on the host (they are for the Kafka mapping): only the data and log folders show up, with no actual content on the host, so restarting Zookeeper resets its state.

I wonder if there is a limitation, or a different approach I'm missing, when using Kind and mapping multiple directories from different pods. Why does only the Kafka volume persist successfully on the host?

The full setup, together with a README on how to run it, is on GitHub under the pv-pvc-setup folder.

The relevant Zookeeper configuration. Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: zookeeper
  name: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      service: zookeeper
  strategy: {}
  template:
    metadata:
      labels:
        network/kafka-network: "true"
        service: zookeeper
    spec:
      containers:
        - env:
            - name: TZ
            - name: ZOOKEEPER_CLIENT_PORT
              value: "2181"
            - name: ZOOKEEPER_DATA_DIR
              value: "/var/lib/zookeeper/data"
            - name: ZOOKEEPER_LOG_DIR
              value: "/var/lib/zookeeper/log"
            - name: ZOOKEEPER_SERVER_ID
              value: "1"
          image: confluentinc/cp-zookeeper:7.0.1
          name: zookeeper
          ports:
            - containerPort: 2181
          resources: {}
          volumeMounts:
            # single mount that covers both the data and log directories above
            - mountPath: /var/lib/zookeeper
              name: zookeeper-data
      hostname: zookeeper
      restartPolicy: Always
      volumes:
        - name: zookeeper-data
          persistentVolumeClaim:
            claimName: zookeeper-pvc

Persistent volume claim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: zookeeper-local-storage
  resources:
    requests:
      storage: 5Gi

Persistent volume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-pv
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: zookeeper-local-storage
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    # this path resolves inside the Kind node container, not on the host machine
    path: /var/lib/zookeeper

Kind config (the extraMounts are what bridge directories inside the Kind node container to the actual host):

apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
  - role: control-plane
  - role: worker
    extraPortMappings:
      - containerPort: 30092 # internal kafka nodeport
        hostPort: 9092 # port exposed on "host" machine for kafka
      - containerPort: 30081 # internal schema-registry nodeport
        hostPort: 8081 # port exposed on "host" machine for schema-registry
    extraMounts:
      - hostPath: ./tmp/kafka-data
        containerPath: /var/lib/kafka/data
        readOnly: false
        selinuxRelabel: false
        propagation: Bidirectional
      - hostPath: ./tmp/zookeeper-data
        containerPath: /var/lib/zookeeper
        readOnly: false
        selinuxRelabel: false
        propagation: Bidirectional

As I mentioned, the setup works; I'm now just trying to make sure the relevant Kafka and Zookeeper volumes are mapped to persistent external storage (in this case, the local disk).

I finally sorted it out. There were two main problems in my initial setup, both fixed now.

The folders used to persist data on the local host need to be created beforehand, so that they have the same uid:gid as the folder used to create the initial Kind cluster. If this is not in place, the folders will not persist data correctly.
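
As a side note, if pre-creating the folders with matching ownership is inconvenient, a common alternative (not what this setup does) is an initContainer that fixes ownership of the mount before Zookeeper starts. A minimal sketch to drop into the deployment's pod template spec shown above, assuming the Confluent images run as uid:gid 1000:1000:

      initContainers:
        - name: fix-zookeeper-permissions # hypothetical name, not part of the repo
          image: busybox:1.36
          # busybox runs as root by default, so it can chown the hostPath-backed mount
          command: ["sh", "-c", "chown -R 1000:1000 /var/lib/zookeeper"]
          volumeMounts:
            - mountPath: /var/lib/zookeeper
              name: zookeeper-data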

Specific persistent volumes and persistent volume claims were created for each folder that Zookeeper persists (data and log), with a matching mount for each in the kind-config. This is the final Kind configuration (a sketch of the per-folder PV/PVC pairs follows after it):

apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
  - role: control-plane
  - role: worker
    extraPortMappings:
      - containerPort: 30092 # internal kafka nodeport
        hostPort: 9092 # port exposed on "host" machine for kafka
      - containerPort: 30081 # internal schema-registry nodeport
        hostPort: 8081 # port exposed on "host" machine for schema-registry
    extraMounts:
      - hostPath: ./tmp/kafka-data
        containerPath: /var/lib/kafka/data
        readOnly: false
        selinuxRelabel: false
        propagation: Bidirectional
      - hostPath: ./tmp/zookeeper-data/data
        containerPath: /var/lib/zookeeper/data
        readOnly: false
        selinuxRelabel: false
        propagation: Bidirectional
      - hostPath: ./tmp/zookeeper-data/log
        containerPath: /var/lib/zookeeper/log
        readOnly: false
        selinuxRelabel: false
        propagation: Bidirectional
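
On the Kubernetes side, each folder then gets its own PV/PVC pair. The exact manifests are in the repo linked below; here is a minimal sketch of the data pair, with hypothetical names (the log pair mirrors it, swapping data for log):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-data-pv
spec:
  accessModes:
    - ReadWriteOnce
  # a dedicated storage class per folder keeps each claim bound to its own volume
  storageClassName: zookeeper-data-local-storage
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /var/lib/zookeeper/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: zookeeper-data-local-storage
  resources:
    requests:
      storage: 5Gi

The deployment then mounts the two claims separately, replacing the single /var/lib/zookeeper mount from the original manifest:

          volumeMounts:
            - mountPath: /var/lib/zookeeper/data
              name: zookeeper-data
            - mountPath: /var/lib/zookeeper/log
              name: zookeeper-log
      volumes:
        - name: zookeeper-data
          persistentVolumeClaim:
            claimName: zookeeper-data-pvc
        - name: zookeeper-log
          persistentVolumeClaim:
            claimName: zookeeper-log-pvc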

The full setup using persistent volumes and persistent volume claims is available in this repo, with further instructions in case you want to run it: https://github.com/mmaia/kafka-local-kubernetes
