I have a Kubernetes cluster with one ZooKeeper pod and three Kafka broker pods.
The deployment descriptor for ZooKeeper is:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zookeeper
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: zookeeper
    spec:
      containers:
      - env:
        - name: ZOOKEEPER_ID
          value: "1"
        - name: ZOOKEEPER_SERVER_1
          value: zookeeper
        - name: ZOOKEEPER_CLIENT_PORT
          value: "2181"
        - name: ZOOKEEPER_TICK_TIME
          value: "2000"
        name: zookeeper
        image: confluentinc/cp-zookeeper:5.0.1
        ports:
        - containerPort: 2181
        volumeMounts:
        - mountPath: /var/lib/zookeeper/
          name: zookeeper-data
      nodeSelector:
        noderole: kafka1
      restartPolicy: Always
      volumes:
      - name: zookeeper-data
        persistentVolumeClaim:
          claimName: zookeeper-volume-claims
For Kafka, the brokers look like this (each broker has its own broker name, listeners and persistent volume claim):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka1
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: kafka1
    spec:
      containers:
      - env:
        - name: KAFKA_AUTO_CREATE_TOPICS_ENABLE
          value: "true"
        - name: KAFKA_ADVERTISED_LISTENERS
          value: "PLAINTEXT://<ip>:9092"
        - name: KAFKA_LISTENERS
          value: "PLAINTEXT://0.0.0.0:9092"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: <ip>:2181
        - name: KAFKA_BROKER_ID
          value: "1"
        name: kafka1
        image: confluentinc/cp-enterprise-kafka:5.0.1
        ports:
        - containerPort: 9092
        volumeMounts:
        - mountPath: /var/lib/kafka
          name: kafka1-data
      nodeSelector:
        noderole: kafka2
      restartPolicy: Always
      volumes:
      - name: kafka1-data
        persistentVolumeClaim:
          claimName: kafka1-volume-claim
The cluster is up and running, and I can create topics and produce and consume messages.
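For reference, this is roughly how I do it, using the Kafka CLI tools that ship with the Confluent images, run from inside one of the broker pods (the replication factor is just an example; <ip> stands for the same addresses used in the manifests above):

kubectl exec -it kafka1-<pod-suffix> bash
kafka-topics --create --zookeeper <ip>:2181 --replication-factor 2 --partitions 2 --topic test1
kafka-console-producer --broker-list <ip>:9092 --topic test1
kafka-console-consumer --bootstrap-server <ip>:9092 --topic test1 --from-beginning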
The file log.1 exists in /var/lib/zookeeper/log/version-2:
-rw-r--r-- 1 root root 67108880 Jan 18 11:34 log.1
If I exec into one of the brokers:
kubectl exec -it kafka3-97454b745-wddpv bash
I can see the two partitions of the topic:
drwxr-xr-x 2 root root 4096 Jan 21 10:34 test1-1
drwxr-xr-x 2 root root 4096 Jan 21 10:35 test1-0
The problem appears when I restart the VMs on which ZooKeeper and the brokers are allocated: one for ZooKeeper and one for each broker (the three VMs that make up my Kubernetes cluster).
After the restart, the topics are gone from every broker:
root@kafka3-97454b745-wddpv:/var/lib/kafka/data# ls -lrt
total 24
-rw-r--r-- 1 root root 0 Jan 21 10:56 cleaner-offset-checkpoint
-rw-r--r-- 1 root root 54 Jan 21 10:56 meta.properties
drwxr-xr-x 2 root root 4096 Jan 21 10:56 __confluent.support.metrics-0
drwxr-xr-x 2 root root 4096 Jan 21 10:56 _schemas-0
-rw-r--r-- 1 root root 49 Jan 21 11:10 recovery-point-offset-checkpoint
-rw-r--r-- 1 root root 4 Jan 21 11:10 log-start-offset-checkpoint
-rw-r--r-- 1 root root 49 Jan 21 11:11 replication-offset-checkpoint
And in ZooKeeper:
root@zookeeper-84bb68d45b-cklwm:/var/lib/zookeeper/log/version-2# ls -lrt
total 16
-rw-r--r-- 1 root root 67108880 Jan 21 10:56 log.1
If I list the topics, they are gone.
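For completeness, this is how I list them, from inside a broker pod and against the ZooKeeper address used in the manifests:

kafka-topics --zookeeper <ip>:2181 --list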
The Kubernetes cluster is running on Azure.
I don't think there is a problem with the persistent volumes themselves, because when I manually create a file on one of them, it is still there after the restart. I think it is related to my Kafka configuration. As you can see, I am using the Confluent Docker images.
Any help would be appreciated.
It was just a misconfiguration of the mount paths. The paths must point to the data and transaction log folders, not to their parent folder.
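A minimal sketch of the corrected mounts, assuming the default directories of the Confluent 5.0.1 images (/var/lib/zookeeper/data and /var/lib/zookeeper/log for cp-zookeeper, /var/lib/kafka/data for cp-kafka). The zookeeper-log volume and its claim are an extra PVC that does not exist in the manifests above.

For the ZooKeeper container:

        volumeMounts:
        - mountPath: /var/lib/zookeeper/data
          name: zookeeper-data
        - mountPath: /var/lib/zookeeper/log
          name: zookeeper-log

and in the pod spec a second volume backing the transaction log:

      volumes:
      - name: zookeeper-data
        persistentVolumeClaim:
          claimName: zookeeper-volume-claims
      - name: zookeeper-log
        persistentVolumeClaim:
          claimName: zookeeper-log-volume-claim  # hypothetical extra claim for the transaction log

For each Kafka broker (kafka1 shown):

        volumeMounts:
        - mountPath: /var/lib/kafka/data
          name: kafka1-data

With the mounts pointing at the data and transaction log directories themselves, the topic data survives a restart of the VMs.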