I am trying to install VerneMQ on a Kubernetes cluster on Oracle OCI using the Helm chart.
The Kubernetes infrastructure appears to be up and running, and I can deploy my custom microservices without any problem.
I followed the instructions at https://github.com/vernemq/docker-vernemq
The steps were:
- from the helm/vernemq directory:
helm install --name="broker" ./
The output is:
NAME: broker
LAST DEPLOYED: Fri Mar 1 11:07:37 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/RoleBinding
NAME AGE
broker-vernemq 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
broker-vernemq-headless ClusterIP None <none> 4369/TCP 1s
broker-vernemq ClusterIP 10.96.120.32 <none> 1883/TCP 1s
==> v1/StatefulSet
NAME DESIRED CURRENT AGE
broker-vernemq 3 1 1s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
broker-vernemq-0 0/1 ContainerCreating 0 1s
==> v1/ServiceAccount
NAME SECRETS AGE
broker-vernemq 1 1s
==> v1/Role
NAME AGE
broker-vernemq 1s
NOTES:
1. Check your VerneMQ cluster status:
kubectl exec --namespace default broker-vernemq-0 /usr/sbin/vmq-admin cluster show
2. Get VerneMQ MQTT port
echo "Subscribe/publish MQTT messages there: 127.0.0.1:1883"
kubectl port-forward svc/broker-vernemq 1883:1883
But when I check with
kubectl exec --namespace default broker-vernemq-0 vmq-admin cluster show
I get
Node 'VerneMQ@broker-vernemq-0..default.svc.cluster.local' not responding to pings.
command terminated with exit code 1
I think there is something wrong with the subdomain (the double dot with nothing in between).
The output of this command
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c kubedns
has this as its last log line:
I0301 10:07:38.366826 1 dns.go:552] Could not find endpoints for service "broker-vernemq-headless" in namespace "default". DNS records will be created once endpoints show up.
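That log suggests the headless Service has no endpoints yet. A quick sanity check (just a sketch of diagnostic commands, using the pod, service, and namespace names from the Helm output above) would be to look at the endpoints and try resolving the pod's expected FQDN:

kubectl get endpoints broker-vernemq-headless --namespace default
# expected FQDN of the first pod behind the headless service:
#   broker-vernemq-0.broker-vernemq-headless.default.svc.cluster.local
kubectl run -it --rm dns-test --image=busybox --restart=Never -- \
  nslookup broker-vernemq-0.broker-vernemq-headless.default.svc.cluster.local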
I have also tried this custom YAML:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: default
  name: vernemq
  labels:
    app: vernemq
spec:
  serviceName: vernemq
  replicas: 3
  selector:
    matchLabels:
      app: vernemq
  template:
    metadata:
      labels:
        app: vernemq
    spec:
      containers:
      - name: vernemq
        image: erlio/docker-vernemq:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 1883
          name: mqtt
        - containerPort: 8883
          name: mqtts
        - containerPort: 4369
          name: epmd
        env:
        - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
          value: "off"
        - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
          value: "1"
        - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
          value: "vernemq"
        - name: DOCKER_VERNEMQ_VMQ_PASSWD__PASSWORD_FILE
          value: "/etc/vernemq-passwd/vmq.passwd"
        volumeMounts:
        - name: vernemq-passwd
          mountPath: /etc/vernemq-passwd
          readOnly: true
      volumes:
      - name: vernemq-passwd
        secret:
          secretName: vernemq-passwd
---
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  labels:
    app: vernemq
spec:
  clusterIP: None
  selector:
    app: vernemq
  ports:
  - port: 4369
    name: epmd
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt
  labels:
    app: mqtt
spec:
  type: ClusterIP
  selector:
    app: vernemq
  ports:
  - port: 1883
    name: mqtt
---
apiVersion: v1
kind: Service
metadata:
  name: mqtts
  labels:
    app: mqtts
spec:
  type: LoadBalancer
  selector:
    app: vernemq
  ports:
  - port: 8883
    name: mqtts
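Note that the manifest above mounts a Secret named vernemq-passwd, which has to exist before the StatefulSet can start. A minimal sketch of creating it, assuming a password file vmq.passwd generated beforehand with VerneMQ's vmq-passwd tool (file name and key are assumptions, not part of the original setup):

kubectl create secret generic vernemq-passwd --from-file=vmq.passwd=./vmq.passwd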
Any suggestions?
Thanks a lot
Jack
This seems to be a bug in the Docker image. The suggestion on GitHub is to build your own image, or to use a later VerneMQ image (after 1.6.x) once the fix is in.
The suggestion is mentioned here: https://github.com/vernemq/docker-vernemq/pull/92
Pull request with a possible fix: https://github.com/vernemq/docker-vernemq/pull/97
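In the StatefulSet this would mean pinning the image to a fixed tag instead of latest; a sketch only, since the exact tag is a placeholder (use whichever release after 1.6.x actually contains the fix):

containers:
- name: vernemq
  # placeholder tag: any release after 1.6.x that includes the fix
  image: erlio/docker-vernemq:1.7.0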
EDIT:
I just got this working without Helm, using
kubectl create -f ./cluster.yaml
with the following cluster.yaml:
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: vernemq
  namespace: default
spec:
  serviceName: vernemq
  replicas: 3
  selector:
    matchLabels:
      app: vernemq
  template:
    metadata:
      labels:
        app: vernemq
    spec:
      serviceAccountName: vernemq
      containers:
      - name: vernemq
        image: erlio/docker-vernemq:latest
        ports:
        - containerPort: 1883
          name: mqttlb
        - containerPort: 1883
          name: mqtt
        - containerPort: 4369
          name: epmd
        - containerPort: 44053
          name: vmq
        - containerPort: 9100
        - containerPort: 9101
        - containerPort: 9102
        - containerPort: 9103
        - containerPort: 9104
        - containerPort: 9105
        - containerPort: 9106
        - containerPort: 9107
        - containerPort: 9108
        - containerPort: 9109
        env:
        - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
          value: "1"
        - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
          value: "vernemq"
        - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM
          value: "9100"
        - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM
          value: "9109"
        - name: DOCKER_VERNEMQ_KUBERNETES_INSECURE
          value: "1"
        # only allow anonymous access for development / testing purposes!
        # - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
        #   value: "on"
---
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  labels:
    app: vernemq
spec:
  clusterIP: None
  selector:
    app: vernemq
  ports:
  - port: 4369
    name: empd
  - port: 44053
    name: vmq
---
apiVersion: v1
kind: Service
metadata:
  name: mqttlb
  labels:
    app: mqttlb
spec:
  type: LoadBalancer
  selector:
    app: vernemq
  ports:
  - port: 1883
    name: mqttlb
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt
  labels:
    app: mqtt
spec:
  type: NodePort
  selector:
    app: vernemq
  ports:
  - port: 1883
    name: mqtt
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vernemq
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: endpoint-reader
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["endpoints", "deployments", "replicasets", "pods"]
  verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: endpoint-reader
subjects:
- kind: ServiceAccount
  name: vernemq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: endpoint-reader
It takes a few seconds for the pods to become ready.
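To verify that the cluster actually forms, the status command from the chart's NOTES can be run against this StatefulSet as well (a sketch; pod names vernemq-0 .. vernemq-2 follow from the manifest above):

kubectl get pods -l app=vernemq --namespace default
kubectl exec --namespace default vernemq-0 -- vmq-admin cluster show
# all three VerneMQ nodes should be listed as running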
Try setting the environment variables DOCKER_VERNEMQ_KUBERNETES_APP_LABEL and DOCKER_VERNEMQ_KUBERNETES_NAMESPACE. That worked for me.
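For reference, a minimal sketch of those two variables in the container's env block, matching the manifests above (the namespace is read via the downward API):

env:
- name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
  value: "vernemq"
- name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace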
The default selector name is vernemq. You can override it with the environment variable DOCKER_VERNEMQ_KUBERNETES_LABEL_SELECTOR, passing the value as app=name:
DOCKER_VERNEMQ_KUBERNETES_LABEL_SELECTOR="app={Name}"
For example:
DOCKER_VERNEMQ_KUBERNETES_LABEL_SELECTOR="app=demo"
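In a StatefulSet env block that would look roughly like this (app=demo is just the example value from above):

env:
- name: DOCKER_VERNEMQ_KUBERNETES_LABEL_SELECTOR
  value: "app=demo"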
Further reading:
the VerneMQ Dockerfile