mongo-express/mongodb YAML files work on minikube but not on my 3-node Kubernetes cluster



I have a few YAML files for mongo-express/mongodb, shown below:

apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
data:
  mongo-root-username: dXNlcm5hbWU=
  mongo-root-password: cGFzc3dvcmQ=
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  labels:
    app: mongodb
spec:
  ports:
    - port: 27017
      targetPort: 27017
      protocol: TCP
  selector:
    app: mongodb
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express
  labels:
    app: mongo-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      containers:
        - name: mongo-express
          image: mongo-express
          ports:
            - containerPort: 8081
          env:
            - name: ME_CONFIG_MONGODB_ADMINUSERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-username
            - name: ME_CONFIG_MONGODB_ADMINPASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-password
            - name: ME_CONFIG_MONGODB_SERVER
              value: mongodb-service
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
  labels:
    app: mongo-express
spec:
  ports:
    - port: 8081
      targetPort: 8081
      nodePort: 30000
      protocol: TCP
  type: LoadBalancer
  selector:
    app: mongo-express
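
As an aside, the values under `data` in the Secret are plain base64 encoding, not encryption; the two strings here decode to `username` and `password`. A quick way to confirm what the pods will actually receive as credentials:

```shell
# Decode the Secret's values to confirm the credentials the pods will see
echo 'dXNlcm5hbWU=' | base64 -d   # prints: username
echo 'cGFzc3dvcmQ=' | base64 -d   # prints: password
```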

When I run `minikube service mongo-express-service`, I can reach mongo-express after applying the YAML files above on my local minikube cluster. I can also apply them to my 3-node Kubernetes cluster, but there the mongo-express pod does not seem to be able to connect to mongodb-service.
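
One difference worth noting before digging into DNS: on a bare-metal cluster a `LoadBalancer` service never gets an external IP (it stays `<pending>` without a cloud provider or something like MetalLB), so mongo-express has to be reached through the NodePort instead. A sketch, assuming the node IPs shown in the `kubectl get nodes` output below:

```
# No cloud load balancer here, so use any node IP plus the nodePort (30000)
curl -s http://192.168.0.21:30000/
```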

Here is my initial troubleshooting.

$ kubectl get nodes -o wide
NAME        STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
master-20   Ready    master   35d   v1.19.2   192.168.0.20   <none>        Ubuntu 16.04.7 LTS   4.4.0-186-generic   docker://19.3.13
node1       Ready    <none>   35d   v1.19.2   192.168.0.21   <none>        Ubuntu 16.04.7 LTS   4.4.0-186-generic   docker://19.3.13
node2       Ready    <none>   35d   v1.19.2   192.168.0.22   <none>        Ubuntu 16.04.7 LTS   4.4.0-186-generic   docker://19.3.13
$ kubectl get all -o wide
NAME                                     READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
pod/mongo-express-749445c6c9-wlnx8       1/1     Running   0          18s   10.244.2.23   node2   <none>           <none>
pod/mongodb-deployment-8f6675bc5-w9wks   1/1     Running   0          22s   10.244.1.20   node1   <none>           <none>
NAME                            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE   SELECTOR
service/kubernetes              ClusterIP      10.96.0.1      <none>        443/TCP          35d   <none>
service/mongo-express-service   LoadBalancer   10.108.20.77   <pending>     8081:30000/TCP   18s   app=mongo-express
service/mongodb-service         ClusterIP      10.98.48.206   <none>        27017/TCP        22s   app=mongodb
NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS      IMAGES          SELECTOR
deployment.apps/mongo-express        1/1     1            1           18s   mongo-express   mongo-express   app=mongo-express
deployment.apps/mongodb-deployment   1/1     1            1           22s   mongodb         mongo           app=mongodb
NAME                                           DESIRED   CURRENT   READY   AGE   CONTAINERS      IMAGES          SELECTOR
replicaset.apps/mongo-express-749445c6c9       1         1         1       18s   mongo-express   mongo-express   app=mongo-express,pod-template-hash=749445c6c9
replicaset.apps/mongodb-deployment-8f6675bc5   1         1         1       22s   mongodb         mongo           app=mongodb,pod-template-hash=8f6675bc5

$ kubectl logs mongo-express-749445c6c9-wlnx8
Waiting for mongodb-service:27017...
/docker-entrypoint.sh: line 14: mongodb-service: Try again
/docker-entrypoint.sh: line 14: /dev/tcp/mongodb-service/27017: Invalid argument
Sun Nov  8 05:29:40 UTC 2020 retrying to connect to mongodb-service:27017 (2/5)
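
The `mongodb-service: Try again` and `/dev/tcp/mongodb-service/27017: Invalid argument` lines come from the bash `/dev/tcp` probe in the image's entrypoint: the TCP connection is never attempted because the service name itself cannot be resolved. A quick in-pod check (assuming `getent` is available in the mongo-express image) should print the service's ClusterIP, 10.98.48.206 here, if DNS is working:

```
kubectl exec -it mongo-express-749445c6c9-wlnx8 -- getent hosts mongodb-service
```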

$ kubectl logs mongodb-deployment-8f6675bc5-w9wks
about to fork child process, waiting until server is ready for connections.
forked process: 28
...
MongoDB init process complete; ready for start up.
{"t":{"$date":"2020-11-08T05:28:54.631+00:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2020-11-08T05:28:54.634+00:00"},"s":"W",  "c":"ASIO",     "id":22601,   "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2020-11-08T05:28:54.634+00:00"},"s":"I",  "c":"NETWORK",  "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2020-11-08T05:28:54.636+00:00"},"s":"I",  "c":"STORAGE",  "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"mongodb-deployment-8f6675bc5-w9wks"}}
{"t":{"$date":"2020-11-08T05:28:54.636+00:00"},"s":"I",  "c":"CONTROL",  "id":23403,   "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.1","gitVersion":"ad91a93a5a31e175f5cbf8c69561e788bbc55ce1","openSSLVersion":"OpenSSL 1.1.1  11 Sep 2018","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu1804","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2020-11-08T05:28:54.636+00:00"},"s":"I",  "c":"CONTROL",  "id":51765,   "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"18.04"}}}
{"t":{"$date":"2020-11-08T05:28:54.636+00:00"},"s":"I",  "c":"CONTROL",  "id":21951,   "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"},"security":{"authorization":"enabled"}}}}
{"t":{"$date":"2020-11-08T05:28:54.638+00:00"},"s":"I",  "c":"STORAGE",  "id":22270,   "ctx":"initandlisten","msg":"Storage engine to use detected by data files","attr":{"dbpath":"/data/db","storageEngine":"wiredTiger"}}
{"t":{"$date":"2020-11-08T05:28:54.639+00:00"},"s":"I",  "c":"STORAGE",  "id":22297,   "ctx":"initandlisten","msg":"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem","tags":["startupWarnings"]}
{"t":{"$date":"2020-11-08T05:28:54.639+00:00"},"s":"I",  "c":"STORAGE",  "id":22315,   "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=479M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}}
{"t":{"$date":"2020-11-08T05:28:56.498+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1604813336:498796][1:0x7f7ffb631a80], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 1 through 2"}}
{"t":{"$date":"2020-11-08T05:28:56.889+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1604813336:889036][1:0x7f7ffb631a80], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 2 through 2"}}
{"t":{"$date":"2020-11-08T05:28:57.525+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1604813337:525554][1:0x7f7ffb631a80], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Main recovery loop: starting at 1/25728 to 2/256"}}
{"t":{"$date":"2020-11-08T05:28:57.682+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1604813337:682506][1:0x7f7ffb631a80], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 1 through 2"}}
{"t":{"$date":"2020-11-08T05:28:57.791+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1604813337:791351][1:0x7f7ffb631a80], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 2 through 2"}}
{"t":{"$date":"2020-11-08T05:28:57.880+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1604813337:880334][1:0x7f7ffb631a80], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)"}}
{"t":{"$date":"2020-11-08T05:28:57.880+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1604813337:880542][1:0x7f7ffb631a80], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0, 0)"}}
{"t":{"$date":"2020-11-08T05:28:57.892+00:00"},"s":"I",  "c":"STORAGE",  "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":3253}}
{"t":{"$date":"2020-11-08T05:28:57.893+00:00"},"s":"I",  "c":"RECOVERY", "id":23987,   "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}}
{"t":{"$date":"2020-11-08T05:28:57.913+00:00"},"s":"I",  "c":"STORAGE",  "id":22262,   "ctx":"initandlisten","msg":"Timestamp monitor starting"}
{"t":{"$date":"2020-11-08T05:28:57.940+00:00"},"s":"I",  "c":"STORAGE",  "id":20536,   "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"}
{"t":{"$date":"2020-11-08T05:28:57.950+00:00"},"s":"I",  "c":"FTDC",     "id":20625,   "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"/data/db/diagnostic.data"}}
{"t":{"$date":"2020-11-08T05:28:57.958+00:00"},"s":"I",  "c":"NETWORK",  "id":23015,   "ctx":"listener","msg":"Listening on","attr":{"address":"/tmp/mongodb-27017.sock"}}
{"t":{"$date":"2020-11-08T05:28:57.958+00:00"},"s":"I",  "c":"NETWORK",  "id":23015,   "ctx":"listener","msg":"Listening on","attr":{"address":"0.0.0.0"}}
{"t":{"$date":"2020-11-08T05:28:57.958+00:00"},"s":"I",  "c":"NETWORK",  "id":23016,   "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}}

As suggested in the comments, problems like this usually point to an issue with CoreDNS and DNS resolution. It is worth mentioning that the Kubernetes documentation walks through several useful DNS troubleshooting steps:

  • First, check the local DNS configuration

    kubectl exec -ti dnsutils -- cat /etc/resolv.conf
    
  • Check whether the DNS pods are running

    kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
    
  • Check for errors in the DNS pods

    kubectl logs --namespace=kube-system -l k8s-app=kube-dns
    
  • Are the DNS endpoints exposed?

    kubectl get endpoints kube-dns --namespace=kube-system
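
The steps above assume a test pod with DNS tools is already running; the same documentation page provides a `dnsutils` manifest for this. A sketch of the full check against this cluster, first resolving a known-good name and then the one mongo-express is failing on:

```
kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl exec -ti dnsutils -- nslookup kubernetes.default
kubectl exec -ti dnsutils -- nslookup mongodb-service.default
```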
    

To sum up: the OP confirmed that the problem was related to CoreDNS, and changing the nameserver in /etc/resolv.conf resolved the issue.
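
For completeness, a hedged sketch of that kind of fix (the nameserver IP is environment-specific; 8.8.8.8 below is only a placeholder): point each node's /etc/resolv.conf at a resolver that actually answers, then restart CoreDNS so it re-reads its upstream configuration:

```
# On each node: replace the broken nameserver entry (placeholder IP shown)
echo 'nameserver 8.8.8.8' | sudo tee /etc/resolv.conf

# Then restart CoreDNS so it picks up the change
kubectl -n kube-system rollout restart deployment coredns
```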
