I am trying to run MongoDB as a StatefulSet in a minikube Kubernetes cluster. I have 3 replicas, but I am hitting the following problem: one replica (mongo-0) comes up and runs without any issue, while the second replica (mongo-1) stays in the Pending state forever. I described the pod and got this output:
kubectl describe pod mongo-1 -n ng-mongo
. . .
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 17m (x70 over 6h9m) default-scheduler 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
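For reference, the volume state the scheduler is acting on can be inspected directly; these commands assume the manifests were applied in the ng-mongo namespace (they need a running cluster, so output will vary):

```shell
# List PVs (cluster-scoped) and PVCs in the namespace to see
# which claims are Bound and which are still Pending.
kubectl get pv
kubectl get pvc -n ng-mongo

# Show the scheduling/binding events for the pending claim.
kubectl describe pvc -n ng-mongo
```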
According to the error above, the scheduler cannot find an available persistent volume to bind, but there already is one. Here are my YAML definitions:
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  labels:
    name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
  namespace: ng-mongo
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
  namespace: ng-mongo
spec:
  capacity:
    storage: 10Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /tmp
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-claim
  namespace: ng-mongo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: local-storage
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: ng-mongo
  labels:
    name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
  namespace: ng-mongo
spec:
  serviceName: "mongo"
  replicas: 3
  selector:
    matchLabels:
      role: mongo
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - "--bind_ip"
            - "0.0.0.0"
            - "--replSet"
            - rs0
          resources:
            requests:
              cpu: 0.2
              memory: 200Mi
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: localvolume
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo,environment=test"
      # volumes:
      #   - name: localvolume
      #     persistentVolumeClaim:
      #       claimName: local-claim
  volumeClaimTemplates:
    - metadata:
        name: localvolume
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "local-storage"
        resources:
          requests:
            storage: 2Gi
Can anyone help me find the problem here?
You are using node affinity when creating the PV, and it needs to be configured correctly: the affinity rule tells Kubernetes which node the disk is attached to, so the PV can only be consumed from that particular node. When your workload is scheduled, if its pod does not land on that node, the pod cannot get the PV or PVC. If you add a node affinity rule to the PV, add the same affinity to the workload as well, so that the PVC and the pod are both scheduled onto the same node.
Steps to resolve:

Make sure the workload and the PVC end up on the same node: add a matching node affinity to the StatefulSet's pod template, so its pods are scheduled onto the node the PV is tied to.
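A minimal sketch of that first option, assuming the node is really named minikube as in your PV (this fragment goes under spec.template.spec in the StatefulSet):

```yaml
# Pin the mongo pods to the same node the PV's nodeAffinity points at.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
```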
Or

Remove the node affinity rule from the PV, and create a new PV and PVC and use those instead.

Note: in the node affinity you have referenced the node as minikube; verify the actual node name with kubectl get nodes and change it if it differs.
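A sketch of the second option. One caveat: the API requires a local-type PV to carry nodeAffinity, so a PV without the affinity rule has to use a different volume source; on a single-node minikube cluster a hostPath PV is a common substitute. The PV name and path below are only illustrative:

```yaml
# Example replacement PV without a nodeAffinity rule (hostPath instead of local).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1        # illustrative name for the additional PV
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  hostPath:
    path: /tmp/mongo-1    # example directory on the minikube node
```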