No persistent volumes found to bind when trying to allocate local storage on a Pi



It works on my Mac's k8s instance, but not on my Raspberry Pi instance. Essentially, I'm trying to stand up a k8s deployment of Pi-hole so I can monitor it and keep it containerized rather than running it outside the scope of my applications. Ideally, I'm trying to keep everything in containers to stay clean.

I'm running on a 2-node Raspberry Pi 4 cluster, 4GB RAM each.

When I apply the file below on my Mac, it builds correctly, but on the Pi named master-pi it fails:

Events:
Type     Reason            Age   From               Message
----     ------            ----  ----               -------
Warning  FailedScheduling  44m   default-scheduler  0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.

The YAML I'm applying looks straightforward:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pihole-local-etc-volume
  labels:
    directory: etc
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local
  local:
    path: /home/pi/Documents/pihole/etc # Location where it will live.
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - master-pi # docker-desktop # Host where it lives.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pihole-local-etc-claim
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi # Possibly update to 2Gi later.
  selector:
    matchLabels:
      directory: etc
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pihole-local-dnsmasq-volume
  labels:
    directory: dnsmasq.d
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local
  local:
    path: /home/pi/Documents/pihole/dnsmasq # Location where it will live.
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - master-pi # docker-desktop # Host where it lives.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pihole-local-dnsmasq-claim
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  selector:
    matchLabels:
      directory: dnsmasq.d
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pihole
  labels:
    app: pihole
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pihole
  template:
    metadata:
      labels:
        app: pihole
      name: pihole
    spec:
      containers:
        - name: pihole
          image: pihole/pihole:latest
          imagePullPolicy: Always
          env:
            - name: TZ
              value: "America/New_York"
            - name: WEBPASSWORD
              value: "secret"
          volumeMounts:
            - name: pihole-local-etc-volume
              mountPath: "/etc/pihole"
            - name: pihole-local-dnsmasq-volume
              mountPath: "/etc/dnsmasq.d"
      volumes:
        - name: pihole-local-etc-volume
          persistentVolumeClaim:
            claimName: pihole-local-etc-claim
        - name: pihole-local-dnsmasq-volume
          persistentVolumeClaim:
            claimName: pihole-local-dnsmasq-claim
---
apiVersion: v1
kind: Service
metadata:
  name: pihole
spec:
  selector:
    app: pihole
  ports:
    - port: 8000
      targetPort: 80
      name: pihole-admin
    - port: 53
      targetPort: 53
      protocol: TCP
      name: dns-tcp
    - port: 53
      targetPort: 53
      protocol: UDP
      name: dns-udp
  externalIPs:
    - 192.168.10.75 # Static IP I need to assign for the network.

Other notes: I made sure I created the folders beforehand, and they are all chmod 777. df yields:

pi@master-pi:~/Documents/pihole$ df
Filesystem     1K-blocks     Used Available Use% Mounted on
tmpfs             383100     5772    377328   2% /run
/dev/mmcblk0p2  30450144 14283040  14832268  50% /
tmpfs            1915492        0   1915492   0% /dev/shm
tmpfs               5120        4      5116   1% /run/lock
tmpfs               4096        0      4096   0% /sys/fs/cgroup
/dev/mmcblk0p1    258095   147696    110399  58% /boot/firmware
tmpfs             383096      116    382980   1% /run/user/1000

So while the request for this location (/home/pi/Documents/etc) is only 1Gi, the disk looks about half full, so roughly 15G is available.

I can provide more information, but I'm simply confused as to why this fails on the Pi.

There were two things to learn here.

  1. Master nodes do not get pods scheduled onto them by default. They have enough to do just orchestrating. That is, in a cluster of 2 or more nodes, one is the master and the rest are workers, and only the workers take regular workloads.
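As a sketch of one workaround (common on tiny home clusters where you do want workloads on the master): either remove the taint with `kubectl taint nodes master-pi node-role.kubernetes.io/master-`, or add a toleration to the pod template. The taint key below matches the one reported in the scheduler events above; where exactly you want this depends on your own manifests.

```yaml
# Sketch: allow the pod to schedule onto a master tainted with
# node-role.kubernetes.io/master:NoSchedule. This goes under the
# Deployment's pod template spec (spec.template.spec).
tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
```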

  2. When a volume is assigned a path like /hello/world, that path is not auto-created on the host. This is actually quite annoying, because if you have N pods, every node needs that path in case the pod gets scheduled somewhere else. The master decides where things go, so if it hands the pod to a node that can't satisfy the path, you get a failure back. So it's best to create the path on all nodes.

The point is that you would expect the cluster (master or otherwise) to auto-generate the path on the node, but that is not true. One would think that since it has sudo it should be able to say "mount this here", but it doesn't. I had to manually configure each node to have the path, which invites configuration errors.

If I need to spin up more nodes, I need to make sure they all get configured accordingly, e.g. by adding this specific path. You need to add this to your own setup routine.
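That setup routine can be sketched as a tiny script to run on each node. `PIHOLE_BASE` is an assumed environment variable for illustration; on the Pi it would be set to /home/pi/Documents/pihole to match the PV manifests above, and here it defaults to the current directory so the sketch is harmless to try.

```shell
#!/bin/sh
# Create the local-volume directories on a node so the PV can bind there.
# PIHOLE_BASE is hypothetical; set it to /home/pi/Documents/pihole on the Pi.
BASE="${PIHOLE_BASE:-$PWD/pihole}"
mkdir -p "$BASE/etc" "$BASE/dnsmasq"
# 777 is permissive, matching the troubleshooting setup above; tighten later.
chmod 777 "$BASE/etc" "$BASE/dnsmasq"
echo "created $BASE/etc and $BASE/dnsmasq"
```

Run it once per node (e.g. over ssh) before applying the PV manifests.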

You can read more about hostPath volumes here: https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume

The site says hostPath is suitable for single-node clusters, but when dealing with production or >1 node clusters, you should use NFS or some other storage mechanism.
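A minimal NFS-backed PV might look like the sketch below. The server address and export path are placeholders (you would need an NFS server on the LAN exporting that directory); the point is that, unlike a local/hostPath volume, NFS is reachable from any node, so the pod can be scheduled anywhere without pre-creating directories on each node.

```yaml
# Sketch: NFS-backed PersistentVolume. server/path are hypothetical values.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pihole-nfs-etc-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany # NFS allows multiple nodes to mount the share
  nfs:
    server: 192.168.10.10 # hypothetical NFS server on the LAN
    path: /exports/pihole/etc
```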

An added benefit is using Storage Classes for automatic provisioning, which is what I personally wanted in the first place: https://kubernetes.io/blog/2016/10/dynamic-provisioning-and-storage-in-kubernetes/

It discusses how to define a Storage Class and then request a storage size of 30Gi, to be used together with a claim. It's late now, but I'll try to write a similar example for the underlying problem.
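The shape from that post is roughly: define a StorageClass backed by a dynamic provisioner, then a claim referencing it; when no matching PV exists, the provisioner creates one on demand. The provisioner name below is a placeholder — on bare-metal Pis there is no cloud provider to create disks, so you would deploy something like an external NFS provisioner yourself (note that kubernetes.io/no-provisioner, used earlier, explicitly does not provision anything).

```yaml
# Sketch: dynamic provisioning. The provisioner name is hypothetical and
# must match whatever external provisioner you actually deploy.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed
provisioner: example.com/external-nfs # placeholder provisioner name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: big-claim
spec:
  storageClassName: managed
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi # the size used in the linked blog post
```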

Latest update