Mounting a persistent-memory-backed local persistent volume fails in Kubernetes 1.20



I am trying to let a k8s pod use PMEM without privileged mode. My approach is to create a local PV on an fsdax directory, create a PVC for it, and have my pod use that PVC. However, I always get the error MountVolume.NewMounter initialization failed ... : path does not exist.
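For context, the fsdax directory backing the PV was prepared roughly like this (a sketch of the usual PMEM provisioning steps; the region/device names are assumptions, not taken from the question):

```shell
# Create an fsdax-mode namespace on the persistent-memory region
ndctl create-namespace --mode=fsdax --region=region0

# Format with a 4 KiB block size and mount with DAX enabled
mkfs.ext4 -b 4096 /dev/pmem0
mkdir -p /mnt/pmem0
mount -o dax /dev/pmem0 /mnt/pmem0

# Directory that backs the local PV
mkdir -p /mnt/pmem0/vol1
```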

Here are my yaml files and the PMEM status:

StorageClass yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

PV yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pmem-pv-volume
spec:
  capacity:
    storage: 50Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/pmem0/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: disktype
          operator: In
          values:
          - pmem

PVC yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pmem-pv-claim
spec:
  storageClassName: local-storage
  volumeName: pmem-pv-volume
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Pod yaml:

apiVersion: v1
kind: Pod
metadata:
  name: daemon
  labels:
    env: test
spec:
  hostNetwork: true
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - pmem
  containers:
  - name: daemon-container
    command: ["/usr/bin/bash", "-c", "sleep 3600"]
    image: mm:v2
    imagePullPolicy: Never
    volumeMounts:
    - mountPath: /mnt/pmem
      name: pmem-pv-storage
    - mountPath: /tmp
      name: tmp
    - mountPath: /var/log/memverge
      name: log
    - mountPath: /var/memverge/data
      name: data
  volumes:
  - name: pmem-pv-storage
    persistentVolumeClaim:
      claimName: pmem-pv-claim
  - name: tmp
    hostPath:
      path: /tmp
  - name: log
    hostPath:
      path: /var/log/memverge
  - name: data
    hostPath:
      path: /var/memverge/data

Some status and k8s output:

$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0 745.2G  0 disk
├─sda1        8:1    0     1G  0 part /boot
└─sda2        8:2    0   740G  0 part
  ├─cl-root 253:0    0   188G  0 lvm  /
  ├─cl-swap 253:1    0    32G  0 lvm  [SWAP]
  └─cl-home 253:2    0   520G  0 lvm  /home
sr0          11:0    1  1024M  0 rom
nvme0n1     259:0    0     7T  0 disk
└─nvme0n1p1 259:1    0     7T  0 part /mnt/nvme
pmem0       259:2    0 100.4G  0 disk /mnt/pmem0
$ kubectl get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS    REASON   AGE
pmem-pv-volume   50Gi       RWO            Delete           Bound    default/pmem-pv-claim   local-storage            20h
$ kubectl get pvc
NAME            STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS    AGE
pmem-pv-claim   Bound    pmem-pv-volume   50Gi       RWO            local-storage   20h
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY   STATUS              RESTARTS   AGE
default       daemon                             0/1     ContainerCreating   0          20h
kube-system   coredns-74ff55c5b-5crgg            1/1     Running             0          20h
kube-system   etcd-minikube                      1/1     Running             0          20h
kube-system   kube-apiserver-minikube            1/1     Running             0          20h
kube-system   kube-controller-manager-minikube   1/1     Running             0          20h
kube-system   kube-proxy-2m7p6                   1/1     Running             0          20h
kube-system   kube-scheduler-minikube            1/1     Running             0          20h
kube-system   storage-provisioner                1/1     Running             0          20h
$ kubectl get events
LAST SEEN   TYPE      REASON        OBJECT       MESSAGE
108s        Warning   FailedMount   pod/daemon   MountVolume.NewMounter initialization failed for volume "pmem-pv-volume" : path "/mnt/pmem0/vol1" does not exist
47m         Warning   FailedMount   pod/daemon   Unable to attach or mount volumes: unmounted volumes=[pmem-pv-storage], unattached volumes=[tmp log data default-token-4t8sv pmem-pv-storage]: timed out waiting for the condition
37m         Warning   FailedMount   pod/daemon   Unable to attach or mount volumes: unmounted volumes=[pmem-pv-storage], unattached volumes=[default-token-4t8sv pmem-pv-storage tmp log data]: timed out waiting for the condition
13m         Warning   FailedMount   pod/daemon   Unable to attach or mount volumes: unmounted volumes=[pmem-pv-storage], unattached volumes=[pmem-pv-storage tmp log data default-token-4t8sv]: timed out waiting for the condition
$ ls -l /mnt/pmem0
total 20
drwx------ 2 root root 16384 Jan 20 15:35 lost+found
drwxrwxrwx 2 root root  4096 Jan 21 17:56 vol1

It is complaining that path "/mnt/pmem0/vol1" does not exist, but as the ls output above shows, it does exist.

Besides using a local PV, I also tried:

  1. PMEM-CSI. But the PMEM-CSI approach is blocked by a containerd/kernel issue: https://github.com/containerd/containerd/issues/3221

  2. A plain PV. When I tried to create a PV backed by PMEM, the pod could not claim the PMEM storage correctly; it was always mounted as an overlay fs on top of the host's /.

Can anyone help me out? Thanks a lot!

As discussed in the comments:

Using minikube, rancher, or any other containerized version of the kubelet will lead to MountVolume.NewMounter initialization failed ... path does not exist, even though the path exists on the host.

If the kubelet runs in a container, it cannot access the host's filesystem at the same path. You have to adjust the hostDir to the correct path inside the kubelet container.
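One way to confirm this on the minikube setup shown above is to compare what the host sees with what the minikube node (where the kubelet actually runs) sees:

```shell
# The path exists on the host...
ls -l /mnt/pmem0/vol1

# ...but the kubelet resolves the PV's "local.path" inside the
# minikube node. If this fails, the path is invisible to the kubelet:
minikube ssh -- ls -l /mnt/pmem0/vol1
```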

What you can also do is add binds for the local volume, as suggested on GitHub. Please adjust the copy-pasted example to your needs if you are going to use it:

"HostConfig": {
    "Binds": [
        "/mnt/local:/mnt/local"
    ],
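With minikube specifically, an alternative to editing the kubelet container's binds is to expose the host directory to the minikube node at startup. The path below mirrors the PV above; verify the flags against your minikube version:

```shell
# Recreate the minikube node with the PMEM directory mounted at the
# same path inside the node, so the kubelet can resolve the local PV.
minikube start --mount --mount-string="/mnt/pmem0:/mnt/pmem0"
```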

A regular (non-containerized) installation, such as one set up with kubeadm, will not behave this way, and you will not receive such errors.
