Pod with S3FS stuck in Terminating state



I need to mount an s3 bucket inside a kubernetes pod. I followed this guide to set it up. It works great; however, the pod gets stuck indefinitely in "Terminating" when the command to delete the pod is issued. I can't figure out why.
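For reference, this is roughly how it behaves (assuming the pod name worker from the manifest below):

kubectl delete pod worker    # blocks indefinitely
kubectl get pod worker       # STATUS stays Terminating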

Here is the yaml:

apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  volumes:
    - name: mntdatas3fs
      emptyDir: {}
    - name: devfuse
      hostPath:
        path: /dev/fuse
  restartPolicy: Always
  containers:
    - image: nginx
      name: s3-test
      securityContext:
        privileged: true
      volumeMounts:
        - name: mntdatas3fs
          mountPath: /var/s3fs:shared
    - name: s3fs
      image: meain/s3-mounter
      imagePullPolicy: IfNotPresent
      securityContext:
        privileged: true
      env:
        - name: S3_REGION
          value: "us-east-1"
        - name: S3_BUCKET
          value: "xxxxxxx"
        - name: AWS_KEY
          value: "xxxxxx"
        - name: AWS_SECRET_KEY
          value: "xxxxxx"
      volumeMounts:
        - name: devfuse
          mountPath: /dev/fuse
        - name: mntdatas3fs
          mountPath: /var/s3fs:shared

Here is the Dockerfile for meain/s3-mounter used by the s3fs container:

FROM alpine:3.3
ENV MNT_POINT /var/s3fs
ARG S3FS_VERSION=v1.86
RUN apk --update --no-cache add fuse alpine-sdk automake autoconf libxml2-dev fuse-dev curl-dev git bash; \
    git clone https://github.com/s3fs-fuse/s3fs-fuse.git; \
    cd s3fs-fuse; \
    git checkout tags/${S3FS_VERSION}; \
    ./autogen.sh; \
    ./configure --prefix=/usr; \
    make; \
    make install; \
    make clean; \
    rm -rf /var/cache/apk/*; \
    apk del git automake autoconf;
RUN mkdir -p "$MNT_POINT"
COPY run.sh run.sh
CMD ./run.sh

And here is the run.sh that is copied into the container:

#!/bin/sh
set -e
echo "$AWS_KEY:$AWS_SECRET_KEY" > passwd && chmod 600 passwd
s3fs "$S3_BUCKET" "$MNT_POINT" -o passwd_file=passwd  && tail -f /dev/null

I ran into this problem with a very similar setup. s3fs mounts the s3 bucket at /var/s3fs. That mount has to be unmounted before the pod can terminate cleanly. This is done with: umount /var/s3fs. See https://manpages.ubuntu.com/manpages/xenial/man1/s3fs.1.html

So in your case, adding

lifecycle:
  preStop:
    exec:
      command: ["sh", "-c", "umount /var/s3fs"]

should fix it.
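For clarity, here is a sketch of where that block would sit in the s3fs container of your manifest (everything else unchanged):

- name: s3fs
  image: meain/s3-mounter
  imagePullPolicy: IfNotPresent
  securityContext:
    privileged: true
  lifecycle:
    preStop:
      exec:
        command: ["sh", "-c", "umount /var/s3fs"]
  # env and volumeMounts as in the original manifest

The preStop hook runs before the container receives its termination signal, so the FUSE mount is released and the pod can shut down normally.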
