K8s pod priority and testing



These are my priority classes:

NAME                      VALUE        GLOBAL-DEFAULT   AGE
k8-monitoring             1000000      false            4d7h
k8-system                 500000       false            4d7h
k8-user                   1000         false            4d7h
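For reference, priority classes like these could be created with manifests along the following lines (the names and values are taken from the table above; the `description` fields are my own placeholders):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: k8-monitoring
value: 1000000
globalDefault: false
description: "Priority class for monitoring workloads"
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: k8-system
value: 500000
globalDefault: false
description: "Priority class for system workloads"
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: k8-user
value: 1000
globalDefault: false
description: "Priority class for user workloads"
```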

I am trying to test pod priority within the limits of a namespace pod quota. Can someone confirm whether my approach is correct? If not, please point me in the right direction.

apiVersion: v1
kind: Namespace
metadata:
  name: priority-test
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: priority-pod-quota
  namespace: priority-test
spec:
  hard:
    pods: "5"
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: user-priority
  namespace: priority-test
  labels:
    tier: x3
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: x3
  template:
    metadata:
      labels:
        tier: x3
    spec:
      priorityClassName: k8-user
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: system-priority
  namespace: priority-test
  labels:
    tier: x2
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: x2
  template:
    metadata:
      labels:
        tier: x2
    spec:
      priorityClassName: k8-system
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: monitoring-priority
  namespace: priority-test
  labels:
    tier: x1
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: x1
  template:
    metadata:
      labels:
        tier: x1
    spec:
      priorityClassName: k8-monitoring  # must match an existing PriorityClass
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3

I am running this test on EKS v1.15, but I am not getting the priority behavior as designed. Could someone tell me if I am missing something, or take another look at this?

I shouldn't be seeing this; the high-priority pods should be the ones running:

NAME                  DESIRED   CURRENT   READY   AGE
monitoring-priority   3         0         0       17m
system-priority       3         2         2       17m
user-priority         3         3         3       17m

I also read Dawid Kruk's answer to K8s pod priority & outOfPods.

You have defined a ResourceQuota with a hard limit of 5 pods. This ResourceQuota applies at the namespace level to all pods, regardless of their priority class. That is why you see 3 pods as current in user-priority and 2 pods as current in system-priority: the remaining pods cannot run because of the 5-pod limit defined in the ResourceQuota. If you check kubectl get events, you should see 403 FORBIDDEN errors related to the resource quota.
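You can confirm this by inspecting the quota usage and the recent events in the namespace (commands assume the manifests above; the exact event wording may vary by Kubernetes version):

```
kubectl get resourcequota priority-pod-quota -n priority-test
kubectl get events -n priority-test --sort-by='.lastTimestamp'
kubectl describe replicaset monitoring-priority -n priority-test
```

The failed-create events come from the ReplicaSet controller, so they show up on the ReplicaSets rather than on pods (since the rejected pods are never created at all).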

ResourceQuota is enforced by an admission controller: once the quota is reached, new pods are rejected before they ever enter the scheduling queue, which is exactly what is happening here. Priority classes play no part in that rejection. To continue testing pod priority and preemption, you need to raise the ResourceQuota limit.
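For example, raising the hard pod limit would let all nine replicas be created (the value 15 here is an arbitrary number comfortably above the total replica count):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: priority-pod-quota
  namespace: priority-test
spec:
  hard:
    pods: "15"  # raised from 5 so the quota no longer rejects pods
```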

The correct way to test pod priority and preemption is to deploy enough pods to reach the nodes' resource capacity, and then verify that lower-priority pods are evicted so that higher-priority pods can be scheduled.
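A sketch of such a test, as a low-priority "filler" ReplicaSet (the replica count and the CPU request of "1" are assumptions; size them so that the filler pods together roughly exhaust your nodes' allocatable CPU):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: filler-low-priority
  namespace: priority-test
spec:
  replicas: 6  # choose enough replicas to saturate the nodes
  selector:
    matchLabels:
      tier: filler
  template:
    metadata:
      labels:
        tier: filler
    spec:
      priorityClassName: k8-user
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
        resources:
          requests:
            cpu: "1"  # sized so the replicas fill the cluster
```

Once these low-priority pods saturate the nodes, creating a pod with `priorityClassName: k8-monitoring` and a comparable CPU request should trigger preemption of a k8-user pod, which you can observe in the events and in the evicted pod's status.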
