How to make k8s CPU and memory HPAs work together



I am using a k8s HPA template for both CPU and memory, as shown below:

---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: {{.Chart.Name}}-cpu
  labels:
    app: {{.Chart.Name}}
    chart: {{.Chart.Name}}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{.Chart.Name}}
  minReplicas: {{.Values.hpa.min}}
  maxReplicas: {{.Values.hpa.max}}
  targetCPUUtilizationPercentage: {{.Values.hpa.cpu}}
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: {{.Chart.Name}}-mem
  labels:
    app: {{.Chart.Name}}
    chart: {{.Chart.Name}}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{.Chart.Name}}
  minReplicas: {{.Values.hpa.min}}
  maxReplicas: {{.Values.hpa.max}}
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue
        averageValue: {{.Values.hpa.mem}}

Having two separate HPAs causes any new pod spun up to satisfy the memory HPA to be terminated immediately by the CPU HPA, because the pods' CPU usage sits below the CPU scale-down trigger. It always terminates the newest pod, leaving the older pods untouched, which re-triggers the memory HPA and produces an endless loop. Is there a way to instruct the CPU HPA to terminate the pod with the higher usage each time, rather than the newly spawned one?

Autoscaling based on multiple metrics / custom metrics:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue
        averageValue: 100Mi

Once created, the Horizontal Pod Autoscaler monitors the nginx Deployment's average CPU utilization, average memory utilization, and (if you uncomment it) the custom packets_per_second metric. The Horizontal Pod Autoscaler autoscales the Deployment based on whichever metric produces the larger autoscaling event.

https://cloud.google.com/kubernetes-engine/docs/how-to/horizontal-pod-autoscaling#kubectl-apply
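For reference, each metric in the list independently yields a desired replica count, roughly desiredReplicas = ceil[currentReplicas × (currentMetricValue / desiredMetricValue)], and the HPA acts on the largest of them. As a hypothetical illustration with 2 replicas: CPU at 80% against a 50% target yields ceil(2 × 80 / 50) = 4, while memory at 120Mi against a 100Mi target yields ceil(2 × 120 / 100) = 3, so the HPA scales to 4. This is why a single multi-metric HPA never fights itself the way two separate HPAs do.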

As suggested in the comments, using a single HPA solved my problem. I just had to move the CPU HPA into the same apiVersion as the memory HPA.
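A minimal sketch of what the merged manifest could look like, assuming the same Helm values as in the template above ({{.Values.hpa.min}}, {{.Values.hpa.max}}, {{.Values.hpa.cpu}}, {{.Values.hpa.mem}}):

---
# Single HPA covering both resources, so one controller
# computes the desired replica count from both metrics.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: {{.Chart.Name}}
  labels:
    app: {{.Chart.Name}}
    chart: {{.Chart.Name}}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{.Chart.Name}}
  minReplicas: {{.Values.hpa.min}}
  maxReplicas: {{.Values.hpa.max}}
  metrics:
  # CPU target expressed as average utilization, replacing
  # the v1 targetCPUUtilizationPercentage field.
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: {{.Values.hpa.cpu}}
  # Memory target expressed as an absolute average value,
  # matching the original memory HPA.
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue
        averageValue: {{.Values.hpa.mem}}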
