Sending alerts from the Prometheus Helm chart



I am trying to create alerts in Prometheus on Kubernetes and send them to a Slack channel. For this I am using the prometheus-community Helm chart (which already includes Alertmanager). Since I want to use my own alerts, I also created a values.yml (shown below), heavily inspired by here. If I port-forward to Prometheus, I can see my alert going from inactive to pending to firing, but no message is ever sent to Slack. I am fairly confident that my Alertmanager configuration is fine (I have already tested it with some pre-built alerts from another chart, and those were sent to Slack), so my best guess is that I am adding my alert the wrong way (in the serverFiles section), but I cannot figure out how to do it correctly. The Alertmanager logs also look normal to me. Does anyone know where my problem is coming from?

---
serverFiles:
  alerting_rules.yml:
    groups:
      - name: example
        rules:
          - alert: HighRequestLatency
            expr: sum(rate(container_network_receive_bytes_total{namespace="kube-logging"}[5m])) > 20000
            for: 1m
            labels:
              severity: page
            annotations:
              summary: High request latency

alertmanager:
  persistentVolume:
    storageClass: default-hdd-retain

  ## Deploy alertmanager
  ##
  enabled: true

  ## Service account for Alertmanager to use.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
  ##
  serviceAccount:
    create: true
    name: ""

  ## Configure pod disruption budgets for Alertmanager
  ## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
  ## This configuration is immutable once created and will require the PDB to be deleted to be changed
  ## https://github.com/kubernetes/kubernetes/issues/45398
  ##
  podDisruptionBudget:
    enabled: false
    minAvailable: 1
    maxUnavailable: ""

  ## Alertmanager configuration directives
  ## ref: https://prometheus.io/docs/alerting/configuration/#configuration-file
  ##      https://prometheus.io/webtools/alerting/routing-tree-editor/
  ##
  config:
    global:
      resolve_timeout: 5m
      slack_api_url: "I changed this url for the stack overflow question"
    route:
      group_by: ['job']
      group_wait: 30s
      group_interval: 5m
      repeat_interval: 12h
      #receiver: 'slack'
      routes:
        - match:
            alertname: DeadMansSwitch
          receiver: 'null'
        - match:
          receiver: 'slack'
          continue: true
    receivers:
      - name: 'null'
      - name: 'slack'
        slack_configs:
          - channel: 'alerts'
            send_resolved: false
            title: '[{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] Monitoring Event Notification'
            text: >-
              {{ range .Alerts }}
                *Alert:* {{ .Annotations.summary }} - `{{ .Labels.severity }}`
                *Description:* {{ .Annotations.description }}
                *Graph:* <{{ .GeneratorURL }}|:chart_with_upwards_trend:> *Runbook:* <{{ .Annotations.runbook }}|:spiral_note_pad:>
                *Details:*
                {{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
                {{ end }}
              {{ end }}

So I finally solved this. The problem, apparently, is that the kube-prometheus-stack and prometheus Helm charts work slightly differently: instead of alertmanager.config, I had to put the Alertmanager configuration (everything from global onwards) under alertmanagerFiles.alertmanager.yml.
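
For reference, here is a minimal sketch of what that section of the values file might look like after the change, assuming the same Slack receiver and routes as in the question (the Slack webhook URL is a placeholder). Note that the top-level route also needs a default receiver, which was commented out in the config above, so it is set to 'slack' here:

alertmanagerFiles:
  alertmanager.yml:
    global:
      resolve_timeout: 5m
      slack_api_url: "<your Slack webhook URL>"
    route:
      group_by: ['job']
      group_wait: 30s
      group_interval: 5m
      repeat_interval: 12h
      receiver: 'slack'
      routes:
        - match:
            alertname: DeadMansSwitch
          receiver: 'null'
    receivers:
      - name: 'null'
      - name: 'slack'
        slack_configs:
          - channel: 'alerts'
            send_resolved: false

The alerting rules themselves stay under serverFiles.alerting_rules.yml exactly as shown in the question; only the Alertmanager configuration moves.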
