Hi everyone, I'm trying to deploy Heartbeat on Kubernetes to monitor the Kubernetes components.
I took the YAML file from the official Elastic documentation.
This is the full configuration file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: heartbeat
  namespace: kube-system
  labels:
    k8s-app: heartbeat
spec:
  selector:
    matchLabels:
      k8s-app: heartbeat
  template:
    metadata:
      labels:
        k8s-app: heartbeat
    spec:
      serviceAccountName: heartbeat
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: heartbeat
        image: docker.elastic.co/beats/heartbeat:7.17.6
        args: [
          "-c", "/etc/heartbeat.yml",
          "-e",
        ]
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 1536mi
          requests:
            # for synthetics, 2 full cores is a good starting point for relatively consistent perform of a single concurrent check
            # For lightweight checks as low as 100m is fine
            cpu: 2000m
            # A high value like this is encouraged for browser based monitors.
            # Lightweight checks use substantially less, even 128Mi is fine for those.
            memory: 1536Mi
        volumeMounts:
        - name: config
          mountPath: /etc/heartbeat.yml
          readOnly: true
          subPath: heartbeat.yml
        - name: data
          mountPath: /usr/share/heartbeat/data
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: heartbeat-deployment-config
      - name: data
        hostPath:
          path: /var/lib/heartbeat-data
          type: DirectoryOrCreate
Now it gives me this error: error when creating "heartbeat kubernetes.yaml": Deployment in version "v1" cannot be handled as a Deployment: unable to parse quantity's suffix.
When I apply the yaml file I get:
configmap/heartbeat-deployment-config unchanged
clusterrolebinding.rbac.authorization.k8s.io/heartbeat unchanged
rolebinding.rbac.authorization.k8s.io/heartbeat unchanged
rolebinding.rbac.authorization.k8s.io/heartbeat-kubeadm-config unchanged
clusterrole.rbac.authorization.k8s.io/heartbeat unchanged
role.rbac.authorization.k8s.io/heartbeat unchanged
role.rbac.authorization.k8s.io/heartbeat-kubeadm-config unchanged
serviceaccount/heartbeat unchanged
Everything else works fine except the Deployment part.
Any help would be greatly appreciated.
Hey, I think there is an indentation error in your yaml file. Can you check whether this works for you?
data:
  heartbeat.yml: |-
    heartbeat.autodiscover:
      # Autodiscover pods
      providers:
        - type: kubernetes
          resource: pod
          scope: cluster
          node: ${NODE_NAME}
          hints.enabled: true
      providers:
        - type: kubernetes
          resource: service
          scope: cluster
          node: ${NODE_NAME}
          hints.enabled: true
      providers:
        - type: kubernetes
          resource: node
          node: ${NODE_NAME}
          scope: cluster
          templates:
            # Example, check SSH port of all cluster nodes:
            - condition: ~
              config:
                - hosts:
                    - ${data.host}:22
                  name: ${data.kubernetes.node.name}
                  schedule: '@every 10s'
                  timeout: 5s
                  type: tcp

    processors:
      - add_cloud_metadata

    output.elasticsearch:
      hosts: ['https://10.112.100.121:30883']
      username: "elastic"
      password: "***********"
      ssl.verification_mode: none
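If it still fails after that, one way to catch manifest problems before they reach the cluster is a server-side dry run, which asks the API server to validate the objects without actually creating anything (this assumes a reasonably recent kubectl/Kubernetes; the filename below is just a placeholder for your manifest):

    kubectl apply -f heartbeat-kubernetes.yaml --dry-run=server

Parse errors such as the quantity one are reported the same way as on a real apply, so you can iterate on the file quickly.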
Yes, there is an issue with your deployment. I made a few changes; it still failed for me because of an unrelated error, but check whether this works for you:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: heartbeat
  namespace: kube-system
  labels:
    k8s-app: heartbeat
spec:
  selector:
    matchLabels:
      k8s-app: heartbeat
  template:
    metadata:
      labels:
        k8s-app: heartbeat
    spec:
      serviceAccountName: heartbeat
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: heartbeat
        image: docker.elastic.co/beats/heartbeat:7.17.6
        args: [
          "-c", "/etc/heartbeat.yml",
          "-e",
        ]
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: "1300Mi"
            cpu: "3000m"
          requests:
            cpu: "2000m"
            memory: "700Mi"
        volumeMounts:
        - name: config
          mountPath: /etc/heartbeat.yml
          readOnly: true
          subPath: heartbeat.yml
        - name: data
          mountPath: /usr/share/heartbeat/data
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: heartbeat-deployment-config
      - name: data
        hostPath:
          path: /var/lib/heartbeat-data
          type: DirectoryOrCreate
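For what it's worth, the error message itself points at the resource quantities: Kubernetes quantity suffixes are case-sensitive, so a memory value like 1536mi cannot be parsed, while 1536Mi (mebibytes) can. A minimal sketch of a limits/requests block with valid suffixes (the numbers are just the ones from this thread):

        resources:
          limits:
            memory: "1536Mi"   # "Mi" (mebibytes) is valid; lowercase "mi" is not a recognized suffix
          requests:
            cpu: "2000m"       # "m" = millicores
            memory: "1536Mi"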
Can you test if this works for you?