I am trying to deploy the metrics server to Kubernetes, and something very strange is happening. I have one worker and one master. This is my list of pods:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default php-apache-774ff9d754-d7vp9 1/1 Running 0 2m43s 192.168.77.172 master-node <none> <none>
kube-system calico-kube-controllers-6b9d4c8765-x7pql 1/1 Running 2 4h11m 192.168.77.130 master-node <none> <none>
kube-system calico-node-d4rnh 0/1 Running 1 4h11m 10.221.194.166 master-node <none> <none>
kube-system calico-node-hwkmd 0/1 Running 1 4h11m 10.221.195.58 free5gc-virtual-machine <none> <none>
kube-system coredns-6955765f44-kf4dr 1/1 Running 1 4h20m 192.168.178.65 free5gc-virtual-machine <none> <none>
kube-system coredns-6955765f44-s58rf 1/1 Running 1 4h20m 192.168.178.66 free5gc-virtual-machine <none> <none>
kube-system etcd-free5gc-virtual-machine 1/1 Running 1 4h21m 10.221.195.58 free5gc-virtual-machine <none> <none>
kube-system kube-apiserver-free5gc-virtual-machine 1/1 Running 1 4h21m 10.221.195.58 free5gc-virtual-machine <none> <none>
kube-system kube-controller-manager-free5gc-virtual-machine 1/1 Running 1 4h21m 10.221.195.58 free5gc-virtual-machine <none> <none>
kube-system kube-proxy-brvdg 1/1 Running 1 4h19m 10.221.194.166 master-node <none> <none>
kube-system kube-proxy-lfzjw 1/1 Running 1 4h20m 10.221.195.58 free5gc-virtual-machine <none> <none>
kube-system kube-scheduler-free5gc-virtual-machine 1/1 Running 1 4h21m 10.221.195.58 free5gc-virtual-machine <none> <none>
kube-system metrics-server-86c6d8b9bf-p2hh8 1/1 Running 0 2m43s 192.168.77.171 master-node <none> <none>
When I try to get the metrics, I see the following:
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache <unknown>/50% 1 10 1 3m58s
free5gc@free5gc-virtual-machine:~/Desktop/metrics-server/deploy$ kubectl top nodes
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
free5gc@free5gc-virtual-machine:~/Desktop/metrics-server/deploy$ kubectl top pods --all-namespaces
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)
Finally, this is the metrics-server log output (v=6):
free5gc@free5gc-virtual-machine:~/Desktop/metrics-server/deploy$ kubectl logs metrics-server-86c6d8b9bf-p2hh8 -n kube-system
I0206 18:16:18.657605 1 serving.go:273] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0206 18:16:19.367356 1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication 200 OK in 7 milliseconds
I0206 18:16:19.370573 1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication 200 OK in 1 milliseconds
I0206 18:16:19.373245 1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication 200 OK in 1 milliseconds
I0206 18:16:19.375024 1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication 200 OK in 1 milliseconds
[restful] 2020/02/06 18:16:19 log.go:33: [restful/swagger] listing is available at https://:4443/swaggerapi
[restful] 2020/02/06 18:16:19 log.go:33: [restful/swagger] https://:4443/swaggerui/ is mapped to folder /swagger-ui/
I0206 18:16:19.421207 1 healthz.go:83] Installing healthz checkers:"ping", "poststarthook/generic-apiserver-start-informers", "healthz"
I0206 18:16:19.421641 1 serve.go:96] Serving securely on [::]:4443
I0206 18:16:19.421873 1 reflector.go:202] Starting reflector *v1.Pod (0s) from github.com/kubernetes-incubator/metrics-server/vendor/k8s.io/client-go/informers/factory.go:130
I0206 18:16:19.421891 1 reflector.go:240] Listing and watching *v1.Pod from github.com/kubernetes-incubator/metrics-server/vendor/k8s.io/client-go/informers/factory.go:130
I0206 18:16:19.421914 1 reflector.go:202] Starting reflector *v1.Node (0s) from github.com/kubernetes-incubator/metrics-server/vendor/k8s.io/client-go/informers/factory.go:130
I0206 18:16:19.421929 1 reflector.go:240] Listing and watching *v1.Node from github.com/kubernetes-incubator/metrics-server/vendor/k8s.io/client-go/informers/factory.go:130
I0206 18:16:19.423052 1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0 200 OK in 1 milliseconds
I0206 18:16:19.424261 1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0 200 OK in 2 milliseconds
I0206 18:16:19.425586 1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/nodes?resourceVersion=38924&timeoutSeconds=481&watch=true 200 OK in 0 milliseconds
I0206 18:16:19.433545 1 round_trippers.go:405] GET https://10.96.0.1:443/api/v1/pods?resourceVersion=39246&timeoutSeconds=582&watch=true 200 OK in 0 milliseconds
I0206 18:16:49.388514 1 manager.go:99] Beginning cycle, collecting metrics...
I0206 18:16:49.388598 1 manager.go:95] Scraping metrics from 2 sources
I0206 18:16:49.395742 1 manager.go:120] Querying source: kubelet_summary:free5gc-virtual-machine
I0206 18:16:49.400574 1 manager.go:120] Querying source: kubelet_summary:master-node
I0206 18:16:49.413751 1 round_trippers.go:405] GET https://10.221.194.166:10250/stats/summary/ 200 OK in 13 milliseconds
I0206 18:16:49.414317 1 round_trippers.go:405] GET https://10.221.195.58:10250/stats/summary/ 200 OK in 18 milliseconds
I0206 18:16:49.417044 1 manager.go:150] ScrapeMetrics: time: 28.428677ms, nodes: 2, pods: 13
I0206 18:16:49.417062 1 manager.go:115] ...Storing metrics...
I0206 18:16:49.417083 1 manager.go:126] ...Cycle complete
With log output at v=10 I can even see detailed health information for every pod, but when I run kubectl get hpa or kubectl top nodes I still get nothing. Can anyone give me a hint? For reference, my metrics-server manifest is:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.1
        args:
        - /metrics-server
        - --metric-resolution=30s
        - --requestheader-allowed-names=aggregator
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls
        - --v=6
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
        #- --kubelet-preferred-address-types=InternalIP
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        imagePullPolicy: Always
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
      nodeSelector:
        beta.kubernetes.io/os: linux
        kubernetes.io/arch: "amd64"
I can also see the following:
free5gc@free5gc-virtual-machine:~/Desktop/metrics-server/deploy$ kubectl get apiservice v1beta1.metrics.k8s.io -o yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  creationTimestamp: "2020-02-06T18:57:28Z"
  name: v1beta1.metrics.k8s.io
  resourceVersion: "45583"
  selfLink: /apis/apiregistration.k8s.io/v1/apiservices/v1beta1.metrics.k8s.io
  uid: ca439221-b987-4c13-b0e0-8d2bb237e612
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
    port: 443
  version: v1beta1
  versionPriority: 100
status:
  conditions:
  - lastTransitionTime: "2020-02-06T18:57:28Z"
    message: 'failing or missing response from https://10.110.144.114:443/apis/metrics.k8s.io/v1beta1:
      Get https://10.110.144.114:443/apis/metrics.k8s.io/v1beta1: dial tcp 10.110.144.114:443:
      connect: no route to host'
    reason: FailedDiscoveryCheck
    status: "False"
    type: Available
I have reproduced your issue (on Google Compute Engine) and tried a few scenarios to find a workaround/solution for it.
The first thing I would like to mention is that you have only provided the ServiceAccount and Deployment YAML. You also need a ClusterRoleBinding, RoleBinding, APIService, and so on. All of the required YAML can be found in the kubernetes-sigs/metrics-server GitHub repository.
To quickly deploy metrics-server with all of the required configuration, you can use:
$ git clone https://github.com/kubernetes-sigs/metrics-server.git
$ cd metrics-server/deploy/
$ kubectl apply -f kubernetes/
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
The second thing I would advise is to check your CNI pods (calico-node-d4rnh and calico-node-hwkmd). They were created 4h11m ago but are still only 0/1 Ready.
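A quick way to see why those pods are stuck (a sketch; the exact events and log lines depend on your Calico version) is to describe one of them and look at its recent logs:
$ kubectl -n kube-system describe pod calico-node-d4rnh
$ kubectl -n kube-system logs calico-node-d4rnh --tail=50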
The last thing concerns collecting CPU and memory data from pods and nodes.
Using Calico
If you are using a single-node kubeadm cluster it will work fine, but when you use more than one node with kubeadm it causes some problems. There are many similar threads about this on GitHub. I tried various flags in args:, but without success. In the metrics-server logs (-v=6) you can see that the metrics are being collected. In this GitHub thread, one of the users posted an answer that works around the issue; hostNetwork is also mentioned in the Kubernetes docs.
Adding hostNetwork: true is what finally got metrics-server working for me. Without it, nada. Without the kubelet-preferred-address-types line I could query my master node, but not my two worker nodes, nor any pods, which is obviously not the desired result. A missing kubelet-insecure-tls also results in an inoperable metrics-server installation.
spec:
  hostNetwork: true
  containers:
  - args:
    - --kubelet-insecure-tls
    - --cert-dir=/tmp
    - --secure-port=4443
    - --kubelet-preferred-address-types=InternalIP
    - --v=6
    image: k8s.gcr.io/metrics-server-amd64:v0.3.6
    imagePullPolicy: Always
If you deploy with this configuration, it will work.
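As an alternative to editing the full manifest by hand, a merge patch along these lines should add the same setting to an already-deployed metrics-server (a sketch, assuming the Deployment is named metrics-server in kube-system):
$ kubectl -n kube-system patch deployment metrics-server \
    --type merge -p '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'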
$ kubectl describe apiservice v1beta1.metrics.k8s.io
Name:         v1beta1.metrics.k8s.io
...
Status:
  Conditions:
    Last Transition Time:  2020-02-20T09:37:59Z
    Message:               all checks passed
    Reason:                Passed
    Status:                True
    Type:                  Available
Events:                    <none>
Also, when you check iptables you can see the difference hostNetwork: true makes: there are many more entries than in a deployment without this configuration.
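One rough way to see this (a sketch; the exact chain names depend on your kube-proxy mode) is to dump the NAT table on a node and grep for the metrics-server service:
$ sudo iptables-save -t nat | grep metrics-server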
After that, you can edit the deployment and remove or comment out hostNetwork: true:
$ kubectl edit deploy metrics-server -n kube-system
deployment.apps/metrics-server edited
$ kubectl top pods
NAME CPU(cores) MEMORY(bytes)
nginx-6db489d4b7-2qhzw 0m 3Mi
nginx-6db489d4b7-9fvrj 0m 2Mi
nginx-6db489d4b7-dgbf9 0m 2Mi
nginx-6db489d4b7-dvcz5 0m 2Mi
You will also be able to retrieve the metrics with:
$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
For better readability you can additionally pipe the output through jq:
$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods | jq .
Using Weave Net
When you use Weave Net instead of Calico, it works without setting hostNetwork.
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
However, you will then need to use certificates. If you do not care about security, you can use --kubelet-insecure-tls instead, as in the earlier Calico example.
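Whichever CNI you end up with, a quick sanity check (just reusing the commands shown above) is to confirm that the aggregated API reports Available and that kubectl top returns data:
$ kubectl get apiservice v1beta1.metrics.k8s.io
$ kubectl top nodes
$ kubectl top pods --all-namespaces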