KUBECONFIG is set up, but kubectl can't get pods -o wide on a remote Docker runner in GitLab



This one is a bit tricky. I have a K8s cluster up and running, I am able to run Docker images inside that cluster, and I can see the output of "kubectl get pods -o wide". Now I have set up GitLab with this K8s cluster.

I have set the variables $KUBE_URL, $KUBE_USER, and $KUBE_PASSWORD in GitLab for this cluster.

The GitLab runner console shows all of this, as in the log below, and fails at the end:

$ kubeconfig=cluster1-config kubectl get pods -o wide
error: the server doesn't have a resource type "pods"
ERROR: Job failed: exit code 1

Here is the full console log:


Running with gitlab-runner 11.4.2 (cf91d5e1)    
on WotC-Docker-ip-10-102-0-70 d457d50a
Using Docker executor with image docker:latest …
Pulling docker image docker:latest …
Using docker image sha256:062267097b77e3ecf374b437e93fefe2bbb2897da989f930e4750752ddfc822a for docker:latest …
Running on runner-d457d50a-project-185-concurrent-0 via ip-10-102-0-70…
Fetching changes…
Removing cluster1-config
HEAD is now at 25846c4 Initial commit
From https://git.com/core-systems/gatling
25846c4..bcaa89b master -> origin/master
Checking out bcaa89bf as master…
Skipping Git submodules setup
$ uname -a
Linux runner-d457d50a-project-185-concurrent-0 4.14.67-66.56.amzn1.x86_64 #1 SMP Tue Sep 4 22:03:21 UTC 2018 x86_64 Linux
$ apk add --no-cache curl
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz
(1/4) Installing nghttp2-libs (1.32.0-r0)
(2/4) Installing libssh2 (1.8.0-r3)
(3/4) Installing libcurl (7.61.1-r1)
(4/4) Installing curl (7.61.1-r1)
Executing busybox-1.28.4-r1.trigger
OK: 6 MiB in 18 packages
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
 95 37.3M   95 35.8M    0     0  37.8M      0 --:--:-- --:--:-- --:--:-- 37.7M
100 37.3M  100 37.3M    0     0  38.3M      0 --:--:-- --:--:-- --:--:-- 38.3M
$ chmod +x ./kubectl
$ mv ./kubectl /usr/local/bin/kubectl
$ kubectl config set-cluster nosebit --server="$KUBE_URL" --insecure-skip-tls-verify=true
Cluster "nosebit" set.
$ kubectl config set-credentials admin --username="$KUBE_USER" --password="$KUBE_PASSWORD"
User "admin" set.
$ kubectl config set-context default --cluster=nosebit --user=admin
Context "default" created.
$ kubectl config use-context default
Switched to context "default".
$ cat $HOME/.kube/config
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://18.216.8.240:443
  name: nosebit
contexts:
- context:
    cluster: nosebit
    user: admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: admin
  user:
    password: |-
      MIIDOzCCAiOgAwIBAgIJALOrUrxmhgpHMA0GCSqGSIb3DQEBCwUAMBgxFjAUBgNV
      BAMMDTEzLjU4LjE3OC4yNDEwHhcNMTgxMTI1MjIwNzE1WhcNMjgxMTIyMjIwNzE1
      WjAYMRYwFAYDVQQDDA0xMy41OC4xNzguMjQxMIIBIjANBgkqhkiG9w0BAQEFAAOC
      AQ8AMIIBCgKCAQEA4jmyesjEiy6T2meCdnzzLfSE1VtbY//0MprL9Iwsksa4xssf
      PXrwq97I/aNNE2hWZhZkpPd0We/hNKh2rxwNjgozQTNcXqjC01ZVjfvpvwHzYDqj
      4cz6y469rbuKqmXHKsy/1docA0IdyRKS1JKWz9Iy9Wi2knjZor6/kgvzGKdH96sl
      ltwG7hNnIOrfNQ6Bzg1H6LEmFP+HyZoylWRsscAIxD8I/cmSz7YGM1L1HWqvUkRw
      GE23TXSG4uNYDkFaqX46r4nwLlQp8p7heHeCV/mGPLd0QCUaCewqSR+gFkQz4nYX
      l6BA3M0Bo4GHMIGEMB0GA1UdDgQW
      BBQqsD7FUt9vBW2LcX4xbqhcO1khuTBIBgNVHSMEQTA/gBQqsD7FUt9vBW2LcX4x
      bqhcO1khuaEcpBowGDEWMBQGA1UEAwwNMTMuNTguMTc4LjI0MYIJALOrUrxmhgpH
      MAwGA1UdEwQFMAMBAf8wCwYDVR0PBAQDAgEGMA0GCSqGSIb3DQEBCwUAA4IBAQAY
      6mxGeQ90mXYdbLtoVxOUSvqk9+Ded1IzuoQMr0joxkDz/95HCddyTgW0gMaYsv2J
      IZVH7JQ6NkveTyd42QI29fFEkGfPaPuLZKn5Chr9QgXJ73aYrdFgluSgkqukg4rj
      rrb+V++hE9uOBtDzcssd2g+j9oNA5j3VRKa97vi3o0eq6vs++ok0l1VD4wyx7m+l
      seFx50RGXoDjIGh73Gh9Rs7/Pvc1Pj8uAGvj8B7ZpAMPEWYmkkc4F5Y/14YbtfGc
      2VlUJcs5p7CbzsqI5Tqm+S9LzZXtD1dVnsbbbGqWo32CIm36Cxz/O/FCf8tbITpr
      u2O7VjBs5Xfm3tiW811k
    username: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tdzZqdDYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjFiMjc2YzIxLWYxMDAtMTFlOC04YjM3LTAyZDhiMzdkOTVhMCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNQifQ.RCQQWjDCSkH8YckBeck-EIdvOnTKBmUACXVixPfUp9gAmUnit5qIPvvFnav-C-orfYt552NQ5GTLOA3yR5-jmxoYJwCJBfvPRb1GqqgiiJE2pBsu5Arm30MOi2wbt5uCNfKMAqcWiyJQF98M2PFc__jH6C1QWPXgJokyk7i8O6s3TD69KrrXNj_W4reDXourLl7HwHWoWwNKF0dgldanug-_zjvE06b6VZBI-YWpm9bpe_ArIOrMEjl0JRGerWahcQFVJsmhc4vgw-9-jUsfKPUYEfDItJdQKyV9dgdwShgzMINuuHlU7w7WBxmJT6cqMIvHRnDHuno3qMKTJTuh-g
$ kubectl config view --minify > cluster1-config
$ export KUBECONFIG=$HOME/.kube/config
$ kubectl --kubeconfig=cluster1-config config get-contexts
CURRENT   NAME      CLUSTER   AUTHINFO   NAMESPACE
          default   nosebit   admin
$ kubeconfig=cluster1-config kubectl get pods -o wide
error: the server doesn't have a resource type "pods"
ERROR: Job failed: exit code 1
==================================================================================================

Here is my .gitlab-ci.yml. Can you suggest why kubectl get pods does not show the remote cluster's pods even though KUBECONFIG was set up successfully?

image: docker:latest
variables:
  CONTAINER_DEV_IMAGE: https://hub.docker.com/r/tarunkumard/gatling/:$CI_COMMIT_SHA
stages:
  - deploy
deploy:
  stage: deploy
  tags:
    - docker
  script:
    - 'uname -a'
    - 'apk add --no-cache curl'
    - 'curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl'
    - 'chmod +x ./kubectl'
    - 'mv ./kubectl /usr/local/bin/kubectl'
    - 'kubectl config set-cluster nosebit --server="$KUBE_URL" --insecure-skip-tls-verify=true'
    - 'kubectl config set-credentials admin --username="$KUBE_USER" --password="$KUBE_PASSWORD"'
    - 'kubectl config set-context default --cluster=nosebit --user=admin'
    - 'kubectl config use-context default'
    - 'cat $HOME/.kube/config'
    - 'kubectl config view --minify > cluster1-config'
    - 'export KUBECONFIG=$HOME/.kube/config'
    - 'kubectl --kubeconfig=cluster1-config config get-contexts'
    - 'kubeconfig=cluster1-config kubectl get pods -o wide'

Why can't the GitLab runner get pods from the Kubernetes cluster? (Note that the cluster is up and running, and I can see the pods using the kubectl get pods command.)

Basically,

kubectl config view --minify > cluster1-config

won't do it, because the output will look like the following, without the actual credentials/certificates:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://<kube-apiserver>:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

You need:

kubectl config view --raw > cluster1-config
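In the pipeline above, that means replacing the --minify step. A minimal sketch of the corrected script fragment (keeping the cluster1-config file name from the original):

```shell
# --raw keeps certificate-authority-data, tokens, and passwords inline,
# instead of replacing them with DATA+OMITTED / REDACTED.
kubectl config view --raw > cluster1-config

# Also note: environment variables are case-sensitive, and kubectl only
# reads the uppercase KUBECONFIG. Either export it, or pass the file
# explicitly with --kubeconfig:
KUBECONFIG=cluster1-config kubectl get pods -o wide
kubectl --kubeconfig=cluster1-config get pods -o wide
```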

If that's not the issue, your credentials may lack the proper RBAC permissions. I would try to find the ClusterRoleBinding or RoleBinding bound to that admin user. Something like:

$ kubectl get clusterrolebinding -o=jsonpath='{range .items[*]}{.metadata.name} {.roleRef.name} {.subjects}{"\n"}{end}' | grep admin
$ kubectl get rolebinding -o=jsonpath='{range .items[*]}{.metadata.name} {.roleRef.name} {.subjects}{"\n"}{end}' | grep admin
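A quicker check is kubectl's built-in auth can-i subcommand, which asks the API server directly whether the current credentials are allowed to perform an action (shown here against the generated kubeconfig file):

```shell
# Prints "yes" or "no" depending on the RBAC rules bound to the user.
kubectl --kubeconfig=cluster1-config auth can-i list pods
kubectl --kubeconfig=cluster1-config auth can-i list pods --namespace default
```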

Once you find the role, you can check whether it has the right permissions to view pods. For example:

$ kubectl get clusterrole cluster-admin -o=yaml
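If the role turns out to lack get/list on pods, one option is to bind the built-in read-only view ClusterRole to the user. This is a sketch, assuming you have cluster-admin access from a working context; the binding name admin-can-view is just an illustrative choice:

```shell
# Grants get/list/watch on most namespaced resources, cluster-wide.
kubectl create clusterrolebinding admin-can-view \
  --clusterrole=view \
  --user=admin
```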
