"The ClusterRoleBinding "kubernetes-dashboard" is invalid: roleRef: Invalid value" when deploying the Web UI



I am trying to deploy the Kubernetes Web UI as described here: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/

My system configuration is as follows:

$ uname -a
Linux debian 4.19.0-6-amd64 #1 SMP Debian 4.19.67-2+deb10u2 (2019-11-11) x86_64 GNU/Linux
$ /usr/bin/qemu-system-x86_64 --version
QEMU emulator version 3.1.0 (Debian 1:3.1+dfsg-8+deb10u3)
Copyright (c) 2003-2018 Fabrice Bellard and the QEMU Project developers
$ minikube version
minikube version: v1.5.2
commit: 792dbf92a1de583fcee76f8791cff12e0c9440ad-dirty
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:23:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

After starting the minikube cluster with minikube start, I created a ServiceAccount and a ClusterRoleBinding as described here: https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md

$ nano dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
$ kubectl apply -f dashboard-adminuser.yaml
$ nano dashboard-adminuser.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
$ kubectl apply -f dashboard-adminuser.yaml

Now I run:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta6/aio/deploy/recommended.yaml

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml

and get the following output:

namespace/kubernetes-dashboard configured
serviceaccount/kubernetes-dashboard configured
service/kubernetes-dashboard configured
secret/kubernetes-dashboard-certs configured
secret/kubernetes-dashboard-csrf configured
secret/kubernetes-dashboard-key-holder configured
configmap/kubernetes-dashboard-settings configured
role.rbac.authorization.k8s.io/kubernetes-dashboard configured
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard configured
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard configured
deployment.apps/kubernetes-dashboard configured
service/dashboard-metrics-scraper configured
deployment.apps/dashboard-metrics-scraper configured
The ClusterRoleBinding "kubernetes-dashboard" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"ClusterRole", Name:"kubernetes-dashboard"}: cannot change roleRef

What is happening and how can I fix it?

The "cannot change roleRef" error refers to the fact that the ClusterRoleBinding already exists.
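To confirm what the existing binding currently points at before deleting anything, something like this should show its roleRef (assuming kubectl is pointed at the minikube cluster):

# Show the roleRef of the ClusterRoleBinding that already exists on the cluster
kubectl get clusterrolebinding kubernetes-dashboard -o jsonpath='{.roleRef}'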

Try deleting the existing ClusterRoleBinding kubernetes-dashboard.

Run the following command to delete the existing one:

kubectl delete clusterrolebinding kubernetes-dashboard

After that, try the installation again. Let us know whether this resolves the problem.

For me, it was deleting the existing ClusterRoleBinding that fixed it:

kubectl delete clusterrolebinding kubernetes-dashboard

The problem is that you missed this note:

NOTE: The apiVersion of the ClusterRoleBinding resource may differ between Kubernetes versions.

Prior to Kubernetes v1.8, the apiVersion was rbac.authorization.k8s.io/v1beta1.

This should resolve the issue.
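If you are unsure which RBAC apiVersion your cluster actually serves, a quick check is:

# List the RBAC API group versions the API server exposes
kubectl api-versions | grep rbac.authorization.k8s.io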

Edit 1:

This issue discusses the same problem, in particular this comment, which talks about:

rolebindings are immutable

The reason here is:

dashboard-adminuser.yaml sets the roleRef.

The yaml file you apply later also has a roleRef in the same namespace; see the sketch after the manifest below.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
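Because an existing binding's roleRef cannot be changed in place, a rough sketch of the fix is the same two-step dance already suggested above: delete the conflicting binding, then re-apply the manifest.

# roleRef is immutable, so recreate the binding rather than trying to mutate it
kubectl delete clusterrolebinding kubernetes-dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta6/aio/deploy/recommended.yaml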

Just reproduced it:

1) Create the namespace, service account and cluster role binding

cat dashboard-adminuser.yaml 
---
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

2) Apply it

kubectl apply -f dashboard-adminuser.yaml 
namespace/kubernetes-dashboard created
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user unchanged

3) Install the dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta6/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
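To actually log in as admin-user afterwards, the linked creating-sample-user guide fetches the ServiceAccount token roughly like this on 1.16-era clusters (the secret name is auto-generated, so it is looked up first):

# Print the bearer token stored in the auto-created admin-user secret
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')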

I also ran into this error and solved it by running the dashboard through minikube:

minikube dashboard

Output:

🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...
🎉  Opening http://127.0.0.1:34653/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
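If you only want the address without minikube opening a browser, the --url flag prints it instead:

# Print the dashboard proxy URL only
minikube dashboard --url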

Run:

kubectl delete clusterrolebinding kubernetes-dashboard

after the apply -f command, not before.
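After the next apply recreates the binding, a quick sanity check is that the dashboard pods come up:

# Both kubernetes-dashboard and dashboard-metrics-scraper should reach Running
kubectl -n kubernetes-dashboard get pods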
