CircleCI message: error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

I am getting an error in my CircleCI deployment. Please find the config file below.

When running the kubectl CLI, we got an error between kubectl and the aws-cli EKS tooling.

version: 2.1
orbs:
  aws-ecr: circleci/aws-ecr@6.3.0
  docker: circleci/docker@0.5.18
  rollbar: rollbar/deploy@1.0.1
  kubernetes: circleci/kubernetes@1.3.0
  deploy:
    version: 2.1
    orbs:
      aws-eks: circleci/aws-eks@1.0.0
      kubernetes: circleci/kubernetes@1.3.0
    executors:
      default:
        description: |
          The version of the circleci/buildpack-deps Docker container to use
          when running commands.
        parameters:
          buildpack-tag:
            type: string
            default: buster
        docker:
          - image: circleci/buildpack-deps:<<parameters.buildpack-tag>>
    description: |
      A collection of tools to deploy changes to AWS EKS in a declarative
      manner where all changes to templates are checked into version control
      before applying them to an EKS cluster.
    commands:
      setup:
        description: |
          Install the gettext-base package into the executor to be able to run
          envsubst for replacing values in template files.
          This command is a prerequisite for all other commands and should not
          have to be run manually.
        parameters:
          cluster-name:
            default: ''
            description: Name of the EKS Cluster.
            type: string
          aws-region:
            default: 'eu-central-1'
            description: Region where the EKS Cluster is located.
            type: string
          git-user-email:
            default: "deploy@mail.com"
            description: Email of the git user to use for making commits
            type: string
          git-user-name:
            default: "CircleCI Deploy Orb"
            description: Name of the git user to use for making commits
            type: string
        steps:
          - run:
              name: install gettext-base
              command: |
                if which envsubst > /dev/null; then
                  echo "envsubst is already installed"
                  exit 0
                fi
                sudo apt-get update
                sudo apt-get install -y gettext-base
          - run:
              name: Setup GitHub access
              command: |
                mkdir -p ~/.ssh
                echo 'github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==' >> ~/.ssh/known_hosts
                git config --global user.email "<< parameters.git-user-email >>"
                git config --global user.name "<< parameters.git-user-name >>"
          - aws-eks/update-kubeconfig-with-authenticator:
              aws-region: << parameters.aws-region >>
              cluster-name: << parameters.cluster-name >>
              install-kubectl: true
              authenticator-release-tag: v0.5.1
      update-image:
        description: |
          Generates template files with the specified version tag for the image
          to be updated and subsequently applies that template after checking it
          back into version control.
        parameters:
          cluster-name:
            default: ''
            description: Name of the EKS Cluster.
            type: string
          aws-region:
            default: 'eu-central-1'
            description: Region where the EKS Cluster is located.
            type: string
          image-tag:
            default: ''
            description: |
              The tag of the image, defaults to the value of `CIRCLE_SHA1`
              if not provided.
            type: string
          replicas:
            default: 3
            description: |
              The replica count for the deployment.
            type: integer
          environment:
            default: 'production'
            description: |
              The environment/stage where the template will be applied. Defaults
              to `production`.
            type: string
          template-file-path:
            default: ''
            description: |
              The path to the source template which contains the placeholders
              for the image-tag.
            type: string
          resource-name:
            default: ''
            description: |
              Resource name in the format TYPE/NAME e.g. deployment/nginx.
            type: string
          template-repository:
            default: ''
            description: |
              The fullpath to the repository where templates reside. Write
              access is required to commit generated templates.
            type: string
          template-folder:
            default: 'templates'
            description: |
              The name of the folder where the template-repository is cloned to.
            type: string
          placeholder-name:
            default: IMAGE_TAG
            description: |
              The name of the placeholder environment variable that is to be
              substituted with the image-tag parameter.
            type: string
          cluster-namespace:
            default: sayway
            description: |
              Namespace within the EKS Cluster.
            type: string
        steps:
          - setup:
              aws-region: << parameters.aws-region >>
              cluster-name: << parameters.cluster-name >>
              git-user-email: dev@sayway.com
              git-user-name: deploy
          - run:
              name: pull template repository
              command: |
                [ "$(ls -A << parameters.template-folder >>)" ] && \
                cd << parameters.template-folder >> && git pull --force && cd ..
                [ "$(ls -A << parameters.template-folder >>)" ] || \
                git clone << parameters.template-repository >> << parameters.template-folder >>
          - run:
              name: generate and commit template files
              command: |
                cd << parameters.template-folder >>
                IMAGE_TAG="<< parameters.image-tag >>"
                ./bin/generate.sh --file << parameters.template-file-path >> \
                --stage << parameters.environment >> \
                --commit-message "Update << parameters.template-file-path >> for << parameters.environment >> with tag ${IMAGE_TAG:-$CIRCLE_SHA1}" \
                << parameters.placeholder-name >>="${IMAGE_TAG:-$CIRCLE_SHA1}" \
                REPLICAS=<< parameters.replicas >>
          - kubernetes/create-or-update-resource:
              get-rollout-status: true
              namespace: << parameters.cluster-namespace >>
              resource-file-path: << parameters.template-folder >>/<< parameters.environment >>/<< parameters.template-file-path >>
              resource-name: << parameters.resource-name >>
jobs:
  test:
    working_directory: ~/say-way/core
    parallelism: 1
    shell: /bin/bash --login
    environment:
      CIRCLE_ARTIFACTS: /tmp/circleci-artifacts
      CIRCLE_TEST_REPORTS: /tmp/circleci-test-results
      KONFIG_CITUS__HOST: localhost
      KONFIG_CITUS__USER: postgres
      KONFIG_CITUS__DATABASE: sayway_test
      KONFIG_CITUS__PASSWORD: ""
      KONFIG_SPEC_REPORTER: true
    docker:
      - image: 567567013174.dkr.ecr.eu-central-1.amazonaws.com/core-ci:test-latest
        aws_auth:
          aws_access_key_id: $AWS_ACCESS_KEY_ID_STAGING
          aws_secret_access_key: $AWS_SECRET_ACCESS_KEY_STAGING
      - image: circleci/redis
      - image: rabbitmq:3.7.7
      - image: circleci/mongo:4.2
      - image: circleci/postgres:10.5-alpine
    steps:
      - checkout
      - run: mkdir -p $CIRCLE_ARTIFACTS $CIRCLE_TEST_REPORTS
      # This is based on your 1.0 configuration file or project settings
      - restore_cache:
          keys:
            - v1-dep-{{ checksum "Gemfile.lock" }}-
            # any recent Gemfile.lock
            - v1-dep-
      - run:
          name: install correct bundler version
          command: |
            export BUNDLER_VERSION="$(grep -A1 'BUNDLED WITH' Gemfile.lock | tail -n1 | tr -d ' ')"
            echo "export BUNDLER_VERSION=$BUNDLER_VERSION" >> $BASH_ENV
            gem install bundler --version $BUNDLER_VERSION
      - run: 'bundle check --path=vendor/bundle || bundle install --path=vendor/bundle --jobs=4 --retry=3'
      - run:
          name: copy test.yml.sample to test.yml
          command: cp config/test.yml.sample config/test.yml
      - run:
          name: Precompile and clean assets
          command: bundle exec rake assets:precompile assets:clean
      # Save dependency cache
      - save_cache:
          key: v1-dep-{{ checksum "Gemfile.lock" }}-{{ epoch }}
          paths:
            - vendor/bundle
            - public/assets
      - run:
          name: Audit bundle for known security vulnerabilities
          command: bundle exec bundle-audit check --update
      - run:
          name: Setup Database
          command: bundle exec ruby ~/sayway/setup_test_db.rb
      - run:
          name: Migrate Database
          command: bundle exec rake db:citus:migrate
      - run:
          name: Run tests
          command: bundle exec rails test -f
      # By default, running "rails test" won't run system tests.
      - run:
          name: Run system tests
          command: bundle exec rails test:system
      # Save test results
      - store_test_results:
          path: /tmp/circleci-test-results
      # Save artifacts
      - store_artifacts:
          path: /tmp/circleci-artifacts
      - store_artifacts:
          path: /tmp/circleci-test-results
  build-and-push-image:
    working_directory: ~/say-way/
    parallelism: 1
    shell: /bin/bash --login
    executor: aws-ecr/default
    steps:
      - checkout
      - run:
          name: Pull latest core images for cache
          command: |
            $(aws ecr get-login --no-include-email --region $AWS_REGION)
            docker pull "${AWS_ECR_ACCOUNT_URL}/core:latest"
      - docker/build:
          image: core
          registry: "${AWS_ECR_ACCOUNT_URL}"
          tag: "latest,${CIRCLE_SHA1}"
          cache_from: "${AWS_ECR_ACCOUNT_URL}/core:latest"
      - aws-ecr/push-image:
          repo: core
          tag: "latest,${CIRCLE_SHA1}"
  deploy-production:
    working_directory: ~/say-way/
    parallelism: 1
    shell: /bin/bash --login
    executor: deploy/default
    steps:
      - kubernetes/install-kubectl:
          kubectl-version: v1.22.0
      - rollbar/notify_deploy_started:
          environment: report
      - deploy/update-image:
          resource-name: deployment/core-web
          template-file-path: core-web-pod.yml
          cluster-name: report
          environment: report
          template-repository: git@github.com:say-way/sw-k8s.git
          replicas: 3
      - deploy/update-image:
          resource-name: deployment/core-worker
          template-file-path: core-worker-pod.yml
          cluster-name: report
          environment: report
          template-repository: git@github.com:say-way/sw-k8s.git
          replicas: 4
      - deploy/update-image:
          resource-name: deployment/core-worker-batch
          template-file-path: core-worker-batch-pod.yml
          cluster-name: report
          environment: report
          template-repository: git@github.com:say-way/sw-k8s.git
          replicas: 1
      - rollbar/notify_deploy_finished:
          deploy_id: "${ROLLBAR_DEPLOY_ID}"
          status: succeeded
  deploy-demo:
    working_directory: ~/say-way/
    parallelism: 1
    shell: /bin/bash --login
    executor: deploy/default
    steps:
      - kubernetes/install-kubectl:
          kubectl-version: v1.22.0
      - rollbar/notify_deploy_started:
          environment: demo
      - deploy/update-image:
          resource-name: deployment/core-web
          template-file-path: core-web-pod.yml
          cluster-name: demo
          environment: demo
          template-repository: git@github.com:say-way/sw-k8s.git
          replicas: 2
      - deploy/update-image:
          resource-name: deployment/core-worker
          template-file-path: core-worker-pod.yml
          cluster-name: demo
          environment: demo
          template-repository: git@github.com:say-way/sw-k8s.git
          replicas: 1
      - deploy/update-image:
          resource-name: deployment/core-worker-batch
          template-file-path: core-worker-batch-pod.yml
          cluster-name: demo
          environment: demo
          template-repository: git@github.com:say-way/sw-k8s.git
          replicas: 1
      - rollbar/notify_deploy_finished:
          deploy_id: "${ROLLBAR_DEPLOY_ID}"
          status: succeeded
workflows:
  version: 2.1
  build-n-test:
    jobs:
      - test:
          filters:
            branches:
              ignore: master
  build-approve-deploy:
    jobs:
      - build-and-push-image:
          context: Core
          filters:
            branches:
              only: master
      - approve-report-deploy:
          type: approval
          requires:
            - build-and-push-image
      - approve-demo-deploy:
          type: approval
          requires:
            - build-and-push-image
      - deploy-production:
          context: Core
          requires:
            - approve-report-deploy
      - deploy-demo:
          context: Core
          requires:
            - approve-demo-deploy

There is a problem in the aws-cli. It has already been fixed.

In my case, updating the aws-cli and regenerating ~/.kube/config helped.

  1. Update the aws-cli (following the documentation):
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --update
  2. Update the kube config:
mv ~/.kube/config ~/.kube/config.bk
aws eks update-kubeconfig --region ${AWS_REGION}  --name ${EKS_CLUSTER_NAME}
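
To confirm the fix took effect, a quick sanity check (the grep just looks for the exec plugin apiVersion line in the regenerated config):

aws --version
grep "client.authentication.k8s.io" ~/.kube/config
kubectl get nodes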

We have a fix here: https://github.com/aws/aws-cli/issues/6920#issuecomment-1119926885

Update the aws-cli (AWS CLI v1) to a version that includes the fix:

pip3 install awscli --upgrade --user

For AWS CLI v2, see the answers below.
Afterwards, don't forget to rewrite the kube config with:

aws eks update-kubeconfig --name ${EKS_CLUSTER_NAME} --region ${REGION}

This command should update the kube apiVersion to v1beta1.

In my case, changing the apiVersion to v1beta1 in the kube config file helped:

apiVersion: client.authentication.k8s.io/v1beta1
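
For context, the line lives in the exec section under users in ~/.kube/config; after the change, that entry should look roughly like the sketch below (the account ID, region, and cluster name are placeholders, not values from this question):

users:
- name: arn:aws:eks:eu-central-1:111111111111:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - --region
        - eu-central-1
        - eks
        - get-token
        - --cluster-name
        - my-cluster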

There is a glitch with the latest version of kubectl. For now, you can work around it with the following steps:

  1. curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl
  2. chmod +x ./kubectl
  3. sudo mv ./kubectl /usr/local/bin/kubectl
  4. sudo kubectl version

The simplest solution (it has already come up here, just phrased in more complicated words):

Open your kube config file and replace all instances of alpha with beta. (An editor with find-and-replace is recommended: Atom, Sublime, etc.)

Example with nano:

nano ~/.kube/config

Or with Atom:

atom ~/.kube/config

Then search for the alpha instances, replace them with beta, and save the file.
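
If you prefer the command line over an editor, a sed one-liner does the same replacement (the -i.bak form works with both GNU and BSD sed and leaves a backup at ~/.kube/config.bak):

sed -i.bak 's#client.authentication.k8s.io/v1alpha1#client.authentication.k8s.io/v1beta1#g' ~/.kube/config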

There is an issue between the latest kubectl and the aws-cli: https://github.com/aws/aws-cli/issues/6920

An alternative is to update the AWS CLI. It worked for me.

The remaining instructions are from the answer provided by bigLucas.

Update the aws-cli (AWS CLI v2) to the latest version:

winget install Amazon.AWSCLI

Afterwards, don't forget to rewrite the kube config with:

aws eks update-kubeconfig --name ${EKS_CLUSTER_NAME} --region ${REGION}

This command should update the kube apiVersion to v1beta1.

I changed the alpha1 value to beta1 in the config file, and it is working for me.

I was facing the same issue; the following steps fixed it for me:

  1. Back up the existing config file: mv ~/.kube/config ~/.kube/config.bk

  2. Run the following command:

aws eks update-kubeconfig --name ${EKS_CLUSTER_NAME} --region ${REGION}

  3. Then open the ~/.kube/config file in any text editor, update v1alpha1 to v1beta1, and try again.

Using kubectl 1.21.9 fixed it for me, installed with asdf:

asdf plugin-add kubectl https://github.com/asdf-community/asdf-kubectl.git
asdf install kubectl 1.21.9

I recommend having a .tool-versions file containing:

kubectl 1.21.9

  1. Open ~/.kube/config
  2. Search for the user of the cluster you are having trouble with, and replace client.authentication.k8s.io/v1alpha1 with client.authentication.k8s.io/v1beta1

Try upgrading the AWS Command Line Interface.

Steps:

  1. curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
  2. sudo installer -pkg ./AWSCLIV2.pkg -target /

You can use other options from the AWS documentation: Installing or updating the latest version of the AWS CLI.

Try updating your awscli (AWS Command Line Interface) version.

On macOS, that is brew upgrade awscli (Homebrew).

I hit the same problem: EKS version 1.22; kubectl works, at version v1.22.15-eks-fb459a0; helm is version 3.9+. When I ran helm ls -n $namespace, I got the error:

Error: Kubernetes cluster unreachable: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

Starting from here: this is a Helm version problem, so I downgraded the Helm version with the command:

curl -L https://git.io/get_helm.sh | bash -s -- --version v3.8.2

Helm works again now.
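
To confirm the downgrade took, check the client version (v3.8.2 is the expected output after the script above):

helm version --short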

For me, only changing v1alpha1 to v1beta1 in the kubeconfig fixed it.

On Windows, first delete the config file in the $HOME/.kube folder.

Then run the aws eks update-kubeconfig --name command as suggested by bigLucas.

I was able to fix this by running the following on a MacBook Pro with the M1 chip (Homebrew):

brew upgrade awscli

I simplified the workaround by just updating the awscli to awscli v2, but this also requires upgrading Python and pip. It needs at least Python 3.6 and pip3.

apt install python3-pip -y && pip3 install awscli --upgrade --user

Then update the cluster configuration with the awscli:

aws eks update-kubeconfig --region <regionname> --name <ClusterName>

Output:

Added new context arn:aws:eks:us-east-1:XXXXXXXXXXX:cluster/mycluster to /home/dev/.kube/config

Then check the connection to the cluster:

dev@ip-10-100-100-6:~$ kubectl get node
NAME                             STATUS   ROLES    AGE    VERSION
ip-X-XX-XX-XXX.ec2.internal   Ready    <none>   148m   v1.21.5-eks-9017834

You can run the following command on any host where kubectl and the aws-cli are present:

export KUBERNETES_EXEC_INFO='{"apiVersion":"client.authentication.k8s.io/v1beta1"}'

If you use 'sudo' when running kubectl commands, export it for the root user as well.
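
For example, assuming a reasonably recent sudo, you can pass the variable through explicitly instead of exporting it in root's shell:

export KUBERNETES_EXEC_INFO='{"apiVersion":"client.authentication.k8s.io/v1beta1"}'
sudo --preserve-env=KUBERNETES_EXEC_INFO kubectl get nodes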

apt install python3-pip -y
pip3 install awscli --upgrade --user

Try a different version of kubectl. If the Kubernetes server version is 1.23, you can use a nearby kubectl version: 1.23, 1.24, or 1.22.
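
To see which versions you are on before picking a matching kubectl (the --short flag exists on the kubectl releases discussed here; the output below is illustrative):

kubectl version --short
# Client Version: v1.23.6
# Server Version: v1.23.14-eks-ffeb93d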

For me, upgrading the aws-iam-authenticator from v0.5.5 to v0.5.9 solved the problem.
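
One way to install a specific aws-iam-authenticator release on Linux x86_64 (the asset name below follows the project's release naming convention; verify it against the GitHub releases page before running):

curl -Lo aws-iam-authenticator https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/v0.5.9/aws-iam-authenticator_0.5.9_linux_amd64
chmod +x aws-iam-authenticator
sudo mv aws-iam-authenticator /usr/local/bin/
aws-iam-authenticator version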

The only change required is:

v1alpha1 to v1beta1

Update this in your kube config.

For anyone else hitting this error on a local machine that already has the latest AWS CLI and kubectl installed: using pyenv may be messing with your setup.

For example, I have multiple Python versions installed for different projects:

$ pyenv versions
system
* 3.9.15

(Note that 3.9.15 is currently selected.)

$  aws --version
aws-cli/1.18.137 Python/3.9.15 Linux/6.4.11-arch2-1 botocore/1.17.60
$ kubectl get pods
error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

But switching the environment back to my system installation put me back on AWS CLI v2 and fixed my kubectl connection to the EKS cluster:

$ pyenv global system
$ aws --version
aws-cli/2.13.15 Python/3.11.3 Linux/6.4.11-arch2-1 source/x86_64.endeavouros prompt/off
$ kubectl get pods
No resources found in default namespace.
