Failed to set up pod network: Unhandled Exception killed plugin



I'm trying to install Kubernetes 1.4 with the rkt container runtime on CoreOS beta (1185.1.0).

I have two CoreOS PCs at home, both configured with etcd2 TLS certificates.

I patched the coreos-kubernetes generic install scripts to support etcd2 TLS certificates. The latest versions of the worker and controller install scripts are published at https://github.com/kfirufk/coreos-kubernetes-multi-node-generic-install-script

I used the following environment variables with the controller CoreOS install script (IP: 10.79.218.2, domain: coreos-2.tux-in.com):

ADVERTISE_IP=10.79.218.2
ETCD_ENDPOINTS="https://coreos-2.tux-in.com:2379,https://coreos-3.tux-in.com:2379"
K8S_VER=v1.4.1_coreos.0
HYPERKUBE_IMAGE_REPO=quay.io/coreos/hyperkube
POD_NETWORK=10.2.0.0/16
SERVICE_IP_RANGE=10.3.0.0/24
K8S_SERVICE_IP=10.3.0.1
DNS_SERVICE_IP=10.3.0.10
USE_CALICO=true
CONTAINER_RUNTIME=rkt
ETCD_CERT_FILE="/etc/ssl/etcd/etcd1.pem"
ETCD_KEY_FILE="/etc/ssl/etcd/etcd1-key.pem"
ETCD_TRUSTED_CA_FILE="/etc/ssl/etcd/ca.pem"
ETCD_CLIENT_CERT_AUTH=true
OVERWRITE_ALL_FILES=true
CONTROLLER_HOSTNAME="coreos-2.tux-in.com"
ETCD_CERT_ROOT_DIR="/etc/ssl/etcd"
ETCD_SCHEME="https"
ETCD_AUTHORITY="coreos-2.tux-in.com:2379"
IS_MASK_UPDATE_ENGINE=false

and these are the environment variables I used with the worker install script (IP: 10.79.218.3, domain: coreos-3.tux-in.com):

ETCD_AUTHORITY=coreos-3.tux-in.com:2379
ETCD_ENDPOINTS="https://coreos-2.tux-in.com:2379,https://coreos-3.tux-in.com:2379"
CONTROLLER_ENDPOINT=https://coreos-2.tux-in.com
K8S_VER=v1.4.1_coreos.0
HYPERKUBE_IMAGE_REPO=quay.io/coreos/hyperkube
DNS_SERVICE_IP=10.3.0.10
USE_CALICO=true
CONTAINER_RUNTIME=rkt
OVERWRITE_ALL_FILES=true
ADVERTISE_IP=10.79.218.3
ETCD_CERT_FILE="/etc/ssl/etcd/etcd2.pem"
ETCD_KEY_FILE="/etc/ssl/etcd/etcd2-key.pem"
ETCD_TRUSTED_CA_FILE="/etc/ssl/etcd/ca.pem"
ETCD_SCHEME="https"
IS_MASK_UPDATE_ENGINE=false
After installing Kubernetes on both machines and configuring kubectl properly, running kubectl get nodes gives me:
NAME          STATUS                     AGE
10.79.218.2   Ready,SchedulingDisabled   1h
10.79.218.3   Ready                      1h

kubectl get pods --namespace=kube-system returns:

NAME                                  READY     STATUS              RESTARTS   AGE
heapster-v1.2.0-3646253287-j951o      0/2       ContainerCreating   0          1d
kube-apiserver-10.79.218.2            1/1       Running             0          1d
kube-controller-manager-10.79.218.2   1/1       Running             0          1d
kube-dns-v20-u3pd0                    0/3       ContainerCreating   0          1d
kube-proxy-10.79.218.2                1/1       Running             0          1d
kube-proxy-10.79.218.3                1/1       Running             0          1d
kube-scheduler-10.79.218.2            1/1       Running             0          1d
kubernetes-dashboard-v1.4.1-ehiez     0/1       ContainerCreating   0          1d

So heapster-v1.2.0-3646253287-j951o, kube-dns-v20-u3pd0 and kubernetes-dashboard-v1.4.1-ehiez are stuck in ContainerCreating status.

When I run kubectl describe on any of them, I basically get the same error: Error syncing pod, skipping: failed to SyncPod: failed to set up pod network: Unhandled Exception killed plugin.

For example, kubectl describe pods kubernetes-dashboard-v1.4.1-ehiez --namespace kube-system returns:
Name:       kubernetes-dashboard-v1.4.1-ehiez
Namespace:  kube-system
Node:       10.79.218.3/10.79.218.3
Start Time: Mon, 17 Oct 2016 23:31:43 +0300
Labels:     k8s-app=kubernetes-dashboard
        kubernetes.io/cluster-service=true
        version=v1.4.1
Status:     Pending
IP:
Controllers:    ReplicationController/kubernetes-dashboard-v1.4.1
Containers:
  kubernetes-dashboard:
    Container ID:
    Image:      gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.1
    Image ID:
    Port:       9090/TCP
    Limits:
      cpu:  100m
      memory:   50Mi
    Requests:
      cpu:      100m
      memory:       50Mi
    State:      Waiting
      Reason:       ContainerCreating
    Ready:      False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-svbiv (ro)
    Environment Variables:  <none>
Conditions:
  Type      Status
  Initialized   True
  Ready     False
  PodScheduled  True
Volumes:
  default-token-svbiv:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-svbiv
QoS Class:  Guaranteed
Tolerations:    CriticalAddonsOnly=:Exists
Events:
  FirstSeen LastSeen    Count   From            SubobjectPath   Type        Reason      Message
  --------- --------    -----   ----            -------------   --------    ------      -------
  1d        25s     9350    {kubelet 10.79.218.3}           Warning     FailedSync  Error syncing pod, skipping: failed to SyncPod: failed to set up pod network: Unhandled Exception killed plugin

I'm guessing the pod network isn't working because of a misconfigured Calico setup…

So I tried to install the calicoctl rkt container, but it has problems of its own. That's a different Stack Overflow question, though :) starting a calicoctl container on CoreOS

So I can't check whether Calico is working properly.

This is the calico-network systemd service file for the controller node:

[Unit]
Description=Calico per-host agent
Requires=network-online.target
After=network-online.target
[Service]
Slice=machine.slice
Environment=CALICO_DISABLE_FILE_LOGGING=true
Environment=HOSTNAME=10.79.218.3
Environment=IP=10.79.218.3
Environment=FELIX_FELIXHOSTNAME=10.79.218.3
Environment=CALICO_NETWORKING=true
Environment=NO_DEFAULT_POOLS=true
Environment=ETCD_ENDPOINTS=https://coreos-2.tux-in.com:2379,https://coreos-3.tux-in.com:2379
Environment=ETCD_AUTHORITY=coreos-3.tux-in.com:2379
Environment=ETCD_SCHEME=https
Environment=ETCD_CA_CERT_FILE=/etc/ssl/etcd/ca.pem
Environment=ETCD_CERT_FILE=/etc/ssl/etcd/etcd2.pem
Environment=ETCD_KEY_FILE=/etc/ssl/etcd/etcd2-key.pem
ExecStart=/usr/bin/rkt run --inherit-env --stage1-from-dir=stage1-fly.aci --volume=var-run-calico,kind=host,source=/var/run/calico --volume=modules,kind=host,source=/lib/modules,readOnly=false --mount=volume=modules,target=/lib/modules --volume=dns,kind=host,source=/etc/resolv.conf,readOnly=true --volume=etcd-tls-certs,kind=host,source=/etc/ssl/etcd,readOnly=true --mount=volume=dns,target=/etc/resolv.conf --mount=volume=etcd-tls-certs,target=/etc/ssl/etcd --mount=volume=var-run-calico,target=/var/run/calico --trust-keys-from-https quay.io/calico/node:v0.22.0
KillMode=mixed
Restart=always
TimeoutStartSec=0
[Install]
WantedBy=multi-user.target

and this is the calico-node service file for the worker node:

[Unit]
Description=Calico per-host agent
Requires=network-online.target
After=network-online.target
[Service]
Slice=machine.slice
Environment=CALICO_DISABLE_FILE_LOGGING=true
Environment=HOSTNAME=10.79.218.2
Environment=IP=10.79.218.2
Environment=FELIX_FELIXHOSTNAME=10.79.218.2
Environment=CALICO_NETWORKING=true
Environment=NO_DEFAULT_POOLS=false
Environment=ETCD_ENDPOINTS=https://coreos-2.tux-in.com:2379,https://coreos-3.tux-in.com:2379
ExecStart=/usr/bin/rkt run --inherit-env --stage1-from-dir=stage1-fly.aci --volume=var-run-calico,kind=host,source=/var/run/calico --volume=modules,kind=host,source=/lib/modules,readOnly=false --mount=volume=modules,target=/lib/modules --volume=dns,kind=host,source=/etc/resolv.conf,readOnly=true --volume=etcd-tls-certs,kind=host,source=/etc/ssl/etcd,readOnly=true --mount=volume=dns,target=/etc/resolv.conf --mount=volume=etcd-tls-certs,target=/etc/ssl/etcd --mount=volume=var-run-calico,target=/var/run/calico --trust-keys-from-https quay.io/calico/node:v0.22.0
KillMode=mixed
Environment=ETCD_CA_CERT_FILE=/etc/ssl/etcd/ca.pem
Environment=ETCD_CERT_FILE=/etc/ssl/etcd/etcd1.pem
Environment=ETCD_KEY_FILE=/etc/ssl/etcd/etcd1-key.pem
Restart=always
TimeoutStartSec=0
[Install]
WantedBy=multi-user.target

and this is the content of /etc/kubernetes/cni/net.d/10-calico.conf on the controller node:

{
    "name": "calico",
    "type": "flannel",
    "delegate": {
        "type": "calico",
        "etcd_endpoints": "https://coreos-2.tux-in.com:2379,https://coreos-3.tux-in.com:2379",
        "etcd_key_file": "/etc/ssl/etcd/etcd1-key.pem",
        "etcd_cert_file": "/etc/ssl/etcd/etcd1.pem",
        "etcd_ca_cert_file": "/etc/ssl/etcd/ca.pem",
        "log_level": "none",
        "log_level_stderr": "info",
        "hostname": "10.79.218.2",
        "policy": {
            "type": "k8s",
            "k8s_api_root": "http://127.0.0.1:8080/api/v1/"
        }
    }
}

and this is /etc/kubernetes/cni/net.d/10-calico.conf on the worker node:

{
    "name": "calico",
    "type": "flannel",
    "delegate": {
        "type": "calico",
        "etcd_endpoints": "https://coreos-2.tux-in.com:2379,https://coreos-3.tux-in.com:2379",
        "etcd_key_file": "/etc/ssl/etcd/etcd2-key.pem",
        "etcd_cert_file": "/etc/ssl/etcd/etcd2.pem",
        "etcd_ca_cert_file": "/etc/ssl/etcd/ca.pem",
        "log_level": "debug",
        "log_level_stderr": "info",
        "hostname": "10.79.218.3",
        "policy": {
            "type": "k8s",
            "k8s_api_root": "https://coreos-2.tux-in.com:443/api/v1/",
            "k8s_client_key": "/etc/kubernetes/ssl/worker-key.pem",
            "k8s_client_certificate": "/etc/kubernetes/ssl/worker.pem"
        }
    }
}

Now I don't know how to investigate this issue further. As far as I know the new calico-cni no longer stores log messages in a log file, so I'm stuck from here. Any information about this issue would be greatly appreciated.

Thanks!

The Calico CNI plugin is producing the "Unhandled Exception Killed plugin" error message. In my experience, that means it's unlikely that a problem with calico-node.service is causing the error.

So it's probably a subtle problem with your CNI network configuration. Could you share that file?

The CNI plugin should also emit more detailed log messages - either to stderr or to /var/log/calico/cni/calico.log, depending on how it's configured in your CNI network config. That file should give you more clues about exactly what's going wrong.
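For instance, something like this pulls the tail of that log on the affected node. A sketch only: the log path is the one mentioned above, and the helper simply returns nothing if the file doesn't exist.

```python
import os

LOG_PATH = "/var/log/calico/cni/calico.log"  # path mentioned above

def tail(path, n=40):
    """Return the last n lines of the log, or [] if the file is absent."""
    if not os.path.exists(path):
        return []
    with open(path) as f:
        return f.readlines()[-n:]

for line in tail(LOG_PATH):
    print(line, end="")
```

A Python traceback near the end of that log usually pinpoints which config field the plugin choked on.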

All that said, the "Unhandled Exception" error comes from the Python version of the CNI plugin, which is rather old at this point. I'd recommend upgrading to the latest stable release from here: https://github.com/projectcalico/calico-cni/releases
