Docker-compose to K8s: how do I resolve "didn't match Pod's node affinity"?



This is a follow-up to my earlier Hyperledger Fabric question - Is it possible to migrate from Docker swarm to Kubernetes?

After running kompose convert on my docker-compose files, I got exactly the same files as those listed in the answer I accepted. I then ran the following commands in order:

$ kubectl apply -f dev-orderer1-pod.yaml
$ kubectl apply -f dev-orderer1-service.yaml
$ kubectl apply -f dev-peer1-pod.yaml
$ kubectl apply -f dev-peer1-service.yaml
$ kubectl apply -f dev-couchdb1-pod.yaml
$ kubectl apply -f dev-couchdb1-service.yaml
$ kubectl apply -f ar2bc-networkpolicy.yaml

When I try to view my pods, this is what I see:

$ kubectl get pod
NAME           READY   STATUS    RESTARTS   AGE
dev-couchdb1   0/1     Pending   0          7m20s
dev-orderer1   0/1     Pending   0          8m25s
dev-peer1      0/1     Pending   0          7m39s

When I try to describe any of the three pods, this is what I see:

$ kubectl describe pod dev-orderer1
Name:         dev-orderer1
Namespace:    default
Priority:     0
Node:         <none>
Labels:       io.kompose.network/ar2bc=true
              io.kompose.service=dev-orderer1
Annotations:  kompose.cmd: kompose convert -f docker-compose-orderer1.yaml -f docker-compose-peer1.yaml --volumes hostPath
              kompose.version: 1.22.0 (955b78124)
Status:       Pending
IP:
IPs:          <none>
Containers:
  dev-orderer1:
    Image:      hyperledger/fabric-orderer:latest
    Port:       7050/TCP
    Host Port:  0/TCP
    Args:
      orderer
    Environment:
      ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE:  /var/hyperledger/orderer/tls/server.crt
      ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY:   /var/hyperledger/orderer/tls/server.key
      ORDERER_GENERAL_CLUSTER_ROOTCAS:            [/var/hyperledger/orderer/tls/ca.crt]
      ORDERER_GENERAL_GENESISFILE:                /var/hyperledger/orderer/orderer.genesis.block
      ORDERER_GENERAL_GENESISMETHOD:              file
      ORDERER_GENERAL_LISTENADDRESS:              0.0.0.0
      ORDERER_GENERAL_LOCALMSPDIR:                /var/hyperledger/orderer/msp
      ORDERER_GENERAL_LOCALMSPID:                 OrdererMSP
      ORDERER_GENERAL_LOGLEVEL:                   INFO
      ORDERER_GENERAL_TLS_CERTIFICATE:            /var/hyperledger/orderer/tls/server.crt
      ORDERER_GENERAL_TLS_ENABLED:                true
      ORDERER_GENERAL_TLS_PRIVATEKEY:             /var/hyperledger/orderer/tls/server.key
      ORDERER_GENERAL_TLS_ROOTCAS:                [/var/hyperledger/orderer/tls/ca.crt]
    Mounts:
      /var/hyperledger/orderer/msp from dev-orderer1-hostpath1 (rw)
      /var/hyperledger/orderer/orderer.genesis.block from dev-orderer1-hostpath0 (rw)
      /var/hyperledger/orderer/tls from dev-orderer1-hostpath2 (rw)
      /var/hyperledger/production/orderer from orderer1 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-44lfq (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  dev-orderer1-hostpath0:
    Type:          HostPath (bare host directory volume)
    Path:          /home/isprintsg/hlf/channel-artifacts/genesis.block
    HostPathType:
  dev-orderer1-hostpath1:
    Type:          HostPath (bare host directory volume)
    Path:          /home/isprintsg/hlf/crypto-config/ordererOrganizations/ar2dev.accessreal.com/orderers/orderer1.ar2dev.accessreal.com/msp
    HostPathType:
  dev-orderer1-hostpath2:
    Type:          HostPath (bare host directory volume)
    Path:          /home/isprintsg/hlf/crypto-config/ordererOrganizations/ar2dev.accessreal.com/orderers/orderer1.ar2dev.accessreal.com/tls
    HostPathType:
  orderer1:
    Type:          HostPath (bare host directory volume)
    Path:          /home/isprintsg/hlf
    HostPathType:
  default-token-44lfq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-44lfq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/hostname=isprintdev
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  51s (x27 over 27m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match Pod's node affinity.

The error message at the end is the same for all three pods. I tried Googling the message, but surprisingly I did not get any direct hits. What does this message mean, and how should I resolve it? In case it matters, I am still quite new to Kubernetes.


Edit

Here are the YAML files in question:

  • dev-orderer1-pod.yaml - https://pastebin.com/PQUnz3Q2
  • dev-orderer1-service.yaml - https://pastebin.com/gxuHNvAX
  • dev-peer1-pod.yaml - https://pastebin.com/hwUQdq5L
  • dev-peer1-service.yaml - https://pastebin.com/n2Q8uMFB
  • dev-couchdb1-pod.yaml - https://pastebin.com/HTC3TQPz
  • dev-couchdb1-service.yaml - https://pastebin.com/Sg6ZkrHz
  • ar2bc-networkpolicy.yaml - https://pastebin.com/fjEdAGJe

I stumbled across this question while researching a related problem. If it helps, I believe this is your issue:

Node-Selectors:  kubernetes.io/hostname=isprintdev

That node selector tells Kubernetes to schedule these pods only on a node whose hostname is isprintdev :(
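You can verify this by comparing the selector against the labels your node actually carries. The kubernetes.io/hostname label is set by the kubelet from the node's own name, so if your single node is not named isprintdev the selector can never be satisfied:

$ kubectl get nodes --show-labels

If the hostname does not match, the simplest fix is to remove (or correct) the nodeSelector in each pod YAML. Judging from the describe output, each pod spec should contain a block along these lines (a sketch, not your exact file - yours are in the pastebins):

spec:
  nodeSelector:
    kubernetes.io/hostname: isprintdev

Since nodeSelector is immutable on an existing pod, delete each pending pod first and then re-apply the edited file, for example:

$ kubectl delete -f dev-orderer1-pod.yaml
$ kubectl apply -f dev-orderer1-pod.yaml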

D