Kubernetes/Ubuntu newbie here!
I'm building a k8s cluster with a single Raspberry Pi (hopefully more in the future). I'm using microk8s v1.18.8 and Ubuntu Server 20.04.1 LTS (GNU/Linux 5.4.0-1018-raspi aarch64).
I'm trying to reach a k8s service on port 80, but I can't get it set up correctly. I've also set up a static IP address for reaching the service, and I forward traffic from my router to the service's IP address.
I'd like to know what I'm doing wrong, or whether there's a better way to do what I'm attempting!
The steps I'm following:
- I ran microk8s enable dns metallb and gave MetalLB an IP address range (192.168.0.90-192.168.0.99) that is not handled by my DHCP server.
- I installed ingress-nginx by running kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.35.0/deploy/static/provider/baremetal/deploy.yaml. This creates a NodePort service for ingress-nginx-controller, which doesn't work with MetalLB, so I changed the service's spec.type from NodePort to LoadBalancer by running kubectl edit service ingress-nginx-controller -n ingress-nginx. MetalLB then assigned the IP 192.168.0.90 to that service.
- I then applied the following configuration file:
apiVersion: v1
kind: Service
metadata:
  name: wow-ah-api-service
  namespace: develop
spec:
  selector:
    app: wow-ah-api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: wow-ah-api
  namespace: develop
spec:
  # 3 Pods should exist at all times.
  replicas: 3
  selector:
    matchLabels:
      app: wow-ah-api
  template:
    metadata:
      namespace: develop
      labels:
        # Apply this label to pods and default
        # the Deployment label selector to this value
        app: wow-ah-api
    spec:
      imagePullSecrets:
        - name: some-secret
      containers:
        - name: wow-ah-api
          # Run this image
          image: some-image
          imagePullPolicy: Always
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: wow-ah-api-ingress
  namespace: develop
spec:
  backend:
    serviceName: wow-ah-api-service
    servicePort: 3000
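A note for readers on two points in the Ingress above: the Service exposes port 80 (3000 is only the container's targetPort), so the Ingress backend should most likely reference port 80 rather than 3000; and networking.k8s.io/v1beta1 was removed in Kubernetes v1.22. On newer clusters, a sketch of the equivalent default-backend Ingress in the stable API (assuming the same names) would be:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wow-ah-api-ingress
  namespace: develop
spec:
  # defaultBackend replaces the v1beta1 spec.backend field;
  # the port here is the Service port, not the container targetPort
  defaultBackend:
    service:
      name: wow-ah-api-service
      port:
        number: 80
```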
Here is some of the output I'm seeing:
microk8s kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
develop pod/wow-ah-api-6c4bff88f9-2x48v 1/1 Running 4 4h21m
develop pod/wow-ah-api-6c4bff88f9-ccw9z 1/1 Running 4 4h21m
develop pod/wow-ah-api-6c4bff88f9-rd6lp 1/1 Running 4 4h21m
ingress-nginx pod/ingress-nginx-admission-create-mnn8g 0/1 Completed 0 4h27m
ingress-nginx pod/ingress-nginx-admission-patch-x5r6d 0/1 Completed 1 4h27m
ingress-nginx pod/ingress-nginx-controller-7896b4fbd4-nglsd 1/1 Running 4 4h27m
kube-system pod/coredns-588fd544bf-576x5 1/1 Running 4 4h26m
metallb-system pod/controller-5f98465b6b-hcj9g 1/1 Running 4 4h23m
metallb-system pod/speaker-qc9pc 1/1 Running 4 4h23m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 21h
develop service/wow-ah-api-service ClusterIP 10.152.183.88 <none> 80/TCP 4h21m
ingress-nginx service/ingress-nginx-controller LoadBalancer 10.152.183.216 192.168.0.90 80:32151/TCP,443:30892/TCP 4h27m
ingress-nginx service/ingress-nginx-controller-admission ClusterIP 10.152.183.41 <none> 443/TCP 4h27m
kube-system service/kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 4h26m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
metallb-system daemonset.apps/speaker 1 1 1 1 1 beta.kubernetes.io/os=linux 4h23m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
develop deployment.apps/wow-ah-api 3/3 3 3 4h21m
ingress-nginx deployment.apps/ingress-nginx-controller 1/1 1 1 4h27m
kube-system deployment.apps/coredns 1/1 1 1 4h26m
metallb-system deployment.apps/controller 1/1 1 1 4h23m
NAMESPACE NAME DESIRED CURRENT READY AGE
develop replicaset.apps/wow-ah-api-6c4bff88f9 3 3 3 4h21m
ingress-nginx replicaset.apps/ingress-nginx-controller-7896b4fbd4 1 1 1 4h27m
kube-system replicaset.apps/coredns-588fd544bf 1 1 1 4h26m
metallb-system replicaset.apps/controller-5f98465b6b 1 1 1 4h23m
NAMESPACE NAME COMPLETIONS DURATION AGE
ingress-nginx job.batch/ingress-nginx-admission-create 1/1 27s 4h27m
ingress-nginx job.batch/ingress-nginx-admission-patch 1/1 29s 4h27m
microk8s kubectl get ingress --all-namespaces
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
develop wow-ah-api-ingress <none> * 192.168.0.236 80 4h23m
I keep thinking this might have something to do with my iptables configuration, but I'm not sure how to configure them to work with microk8s.
sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL all -- anywhere anywhere
Chain FORWARD (policy ACCEPT)
target prot opt source destination
KUBE-FORWARD all -- anywhere anywhere /* kubernetes forwarding rules */
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
ACCEPT all -- 10.1.0.0/16 anywhere /* generated for MicroK8s pods */
ACCEPT all -- anywhere 10.1.0.0/16 /* generated for MicroK8s pods */
ACCEPT all -- 10.1.0.0/16 anywhere
ACCEPT all -- anywhere 10.1.0.0/16
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL all -- anywhere anywhere
Chain KUBE-EXTERNAL-SERVICES (1 references)
target prot opt source destination
Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- anywhere anywhere /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
DROP all -- !localhost/8 localhost/8 /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT
Chain KUBE-FORWARD (1 references)
target prot opt source destination
DROP all -- anywhere anywhere ctstate INVALID
ACCEPT all -- anywhere anywhere /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT all -- anywhere anywhere /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED
Chain KUBE-KUBELET-CANARY (0 references)
target prot opt source destination
Chain KUBE-PROXY-CANARY (0 references)
target prot opt source destination
Chain KUBE-SERVICES (3 references)
target prot opt source destination
Update #1
The MetalLB ConfigMap (from microk8s kubectl edit ConfigMap/config -n metallb-system):
apiVersion: v1
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.0.90-192.168.0.99
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"config":"address-pools:\n- name: default\n  protocol: layer2\n  addresses:\n  - 192.168.0.90-192.168.0.99\n"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"config","namespace":"metallb-system"}}
  creationTimestamp: "2020-09-19T21:18:45Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:config: {}
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
    manager: kubectl
    operation: Update
    time: "2020-09-19T21:18:45Z"
  name: config
  namespace: metallb-system
  resourceVersion: "133422"
  selfLink: /api/v1/namespaces/metallb-system/configmaps/config
  uid: 774f6a73-b1e1-4e26-ba73-ef71bc2e1060
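Stripped of the server-managed fields (annotations, managedFields, resourceVersion, and so on), the address pool applied above boils down to this minimal manifest:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  # Layer 2 pool: MetalLB answers ARP for addresses in this range
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.0.90-192.168.0.99
```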
Any help would be greatly appreciated!
Short answer:
- You only need (and probably already have) one IP address. It must be reachable (pingable) from outside the MicroK8s machine.
- The manual ingress-nginx install is the mistake. Remove that step.
Long answer, by example:
Start from a clean MicroK8s with only one public IP (or local machine IP; for your use case I'll use 192.168.0.90).
How do you test? For example, from outside the machine:
curl -H "Host: blue.nginx.example.com" http://PUBLIC_IP
Run the test. It will certainly fail.
Enable the MicroK8s dns and ingress addons:
microk8s.enable dns ingress
Run the test. Does it fail?
If it fails with the same error: you need MetalLB.
With an internet public IP:
microk8s.enable metallb:$(curl ipinfo.io/ip)-$(curl ipinfo.io/ip)
With the LAN IP 192.168.0.90:
microk8s.enable metallb:192.168.0.90-192.168.0.90
Run the test again.
If the test does NOT return a 503 or 404, you cannot proceed to the next step. You probably have a network problem or a firewall filter in the way.
The Ingress layer
Our test now reaches the MicroK8s ingress controller. It doesn't know what to do with the request and returns a 404 (sometimes a 503).
That's fine. Next!
I'll use an example from https://youtu.be/A_PjjCM1eLA?t=984 (at 16:24):
[Kube 32] Set up Traefik Ingress on a bare-metal Kubernetes cluster
Set up a kubectl alias:
alias kubectl=microk8s.kubectl
Deploy the applications:
kubectl create -f https://raw.githubusercontent.com/justmeandopensource/kubernetes/master/yamls/ingress-demo/nginx-deploy-main.yaml
kubectl create -f https://raw.githubusercontent.com/justmeandopensource/kubernetes/master/yamls/ingress-demo/nginx-deploy-blue.yaml
kubectl create -f https://raw.githubusercontent.com/justmeandopensource/kubernetes/master/yamls/ingress-demo/nginx-deploy-green.yaml
Expose the applications on the internal cluster network (ClusterIP by default):
kubectl expose deploy nginx-deploy-main --port 80
kubectl expose deploy nginx-deploy-blue --port 80
kubectl expose deploy nginx-deploy-green --port 80
Run the test. It doesn't work... yet.
An example of ingress rules: this configures the hosts nginx.example.com, blue.nginx.example.com, and green.nginx.example.com, routing by hostname and distributing requests to the exposed deployments:
kubectl create -f https://raw.githubusercontent.com/justmeandopensource/kubernetes/master/yamls/ingress-demo/ingress-resource-2.yaml
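I haven't reproduced that file here, but a host-based fan-out Ingress of the kind it defines looks roughly like this sketch (written against the v1beta1 API matching this cluster version, with service names and ports assumed to match the expose commands above):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-resource-2
spec:
  # Each rule matches on the HTTP Host header and
  # forwards to the corresponding ClusterIP service
  rules:
  - host: nginx.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-deploy-main
          servicePort: 80
  - host: blue.nginx.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-deploy-blue
          servicePort: 80
  - host: green.nginx.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-deploy-green
          servicePort: 80
```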
Run this test:
curl -H "Host: blue.nginx.example.com" http://PUBLIC_IP
Now you'll get a response like:
<h1>I am <font color=blue>BLUE</font></h1>
You can play with:
curl -H "Host: nginx.example.com" http://PUBLIC_IP
curl -H "Host: blue.nginx.example.com" http://PUBLIC_IP
curl -H "Host: green.nginx.example.com" http://PUBLIC_IP
In conclusion:
- We have only one IP address and multiple hosts
- We have three different services using the same port
- Request distribution is done by the Ingress
Just started using MicroK8s and it seems to have a lot of promise. After combing through information sites and documentation, I was able to implement a bare-metal demo using the Traefik ingress controller (with custom resource definitions and IngressRoutes), the Linkerd service mesh, and the MetalLB load balancer. This was done on a VirtualBox guest VM running Ubuntu 20.04. The GitHub link also includes a way to expose the Traefik ingress controller's external IP, provided by MetalLB, outside the guest VM. See https://github.com/msb1/microk8s-traefik-linkerd-whoami.
I prefer this implementation over the one shown in the YouTube link because it includes a working service mesh and uses custom resource definitions for ingress (which are unique to Traefik, and one of the reasons to keep using Traefik over other ingress controllers).
Hope this helps others - you should be able to build excellent deployments with MicroK8s starting from this demo (which is its current focus).