Can we set up a bare-metal k8s cluster to run a BIND DNS server (named) and reach it from the outside on port 53?



I have installed a bare-metal k8s cluster with kubespray using the default settings (kube_proxy_mode: iptables and dns_mode: coredns), and I want to run a BIND DNS server inside it to manage a couple of domain names.

I deployed a helloworld web app with Helm 3 for testing. Everything works like a charm (HTTP, HTTPS, Let's Encrypt thanks to cert-manager).

kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T21:03:42Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.7", GitCommit:"be3d344ed06bff7a4fc60656200a93c74f31f9a4", GitTreeState:"clean", BuildDate:"2020-02-11T19:24:46Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
k8smaster   Ready    master   22d   v1.16.7
k8sslave    Ready    <none>   21d   v1.16.7

I deployed the image of my BIND DNS server (named) in the default namespace with a Helm 3 chart; the service exposes port 53 of the bind app container.

I have tested the DNS resolution with a pod and the bind service; it works fine. Here is a test of the bind k8s service from the master node:

kubectl -n default get svc bind -o wide
NAME   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE    SELECTOR
bind   ClusterIP   10.233.31.255   <none>        53/TCP,53/UDP   4m5s   app=bind,release=bind
kubectl get endpoints bind
NAME   ENDPOINTS                                                        AGE
bind   10.233.75.239:53,10.233.93.245:53,10.233.75.239:53 + 1 more...   4m12s
export SERVICE_IP=`kubectl get services bind -o go-template='{{.spec.clusterIP}}{{"\n"}}'`
nslookup www.example.com ${SERVICE_IP}
Server:     10.233.31.255
Address:    10.233.31.255#53
Name:   www.example.com
Address: 176.31.XXX.XXX

So the BIND DNS app is deployed and works fine through the bind k8s service.

For the next step, I followed the documentation at https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/ to set up the NGINX Ingress Controller (both the ConfigMaps and the service) to handle TCP/UDP requests on port 53 and redirect them to the BIND DNS app.
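The mapping from that guide looks roughly like this (a sketch only; the exact ConfigMap names and namespace depend on how the ingress controller was installed, and `default/bind:53` points to my bind service):

```yaml
# tcp-services / udp-services ConfigMaps read by ingress-nginx.
# Key = external port to listen on, value = <namespace>/<service>:<port>.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "53": default/bind:53
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  "53": default/bind:53
```

The ingress controller's own Service must also expose port 53 for TCP and UDP for these mappings to take effect.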

When I test the name resolution from an external machine, it does not work:

nslookup www.example.com <IP of the k8s master>
;; connection timed out; no servers could be reached

I dug into the k8s configuration, logs, etc., and found a warning message in the kube-proxy logs:

ps auxw | grep kube-proxy
root     19984  0.0  0.2 141160 41848 ?        Ssl  Mar26  19:39 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=k8smaster
journalctl --since "2 days ago" | grep kube-proxy
<NOTHING RETURNED>
KUBEPROXY_FIRST_POD=`kubectl get pods -n kube-system -l k8s-app=kube-proxy -o go-template='{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | head -n 1`
kubectl logs -n kube-system ${KUBEPROXY_FIRST_POD}
I0326 22:26:03.491900       1 node.go:135] Successfully retrieved node IP: 91.121.XXX.XXX
I0326 22:26:03.491957       1 server_others.go:150] Using iptables Proxier.
I0326 22:26:03.492453       1 server.go:529] Version: v1.16.7
I0326 22:26:03.493179       1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0326 22:26:03.493647       1 config.go:131] Starting endpoints config controller
I0326 22:26:03.493663       1 config.go:313] Starting service config controller
I0326 22:26:03.493669       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0326 22:26:03.493679       1 shared_informer.go:197] Waiting for caches to sync for service config
I0326 22:26:03.593986       1 shared_informer.go:204] Caches are synced for endpoints config 
I0326 22:26:03.593992       1 shared_informer.go:204] Caches are synced for service config 
E0411 17:02:48.113935       1 proxier.go:927] can't open "externalIP for ingress-nginx/ingress-nginx:bind-udp" (91.121.XXX.XXX:53/udp), skipping this externalIP: listen udp 91.121.XXX.XXX:53: bind: address already in use
E0411 17:02:48.119378       1 proxier.go:927] can't open "externalIP for ingress-nginx/ingress-nginx:bind-tcp" (91.121.XXX.XXX:53/tcp), skipping this externalIP: listen tcp 91.121.XXX.XXX:53: bind: address already in use

Then I looked for what was already using port 53…

netstat -lpnt | grep 53
tcp        0      0 0.0.0.0:5355            0.0.0.0:*               LISTEN      1682/systemd-resolv 
tcp        0      0 87.98.XXX.XXX:53        0.0.0.0:*               LISTEN      19984/kube-proxy    
tcp        0      0 169.254.25.10:53        0.0.0.0:*               LISTEN      14448/node-cache    
tcp6       0      0 :::9253                 :::*                    LISTEN      14448/node-cache    
tcp6       0      0 :::9353                 :::*                    LISTEN      14448/node-cache

Looking at process 14448 (node-cache):

cat /proc/14448/cmdline 
/node-cache-localip169.254.25.10-conf/etc/coredns/Corefile-upstreamsvccoredns

So coredns is already handling port 53, which is normal since it is the k8s internal DNS service.

In the coredns documentation (https://github.com/coredns/coredns/blob/master/README.md) they mention a -dns.port option to use a distinct port… but when I dig into kubespray (it has 3 jinja templates https://github.com/kubernetes-sigs/kubespray/tree/release-2.12/roles/kubernetes-apps/ansible/templates for creating the coredns configmap/service/deployment, similar to https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#coredns), everything is hardcoded to port 53.
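For context, CoreDNS can also take its listening port from the server block in the Corefile (the `.:<port>` prefix), so moving it off 53 would mean patching kubespray's template to emit something like this (a sketch only; the plugin list shown is illustrative, not kubespray's exact output):

```
.:5353 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
    }
    forward . /etc/resolv.conf
    cache 30
}
```

Every consumer of cluster DNS (kubelet's clusterDNS setting, nodelocaldns, the kube-dns Service) would then have to agree on the new port, which is why this option looks hard.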

So my question is: is there a k8s cluster configuration/workaround so that I can run my own DNS server and expose it on port 53?

Maybe:

  • Set up coredns to use a port other than 53? It looks hard, and I'm really not sure it makes sense.
  • I could set up my bind k8s service to expose port 5353, and configure the nginx ingress controller to handle this port 5353 and redirect to the app's port 53. But then I would need iptables rules to route external DNS requests received on port 53 to the bind k8s service on port 5353. What would the iptables configuration be (INPUT/PREROUTING or FORWARD)? Would such a network configuration break coredns?

Regards,

Chris

I guess your nginx ingress is not working as intended. You need a load-balancer provider, such as MetalLB, for your bare-metal k8s cluster in order to receive external connections on ports like 53. You don't need nginx ingress to work with BIND: just change the bind Service type from ClusterIP to LoadBalancer and make sure you get an external IP on that Service. Your Helm chart's documentation may help you switch to LoadBalancer.
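The suggested change boils down to a Service like this (a sketch, assuming MetalLB is installed with an address pool covering a reachable IP; the selector labels are taken from the question's `kubectl get svc` output):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: bind
  namespace: default
spec:
  type: LoadBalancer        # was ClusterIP
  selector:
    app: bind
    release: bind
  ports:
    - name: dns-tcp
      port: 53
      protocol: TCP
    - name: dns-udp
      port: 53
      protocol: UDP
```

Once MetalLB assigns an address, `kubectl get svc bind` shows it in the EXTERNAL-IP column and port 53 is reachable directly, with no ingress controller in the path.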
