Network policy for GCP Cloud SQL MySQL proxy



I'm trying to write some network policies for my application, but as soon as I add the policy, the database connection fails.

Supposedly the MySQL proxy uses ports TCP:3307 and 443: https://cloud.google.com/sql/docs/mysql/sql-proxy#how-it-works

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: {{ template "name" . }}
spec:
  podSelector:
    matchLabels:
      app: {{ template "name" . }}
  policyTypes:
  - Egress
  egress:
  # allow DNS resolution
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
    - port: 443
      protocol: TCP
    - port: 3307
      protocol: TCP

Edit: deployment snippet:

  - name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.28.0
    command: ["/cloud_sql_proxy",
              "-instances=company-2:europe-west3:company-mysql-1=tcp:3306",
              "-verbose=false"]
    securityContext:
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false
      privileged: false
      runAsNonRoot: true
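
For context, the application container connects to the proxy over the pod's loopback interface, so that traffic never leaves the pod and is not affected by the egress policy. A minimal sketch of the app side (the container name, image, and env vars are placeholders, not my actual deployment):

  - name: app
    image: example.com/company/app:latest  # placeholder image
    env:
      # The proxy sidecar above listens on 127.0.0.1:3306 inside the pod,
      # so the app connects to localhost, not to the instance IP.
      - name: DB_HOST
        value: "127.0.0.1"
      - name: DB_PORT
        value: "3306"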

Cloud SQL docs snippet:

While the Cloud SQL Auth proxy can listen on any port, it creates outgoing or egress connections to your Cloud SQL instance only on port 3307. Because the Cloud SQL Auth proxy calls APIs through the domain name sqladmin.googleapis.com, which does not have a fixed IP address, egress TCP connections on port 443 must be allowed. If your client machine has an outbound firewall policy, make sure it allows outgoing connections to port 3307 on your Cloud SQL instance's IP.
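
Per that last sentence, the 3307 rule could in principle be scoped to the instance's IP rather than left open to all destinations; a sketch, where <INSTANCE_IP> is a placeholder for the Cloud SQL instance's IP:

  - to:
    - ipBlock:
        cidr: <INSTANCE_IP>/32  # placeholder: the Cloud SQL instance's IP
    ports:
    - port: 3307
      protocol: TCP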

Edit 2:

I'm now seeing this:

2022/07/22 11:12:33 Error checking scopes: *url.Error Get "http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/scopes": dial tcp 169.254.169.254:80: i/o timeout

Not sure what this is; I don't think just allowing port 80 would be a good idea.
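
That address is the GCE/GKE metadata server, which the proxy queries for credentials. One way to test reachability from inside the pod (a sketch; it assumes the container image ships a shell and wget, which the distroless proxy image may not):

kubectl exec -it <app-pod> -c cloudsql-proxy -- \
  wget -qO- -T 3 --header "Metadata-Flavor: Google" \
  "http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/scopes"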

Edit 3:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: {{ template "name" . }}
spec:
  podSelector:
    matchLabels:
      app: {{ template "name" . }}
  policyTypes:
  - Egress
  egress:
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
    - port: 443
      protocol: TCP
    - port: 3307
      protocol: TCP
    - port: 3306
      protocol: TCP
  - to:
    - ipBlock:
        cidr: 169.254.169.254/32

I still get the error. Am I doing something wrong?

url.Error Get "http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/scopes"

Edit 4:

kubectl describe NetworkPolicy network-p-3xl2j4

Name:         network-p-3xl2j4
Namespace:    develop
Created on:   2022-07-22 14:43:04 +0200 CEST
Labels:       app.kubernetes.io/managed-by=Helm
Annotations:  meta.helm.sh/release-name: network-p-3xl2j4
              meta.helm.sh/release-namespace: develop
Spec:
  PodSelector:     app=network-p-3xl2j4
  Not affecting ingress traffic
  Allowing egress traffic:
    To Port: 53/UDP
    To Port: 53/TCP
    To Port: 443/TCP
    To Port: 3307/TCP
    To Port: 3306/TCP
    To: <any> (traffic not restricted by destination)
    ----------
    To Port: <any> (traffic allowed to all ports)
    To:
      IPBlock:
        CIDR: 169.254.169.254/32
        Except:
  Policy Types: Egress

The root of the problem is that I'm using Workload Identity.

If you use network policy with GKE Workload Identity, you must allow egress to the following IP addresses and port numbers so that Pods can communicate with the GKE metadata server. For clusters running GKE versions 1.21.0-gke.1000 and later, allow egress to 169.254.169.252/32 on port 988. For clusters running GKE versions earlier than 1.21.0-gke.1000, allow egress to 127.0.0.1/32 on port 988. To avoid disruptions during auto-upgrades, allow egress to all of these IP addresses and ports.

So, here is my minimal solution with all the ports that are needed:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: {{ template "name" . }}
spec:
  podSelector:
    matchLabels:
      app: {{ template "name" . }}
  policyTypes:
    - Egress
  egress:
    # DNS, the Cloud SQL API on 443, and the Cloud SQL instance on 3307,
    # allowed to any destination:
    - ports:
      - port: 53
        protocol: UDP
      - port: 53
        protocol: TCP
      - port: 443
        protocol: TCP
      - port: 3307
        protocol: TCP
    # GKE Workload Identity metadata server:
    - to:
      - ipBlock:
          cidr: 169.254.169.252/32
      ports:
        - protocol: TCP
          port: 988

• 53 TCP/UDP for DNS
• 443 TCP for calling sqladmin.googleapis.com (per the Cloud SQL docs)
• 988 TCP for Workload Identity (GKE metadata server)
• 3307 TCP for Cloud SQL
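
To sanity-check the policy from the application pod (a sketch; it assumes the image provides nslookup and nc, and <app-pod> and <instance-ip> are placeholders):

# DNS should resolve (egress on 53 UDP/TCP):
kubectl exec -it <app-pod> -- nslookup sqladmin.googleapis.com
# The Cloud SQL instance should accept a TCP handshake on 3307:
kubectl exec -it <app-pod> -- nc -zv -w 3 <instance-ip> 3307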
