Linkerd and k8s not working



I'm trying to learn linkerd in Kubernetes. I'm using the linkerd daemonset example from their website in my local minikube.

It's all deployed in the production namespace. When I try

http_proxy=$(kubectl --namespace=production get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}"):4140 curl -s http://apiserver/readinezs

nothing happens. What's wrong with my setup?

My linkerd YAML:

# runs linkerd in a daemonset, in linker-to-linker mode
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
data:
  config.yaml: |-
    admin:
      port: 9990
    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001
    telemetry:
    - kind: io.l5d.prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25
    usage:
      orgId: linkerd-examples-daemonset
    routers:
    - protocol: http
      label: outgoing
      dtab: |
        /srv        => /#/io.l5d.k8s/production/http;
        /host       => /srv;
        /svc        => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: production
          port: incoming
          service: l5d
      servers:
      - port: 4140
        ip: 0.0.0.0
      responseClassifier:
        kind: io.l5d.retryableRead5XX
    - protocol: http
      label: incoming
      dtab: |
        /srv        => /#/io.l5d.k8s/production/http;
        /host       => /srv;
        /svc        => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
      servers:
      - port: 4141
        ip: 0.0.0.0
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: buoyantio/linkerd:0.9.1
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: outgoing
          containerPort: 4140
          hostPort: 4140
        - name: incoming
          containerPort: 4141
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true
      - name: kubectl
        image: buoyantio/kubectl:v1.4.0
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
spec:
  selector:
    app: l5d
  type: LoadBalancer
  ports:
  - name: outgoing
    port: 4140
  - name: incoming
    port: 4141
  - name: admin
    port: 9990

Here's my deployment for the apiserver:

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: apiserver-production
spec:
  replicas: 1
  template:
    metadata:
      name: apiserver
      labels:
        app: apiserver
        role: gateway
        env: production
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: apiserver
        image: eu.gcr.io/xxxxx/apiservice:latest
        env:
        - name: MONGO_HOST
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: host
        - name: MONGO_PORT
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: port
        - name: MONGO_USR
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: username
        - name: MONGO_PWD
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: password
        - name: MONGO_DB
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: db
        - name: MONGO_PREFIX
          valueFrom:
            secretKeyRef:
              name: mongosecret
              key: prefix
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: http_proxy
          value: $(NODE_NAME):4140
        resources:
          limits:
            memory: "300Mi"
            cpu: "50m"
        imagePullPolicy: Always
        command:
        - "pm2-docker"
        - "processes.json"
        ports:
        - name: apiserver
          containerPort: 8080
      - name: kubectl
        image: buoyantio/kubectl:1.2.3
        args:
        - proxy
        - "-p"
        - "8001"

And here's the service:

kind: Service
apiVersion: v1
metadata:
  name: apiserver
spec:
  selector:
    app: apiserver
    role: gateway
  type: LoadBalancer
  ports:
  - name: http
    port: 8080
  - name: external
    port: 80
    targetPort: 8080

In my node application I'm using global-tunnel:

const globalTunnel = require('global-tunnel');

const server = app.listen(port);
server.on('listening', function(){
  // make sure all traffic goes over linkerd
  globalTunnel.initialize({
    host: 'localhost',
    port: 4140
  });
  console.log(`Feathers application started on ${app.get('host')}:${app.get('port')}`);
});

Where are you running your curl command from?

http_proxy=$(kubectl --namespace=production get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}"):4140 curl -s http://apiserver/readinezs

The linkerd service in the example doesn't expose a public IP address. You can confirm this with kubectl get svc/l5d; I expect you won't see an external IP.
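On minikube in particular, a LoadBalancer service never gets an ingress IP provisioned, so the jsonpath in your curl command expands to an empty string. A quick sketch of the check:

kubectl --namespace=production get svc l5d
# On minikube the EXTERNAL-IP column stays <pending>, meaning
# .status.loadBalancer.ingress is empty and http_proxy ends up
# being set to just ":4140".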

I think you'll need to modify the service definition, or create an additional explicit external service that exposes a ClusterIP, in order to receive ingress traffic.
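On minikube, one workaround (a sketch, not part of the original example) is to skip the load balancer entirely and reach linkerd through the nodePort that Kubernetes assigns to each port of a LoadBalancer service:

# Look up the nodePort assigned to the "outgoing" router port
OUTGOING_PORT=$(kubectl --namespace=production get svc l5d \
  -o 'jsonpath={.spec.ports[?(@.name=="outgoing")].nodePort}')

# Proxy through the minikube VM's IP instead of an ingress IP
http_proxy=$(minikube ip):$OUTGOING_PORT curl -s http://apiserver/readinezs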

Deploying two identical node apps and having them send requests to each other works. Strangely, those requests don't show up in the linkerd dashboard.
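Note that the dashboard only records requests that actually pass through one of linkerd's routers; app-to-app calls that bypass the http_proxy are invisible to it. One way to watch a linkerd pod's admin UI directly (a sketch, using the admin port 9990 from the ConfigMap above):

# Forward the admin port of one l5d pod to localhost
kubectl --namespace=production port-forward \
  $(kubectl --namespace=production get po -l app=l5d \
    -o 'jsonpath={.items[0].metadata.name}') 9990:9990

# then browse to http://localhost:9990 to see the dashboard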
