Istio fails when a host is specified and the endpoint issues an HTTP redirect



I'm using Istio with hostnames to load-balance and steer traffic. I have the following VirtualService in place:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: app-lab-app
  namespace: my-namespace
spec:
  gateways:
  - istio-system/ingressgateway
  hosts:
  - hostname1.lab
  http:
  - match:
    route:
    - destination:
        host: search-head-service
        port:
          number: 8000
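
For context, the Gateway that istio-system/ingressgateway refers to looks roughly like the following (a sketch, not the exact manifest from my cluster; the point is that its hosts list also has to cover hostname1.lab for the VirtualService above to be matched):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingressgateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - hostname1.lab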

When I try to reach this service via cURL, I get the following error (32271 is the host port mapped to port 80 on the ingressgateway):

curl -Hhost:hostname1.lab http://10.20.1.108:32271/ -L
curl: (7) Failed to connect to hostname1.lab port 80: Connection refused

The problem is that the endpoint performs a redirect. I can reach the first site, but as soon as it redirects, the request fails.
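
To see where the redirect points without following it, the Location header can be inspected with a HEAD request against the same node port and Host header:

curl -sI -H "Host: hostname1.lab" http://10.20.1.108:32271/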

I can get this to work by removing the hostname from the spec and changing it to "*", but that doesn't help me achieve host-based load balancing.
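
In other words, the only change that makes the redirect survive is widening the hosts field:

hosts:
- "*"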

EDIT: ingress gateway configuration (kubectl describe pod/istio-ingressgateway-xxxx):

Name:         istio-ingressgateway-657df8bc75-cmghw
Namespace:    istio-system
Priority:     0
Node:         ip-10-20-1-108.us-west-2.compute.internal/10.20.1.108
Start Time:   Tue, 21 Apr 2020 13:22:48 -0500
Labels:       app=istio-ingressgateway
              chart=gateways
              heritage=Tiller
              istio=ingressgateway
              pod-template-hash=657df8bc75
              release=istio
              service.istio.io/canonical-name=istio-ingressgateway
              service.istio.io/canonical-revision=1.5
Annotations:  cni.projectcalico.org/podIP: 10.192.1.36/32
              kubernetes.io/psp: 00-privileged
              sidecar.istio.io/inject: false
Status:       Running
IP:           10.192.1.36
IPs:
  IP:           10.192.1.36
Controlled By:  ReplicaSet/istio-ingressgateway-657df8bc75
Containers:
  istio-proxy:
    Container ID:  docker://bfa29df838cd1e42a24674838bbf8454c8d56ec898b1833563f1b89a19a38030
    Image:         docker.io/istio/proxyv2:1.5.0
    Image ID:      docker-pullable://docker.io/istio/proxyv2@sha256:89b5fe2df96920189a193dd5f7dbd776e00024e4c1fd1b59bb53867278e9645a
    Ports:         15020/TCP, 80/TCP, 443/TCP, 15029/TCP, 15030/TCP, 15031/TCP, 15032/TCP, 31400/TCP, 15443/TCP, 15011/TCP, 8060/TCP, 853/TCP, 15090/TCP
    Host Ports:    0/TCP, 80/TCP, 443/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
    Args:
      proxy
      router
      --domain
      $(POD_NAMESPACE).svc.cluster.local
      --proxyLogLevel=warning
      --proxyComponentLogLevel=misc:error
      --log_output_level=default:info
      --drainDuration
      45s
      --parentShutdownDuration
      1m0s
      --connectTimeout
      10s
      --serviceCluster
      istio-ingressgateway
      --zipkinAddress
      zipkin.istio-system:9411
      --proxyAdminPort
      15000
      --statusPort
      15020
      --controlPlaneAuthPolicy
      NONE
      --discoveryAddress
      istio-pilot.istio-system.svc:15012
      --trust-domain=cluster.local
    State:          Running
      Started:      Tue, 21 Apr 2020 13:22:50 -0500
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:      10m
      memory:   40Mi
    Readiness:  http-get http://:15020/healthz/ready delay=1s timeout=1s period=2s #success=1 #failure=30
    Environment:
      JWT_POLICY:                   first-party-jwt
      PILOT_CERT_PROVIDER:          istiod
      ISTIO_META_USER_SDS:          true
      CA_ADDR:                      istio-pilot.istio-system.svc:15012
      NODE_NAME:                     (v1:spec.nodeName)
      POD_NAME:                     istio-ingressgateway-657df8bc75-cmghw (v1:metadata.name)
      POD_NAMESPACE:                istio-system (v1:metadata.namespace)
      INSTANCE_IP:                   (v1:status.podIP)
      HOST_IP:                       (v1:status.hostIP)
      SERVICE_ACCOUNT:               (v1:spec.serviceAccountName)
      ISTIO_META_WORKLOAD_NAME:     istio-ingressgateway
      ISTIO_META_OWNER:             kubernetes://apis/apps/v1/namespaces/istio-system/deployments/istio-ingressgateway
      ISTIO_META_MESH_ID:           cluster.local
      ISTIO_AUTO_MTLS_ENABLED:      true
      ISTIO_META_POD_NAME:          istio-ingressgateway-657df8bc75-cmghw (v1:metadata.name)
      ISTIO_META_CONFIG_NAMESPACE:  istio-system (v1:metadata.namespace)
      ISTIO_META_ROUTER_MODE:       sni-dnat
      ISTIO_META_CLUSTER_ID:        Kubernetes
    Mounts:
      /etc/istio/ingressgateway-ca-certs from ingressgateway-ca-certs (ro)
      /etc/istio/ingressgateway-certs from ingressgateway-certs (ro)
      /etc/istio/pod from podinfo (rw)
      /var/run/ingress_gateway from ingressgatewaysdsudspath (rw)
      /var/run/secrets/istio from istiod-ca-cert (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from istio-ingressgateway-service-account-token-7ssdg (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  istiod-ca-cert:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio-ca-root-cert
    Optional:  false
  podinfo:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.labels -> labels
      metadata.annotations -> annotations
  ingressgatewaysdsudspath:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  ingressgateway-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  istio-ingressgateway-certs
    Optional:    true
  ingressgateway-ca-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  istio-ingressgateway-ca-certs
    Optional:    true
  istio-ingressgateway-service-account-token-7ssdg:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  istio-ingressgateway-service-account-token-7ssdg
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  istio-ingressgateway=true
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

While I'd still like to understand what was going on originally, an Istio guru had me apply the patch below. These steps create Istio gateways (rather than ingress gateways) on every node carrying the appropriate label:

Step 1 - label the relevant nodes:

kubectl label nodes <hostname> istio-ingressgateway=true
kubectl label nodes <hostname> istio-ingressgateway=true
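
To confirm which nodes carry the label:

kubectl get nodes -l istio-ingressgateway=true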

Step 2 - save the patch to a file such as patch.json:

"spec": {
"replicas": 2,
"template": {
"spec": {
"nodeSelector": {"istio-ingressgateway" : "true"},
"containers": [
{"name" : "istio-proxy", "ports": [{"containerPort" : 80, "hostPort" : 80, "protocol": "TCP"}, {"containerPort":443, "hostPort": 443, "protocol" : "TCP"}]}
]
}
}
}
}
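
Note that because the patch uses hostPort, ports 80 and 443 must be free on the labeled nodes, and the gateway pods are only scheduled on nodes where the nodeSelector matches.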

Step 3 - apply the patch:

kubectl -n istio-system patch deployment/istio-ingressgateway --patch "$(cat patch.json)"
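
Afterwards, the gateway pods can be checked to make sure they landed on the labeled nodes:

kubectl -n istio-system get pods -l istio=ingressgateway -o wide

As far as I can tell, this helps with the redirect because the gateway now listens directly on ports 80/443 of those nodes, so the redirected request to hostname1.lab on port 80 has something to connect to (assuming hostname1.lab resolves to one of the labeled nodes).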

LATEST UPDATE