Kubernetes: Istio Envoy gateway connects to upstream but returns 404s

gajydyqb · posted 12 months ago in Kubernetes

Trying to wrap my head around Istio and service meshes. I had a working cluster set up with nginx ingress and cert-manager handling TLS. I switched to Istio with a Gateway/VirtualService setup, and as far as I can tell everything is connected, but when I try to visit the site I get a blank screen (a 404 response in the network tab), and when I curl it I see a 404. It's the same whether I hit the site directly or specify port 443. I'm not sure how to debug this: Istio's docs only mention 404s when multiple gateways share the same TLS certificate, but I only have one gateway right now. Also, the Gateway and VirtualService are in the same namespace, and in the VirtualService the backend route (/api) is declared before the frontend route (/).
Here is the only error response I get. This is from curl with OPTIONS; a plain curl returns nothing at all, not even a 403. In the GKE console all workloads look fine, with no errors in the logs.

curl -X OPTIONS https://app.example.net -I
HTTP/2 404 
date: Wed, 29 Nov 2023 20:18:13 GMT
server: istio-envoy
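
One thing that might help at this point (a general istioctl tip, not specific to this cluster) is the built-in config analyzer, which flags common causes of gateway 404s such as a Gateway selector that matches no pods or a referenced TLS secret that doesn't exist:

$ istioctl analyze -n demoapp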

The logs show connections to the upstream:

2023-11-19T20:48:48.798743Z info    Readiness succeeded in 1.15333632s
2023-11-19T20:48:48.799470Z info    Envoy proxy is ready
2023-11-19T21:17:44.948873Z info    xdsproxy    connected to upstream XDS server: istiod.istio-system.svc:15012
2023-11-19T21:47:40.301270Z info    xdsproxy    connected to upstream XDS server: istiod.istio-system.svc:15012
2023-11-19T22:18:07.530190Z info    xdsproxy    connected to upstream XDS server: istiod.istio-system.svc:15012
...
2023-11-20T08:48:48.028231Z info    ads XDS: Incremental Pushing ConnectedEndpoints:2 Version:
2023-11-20T08:48:48.250424Z info    cache   generated new workload certificate  latency=221.620042ms ttl=23h59m59.749615036s
2023-11-20T09:17:09.369171Z info    xdsproxy    connected to upstream XDS server: istiod.istio-system.svc:15012
2023-11-20T09:46:07.080923Z info    xdsproxy    connected to upstream XDS server: istiod.istio-system.svc:15012
...


The mesh shows the gateway, frontend, and backend sidecars all connected:

$ istioctl proxy-status
NAME                                           CLUSTER        CDS        LDS        EDS        RDS        ECDS         ISTIOD                      VERSION
backend-deploy-67486897bb-fjv5g.demoapp        Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED     NOT SENT     istiod-64c94c5d78-5879x     1.19.3
demoapp-gtw-istio-674b96dcdb-mfsfg.demoapp     Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED     NOT SENT     istiod-64c94c5d78-5879x     1.19.3
frontend-deploy-6f6b4984b5-lnq4p.demoapp       Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED     NOT SENT     istiod-64c94c5d78-5879x     1.19.3


Gateway

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: demoapp-gtw
  namespace: demoapp
  annotations:
    cert-manager.io/issuer: "letsencrypt-prod"
spec:
  selector:
    istio: ingressgateway
  servers: 
  - port: 
      name: http
      number: 80
      protocol: HTTP
    hosts: [app.example.net]
    tls:
      httpsRedirect: true
  - port:
      name: https
      number: 443
      protocol: HTTPS
    hosts: [app.example.net]
    tls:
      mode: SIMPLE
      credentialName: demoapp-tls
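
One thing worth double-checking (an assumption about this setup, not a confirmed cause): with SIMPLE TLS, the secret named in credentialName has to live in the same namespace as the gateway pods themselves, which judging by the proxy-status output above is demoapp here (it would be istio-system for the default ingressgateway):

$ kubectl get secret demoapp-tls -n demoapp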


VirtualService

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: vert-serv-from-gw
spec:
  hosts: [ app.example.net ]
  gateways: 
  - "demoapp/demoapp-gtw"
  - mesh
  http:
  - match:
    - uri:
        prefix: /api
    route:
    - destination:
        host: backend-svc
        port:
          number: 5000
    corsPolicy:
      allowOrigins:
      - exact: https://app.octodemo.net
      allowMethods:
      - PUT
      - GET
      - POST
      - PATCH
      - OPTIONS
      - DELETE
      allowHeaders:
      - DNT
      - X-CustomHeader
      - X-LANG
      - Keep-Alive
      - User-Agent
      - X-Requested-With
      - If-Modified-Since
      - Cache-Control
      - Content-Type
      - X-Api-Key
      - X-Device-Id
      - Access-Control-Allow-Origin
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: frontend-svc
        port:
          number: 3000
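
A detail worth noting (an observation from the manifest as posted, not a confirmed cause): the VirtualService carries no namespace in its metadata, so it lands in whatever namespace it happens to be applied in, and Istio expands short destination names like backend-svc against the rule's namespace. Pinning that down explicitly would look like:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: vert-serv-from-gw
  namespace: demoapp   # makes the rule's namespace, and thus short-name expansion, explicit
spec:
  # (rest of the spec unchanged from the manifest above)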


I don't know how to debug this any further without a clear error, so if anyone has any suggestions I'm all ears. Thanks.

EDIT: So I think I've dialed in a bit on what's going on. Running proxy-config against the gateway's routes shows:

$ istioctl pc routes demoapp-gtw-istio-674b96dcdb-mfsfg.demoapp 
NAME                                                                            VHOST NAME        DOMAINS     MATCH                  VIRTUAL SERVICE
http.80                                                                         blackhole:80      *           /*                     404
https.443.default.demoapp-gtw-istio-autogenerated-k8s-gateway-https.demoapp     blackhole:443     *           /*                     404
                                                                                backend           *           /stats/prometheus*     
                                                                                backend           *           /healthz/ready*
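
To dig into why only the blackhole vhosts are present, the full generated route config can be dumped as JSON (pod name taken from the proxy-status output above), which shows whether any VirtualService routes were attached to the listener at all:

$ istioctl proxy-config routes demoapp-gtw-istio-674b96dcdb-mfsfg.demoapp --name http.80 -o json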


My understanding of the blackhole and passthrough clusters, going by Istio's docs, is that the blackhole exists to block unauthorized traffic into and out of mesh services, but the default is passthrough, i.e. ALLOW_ANY:

$ kubectl get configmap istio -n istio-system -o yaml 
apiVersion: v1
data:
  mesh: |-
    defaultConfig:
      discoveryAddress: istiod.istio-system.svc:15012
      proxyMetadata: {}
      tracing:
        zipkin:
          address: zipkin.istio-system:9411
    defaultProviders:
      metrics:
      - prometheus
    enablePrometheusMerge: true
    rootNamespace: istio-system
    trustDomain: cluster.local
  meshNetworks: 'networks: {}'
kind: ConfigMap
metadata:
  creationTimestamp: "2023-10-26T17:45:35Z"
  labels:
    install.operator.istio.io/owning-resource: installed-state
    install.operator.istio.io/owning-resource-namespace: istio-system
    istio.io/rev: default
    operator.istio.io/component: Pilot
    operator.istio.io/managed: Reconcile
    operator.istio.io/version: 1.19.3
    release: istio
  name: istio
  namespace: istio-system
  resourceVersion: "69895477"
  uid: 3c542bc5-5f9f-4486-a37c-2c04fadba0ed
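
For what it's worth, the mesh config above doesn't set outboundTrafficPolicy, and my understanding is that the default is then ALLOW_ANY, so mesh-wide blackholing shouldn't be in play; a blackhole vhost on a gateway listener usually just means no VirtualService routes were bound to that server. Either way, both static clusters should be visible on the gateway proxy:

$ istioctl proxy-config clusters demoapp-gtw-istio-674b96dcdb-mfsfg.demoapp | grep -E 'BlackHole|Passthrough'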


Could it be that my versions are out of step?

$ istioctl version
client version: 1.20.0
control plane version: 1.19.3
data plane version: 1.19.3 (3 proxies)


In any case, my routes from the gateway to the services shouldn't be blackholed, since they are declared in the VirtualService, right?
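
One check that might answer that directly (pod name taken from the proxy-status output above): istioctl can describe a workload and report whether it is exposed on an ingress gateway and which VirtualService routes apply to it:

$ istioctl experimental describe pod frontend-deploy-6f6b4984b5-lnq4p -n demoapp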

wa7juj8i · answer #1

Well, I don't have a solution, but I'm fairly sure I've found the problem. The Istio routes point at services belonging to another namespace:

$ istioctl pc routes backend-deploy-7f584f9fd7-mn5z4.demoapp
NAME                                                  VHOST NAME                                                DOMAINS                                                      MATCH                  VIRTUAL SERVICE
test-frontend-svc.demotest.svc.cluster.local:3000     test-frontend-svc.demotest.svc.cluster.local:3000         *                                                            /*                     
9090                                                  kiali.istio-system.svc.cluster.local:9090                 kiali.istio-system, 10.92.12.180                             /*                     
                                                      backend                                                   *                                                            /healthz/ready*        
inbound|80||                                          inbound|http|80                                           *                                                            /*                     
inbound|80||                                          inbound|http|80                                           *                                                            /*                     
test-backend-svcs.demotest.svc.cluster.local:5000     test-backend-svcs.demotest.svc.cluster.local:5000         *                                                            /*

Based on a GitHub reply to another user's issue (from back in 2019), which said "my understanding is that this is a known limitation of the existing solution: using different names for the ports works around this", I went as far as renaming the ports so they are unique per namespace and shifting the port numbers by 1, but it still pointed at the wrong services under the old port names.
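
For reference, Istio infers the protocol from the Service port name using the <protocol>[-<suffix>] convention, so uniquely named ports along those lines might look something like this (a sketch; the selector label is assumed):

apiVersion: v1
kind: Service
metadata:
  name: backend-svc
  namespace: demoapp
spec:
  selector:
    app: backend                 # assumed pod label
  ports:
  - name: http-backend-demoapp   # "http-" prefix sets the protocol; the suffix keeps the name unique
    port: 5001
    targetPort: 5000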
Here is the updated VirtualService after those changes:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: vert-serv-from-gw
spec:
  hosts: [ app.octodemo.net ]
  gateways: 
  - "demoapp/demoapp-gtw"
  - mesh
  http:
  - match:
    - uri:
        prefix: /api
    route:
    - destination:
        host: backend-svc
        port:
          number: 5001
    corsPolicy:
      allowOrigins:
      - exact: https://app.octodemo.net
      allowMethods:
      - PUT
      - GET
      - POST
      - PATCH
      - OPTIONS
      - DELETE
      allowHeaders:
      - DNT
      - X-CustomHeader
      - X-LANG
      - Keep-Alive
      - User-Agent
      - X-Requested-With
      - If-Modified-Since
      - Cache-Control
      - Content-Type
      - X-Api-Key
      - X-Device-Id
      - Access-Control-Allow-Origin
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: frontend-svc
        port:
          number: 3001


But that didn't work: as shown above, Istio kept pointing at the services in the wrong namespace (test-backend-svcs and test-frontend-svc). Digging through their docs, they state the following about routing:

Note for Kubernetes users: When short names are used (e.g. “reviews” instead of “reviews.default.svc.cluster.local”), Istio will interpret the short name based on the namespace of the rule, not the service. A rule in the “default” namespace containing a host “reviews” will be interpreted as “reviews.default.svc.cluster.local”, irrespective of the actual namespace associated with the reviews service. To avoid potential misconfigurations, it is recommended to always use fully qualified domain names over short names.
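
Applied to this setup, that would mean writing the destinations with the fully qualified names, e.g.:

    route:
    - destination:
        host: backend-svc.demoapp.svc.cluster.local
        port:
          number: 5001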


So I tried exactly that, following this post's approach, using the long names from the service registry (backend-svc.demoapp.svc.cluster.local and frontend-svc.demoapp.svc.cluster.local), and still got the same result: it only shows the services from the other namespace, where nothing is configured.
There isn't even a Gateway or VirtualService in the other namespace; the only step I took there was enabling automatic sidecar injection. Even with the fully qualified names, which should point to the correct services (not that the short names shouldn't have already), it still points at the services in the other namespace on the wrong ports. I'm at a loss, short of dumping the cluster and starting over. If anyone has any idea how this can happen, or has run into something similar, please let me know, because as it stands this neither solves the problem nor points at anything to avoid going forward.
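
One thing that might still be worth trying (untested here, and only a sketch): a namespace-wide Sidecar resource that restricts what the demoapp proxies import, so that services from demotest cannot shadow the local ones:

apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: demoapp
spec:
  egress:
  - hosts:
    - "./*"              # only services from this namespace
    - "istio-system/*"   # plus the control plane and telemetry services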
