SSL handshake failure when connecting to Strimzi-created Kafka through Ingress

11dmarpk  asked on 2023-04-19  in Apache

I have a local k3s Kubernetes cluster created with Multipass.
I am trying to set up Kafka with an Ingress-type listener based on this tutorial, so that clients running outside Kubernetes can access it.
Here are my steps:
First, I get the cluster IP:

➜ kubectl get nodes
NAME          STATUS   ROLES                  AGE   VERSION
west-master   Ready    control-plane,master   15m   v1.26.3+k3s1

➜ kubectl get node west-master -o wide
NAME          STATUS   ROLES                  AGE   VERSION        INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
west-master   Ready    control-plane,master   16m   v1.26.3+k3s1   192.168.205.5   <none>        Ubuntu 22.04.2 LTS   5.15.0-67-generic   containerd://1.6.19-k3s1

➜ kubectl cluster-info
Kubernetes control plane is running at https://192.168.205.5:6443
CoreDNS is running at https://192.168.205.5:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://192.168.205.5:6443/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy

So the cluster IP is 192.168.205.5 (is this the right IP address to use in my-kafka-persistent.yaml in the next step?).
Then I deploy my Kafka with:

kubectl create namespace hm-kafka
kubectl apply --filename="https://strimzi.io/install/latest?namespace=hm-kafka" --namespace=hm-kafka
kubectl apply --filename=my-kafka-persistent.yaml --namespace=hm-kafka
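Before moving on, it can be useful to block until the operator has finished rolling out the cluster; the Kafka custom resource exposes a Ready condition for this (a sketch, using the namespace and cluster name above):

```shell
# Wait until the Strimzi operator reports the Kafka cluster as Ready
kubectl wait kafka/hm-kafka \
  --for=condition=Ready \
  --timeout=300s \
  --namespace=hm-kafka
```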

my-kafka-persistent.yaml (based on kafka-persistent.yaml):

---
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: hm-kafka
spec:
  kafka:
    version: 3.4.0
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
      - name: external
        port: 9094
        type: ingress
        tls: true
        configuration:
          bootstrap:
            host: kafka-bootstrap.192.168.205.5.nip.io
          brokers:
          - broker: 0
            host: kafka-broker-0.192.168.205.5.nip.io
          - broker: 1
            host: kafka-broker-1.192.168.205.5.nip.io
          - broker: 2
            host: kafka-broker-2.192.168.205.5.nip.io
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
      inter.broker.protocol.version: "3.4"
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}

After deploying, the pods, services, and Ingresses are all created (screenshots omitted).
Also, each Ingress shows the SSL passthrough annotation in its metadata:

Then I followed the tutorial and created the truststore successfully:

➜ kubectl get secret hm-kafka-cluster-ca-cert \
  --namespace=hm-kafka \
  --output=jsonpath="{.data.ca\.crt}" \
  | base64 -d \
  > ca.crt

➜ keytool -importcert \
  -trustcacerts \
  -alias root \
  -file ca.crt \
  -keystore kafka-truststore.jks \
  -storepass my_passw0rd \
  -noprompt
Certificate was added to keystore
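To double-check the import, the truststore contents can be listed; the root alias added above should show up as a trustedCertEntry:

```shell
# List the entries in the truststore; expect one trustedCertEntry under alias "root"
keytool -list \
  -keystore kafka-truststore.jks \
  -storepass my_passw0rd
```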

However, when I try to produce data, I run into this issue:

➜ kafka-console-producer \
  --broker-list kafka-bootstrap.192.168.205.5.nip.io:443 \
  --producer-property security.protocol=SSL \
  --producer-property ssl.truststore.password=my_passw0rd \
  --producer-property ssl.truststore.location=kafka-truststore.jks \
  --topic my-topic
>[2023-04-14 15:57:06,047] ERROR [Producer clientId=console-producer] Connection to node -1 (kafka-bootstrap.192.168.205.5.nip.io/192.168.205.5:443) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2023-04-14 15:57:06,047] WARN [Producer clientId=console-producer] Bootstrap broker kafka-bootstrap.192.168.205.5.nip.io:443 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2023-04-14 15:57:06,200] ERROR [Producer clientId=console-producer] Connection to node -1 (kafka-bootstrap.192.168.205.5.nip.io/192.168.205.5:443) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2023-04-14 15:57:06,201] WARN [Producer clientId=console-producer] Bootstrap broker kafka-bootstrap.192.168.205.5.nip.io:443 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2023-04-14 15:57:06,691] ERROR [Producer clientId=console-producer] Connection to node -1 (kafka-bootstrap.192.168.205.5.nip.io/192.168.205.5:443) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2023-04-14 15:57:06,691] WARN [Producer clientId=console-producer] Bootstrap broker kafka-bootstrap.192.168.205.5.nip.io:443 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
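When the handshake fails like this, it can help to take Kafka out of the picture and inspect the TLS endpoint directly. openssl s_client sends the SNI name (which is what an ingress controller uses to route passthrough traffic), so the certificate returned shows whether the connection actually reached a Kafka broker:

```shell
# Inspect which certificate the ingress endpoint presents for the bootstrap SNI name.
# If SSL passthrough is not working, this shows the ingress controller's own
# certificate (or the handshake fails) instead of the broker certificate signed
# by the Strimzi cluster CA.
openssl s_client \
  -connect kafka-bootstrap.192.168.205.5.nip.io:443 \
  -servername kafka-bootstrap.192.168.205.5.nip.io \
  </dev/null 2>/dev/null | openssl x509 -noout -subject
```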

Any guidance would be appreciated, thanks!

Update 1

Thanks @OneCricketeer for pointing out the issue!
Since I am using Multipass on macOS, I can pass INSTALL_K3S_EXEC="server --disable traefik", so the updated commands to create the k3s cluster are:

multipass launch --name=west-master --cpus=4 --memory=16g --disk=128g
multipass exec west-master -- \
  bash -c 'curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik" K3S_KUBECONFIG_MODE="644" sh -'
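After recreating the cluster, it is worth confirming that Traefik really was left out (on k3s its pods would normally run in kube-system):

```shell
# Should print nothing if Traefik was successfully disabled
# (grep exits non-zero when there is no match, which is the desired outcome here)
kubectl get pods --namespace=kube-system | grep -i traefik
```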

I actually switched to Rancher Desktop, since it also uses k3s and Traefik can easily be disabled in its UI.

Update 2

I posted how I deployed ingress-nginx, and how I resolved another issue I hit ("ingress does not contain a valid IngressClass"), at Strimzi Kafka brokers not being created because of "ingress does not contain a valid IngressClass".

k4ymrczo


k3s uses Traefik, not nginx, so those annotations don't do anything... The referenced blog assumes you are using nginx.
Recreate the k3s cluster, but provide the --no-deploy-traefik option, and install the nginx ingress controller.
Otherwise, you will need to consult the Traefik ingress documentation to find the matching annotations it uses for SSL passthrough.
Keep in mind that Kafka is not an HTTP/S service, so you shouldn't normally be talking to it on ports 80/443.
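As a concrete sketch of this suggestion: ingress-nginx does not enable SSL passthrough by default; the controller has to be started with the --enable-ssl-passthrough flag, otherwise the ssl-passthrough annotation that Strimzi sets on its Ingresses is silently ignored. With the Helm chart, that might look like the following (chart values assumed, not verified against this exact setup):

```shell
# Install ingress-nginx with SSL passthrough support turned on;
# without --enable-ssl-passthrough the annotation on the Kafka Ingresses is ignored.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.extraArgs.enable-ssl-passthrough=true
```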
