apache-kafka: Strimzi Kafka on a local bare-metal Kubernetes cluster

v1uwarro · asked on 2022-11-01 · in Apache

I have a Kubernetes cluster running on several on-premises (bare-metal/physical) machines. I want to deploy Kafka on the cluster, but I can't figure out how to use Strimzi with my setup.
I tried to follow the tutorial on the quickstart page: https://strimzi.io/docs/quickstart/master/
My Zookeeper pods are stuck waiting at step 2.4, "Creating a cluster":

Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  pod has unbound immediate PersistentVolumeClaims
  Warning  FailedScheduling  <unknown>  default-scheduler  pod has unbound immediate PersistentVolumeClaims

I usually use hostPath for my volumes, so I'm not sure what's going on here...

EDIT: I tried creating a StorageClass with Arghya Sadhu's commands, but the problem persists.

The description of my PVC:

kubectl describe -n my-kafka-project persistentvolumeclaim/data-my-cluster-zookeeper-0
Name:          data-my-cluster-zookeeper-0
Namespace:     my-kafka-project
StorageClass:  local-storage
Status:        Pending
Volume:        
Labels:        app.kubernetes.io/instance=my-cluster
               app.kubernetes.io/managed-by=strimzi-cluster-operator
               app.kubernetes.io/name=strimzi
               strimzi.io/cluster=my-cluster
               strimzi.io/kind=Kafka
               strimzi.io/name=my-cluster-zookeeper
Annotations:   strimzi.io/delete-claim: false
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Mounted By:    my-cluster-zookeeper-0
Events:
  Type    Reason                Age                 From                         Message
  ----    ------                ----                ----                         -------
  Normal  WaitForFirstConsumer  72s (x66 over 16m)  persistentvolume-controller  waiting for first consumer to be created before binding
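
A quick way to check whether any PersistentVolume or StorageClass actually exists that could bind this claim is shown below; these are generic diagnostic commands, not something from the original post:

# Check which storage classes exist and whether any PV is Available for the claim
kubectl get storageclass
kubectl get pv
kubectl get pvc -n my-kafka-project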

And my pod:

kubectl describe -n my-kafka-project pod/my-cluster-zookeeper-0
Name:           my-cluster-zookeeper-0
Namespace:      my-kafka-project
Priority:       0
Node:           <none>
Labels:         app.kubernetes.io/instance=my-cluster
                app.kubernetes.io/managed-by=strimzi-cluster-operator
                app.kubernetes.io/name=strimzi
                controller-revision-hash=my-cluster-zookeeper-7f698cf9b5
                statefulset.kubernetes.io/pod-name=my-cluster-zookeeper-0
                strimzi.io/cluster=my-cluster
                strimzi.io/kind=Kafka
                strimzi.io/name=my-cluster-zookeeper
Annotations:    strimzi.io/cluster-ca-cert-generation: 0
                strimzi.io/generation: 0
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  StatefulSet/my-cluster-zookeeper
Containers:
  zookeeper:
    Image:      strimzi/kafka:0.15.0-kafka-2.3.1
    Port:       <none>
    Host Port:  <none>
    Command:
      /opt/kafka/zookeeper_run.sh
    Liveness:   exec [/opt/kafka/zookeeper_healthcheck.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
    Readiness:  exec [/opt/kafka/zookeeper_healthcheck.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      ZOOKEEPER_NODE_COUNT:          1
      ZOOKEEPER_METRICS_ENABLED:     false
      STRIMZI_KAFKA_GC_LOG_ENABLED:  false
      KAFKA_HEAP_OPTS:               -Xms128M
      ZOOKEEPER_CONFIGURATION:       autopurge.purgeInterval=1
                                     tickTime=2000
                                     initLimit=5
                                     syncLimit=2

    Mounts:
      /opt/kafka/custom-config/ from zookeeper-metrics-and-logging (rw)
      /var/lib/zookeeper from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from my-cluster-zookeeper-token-hgk2b (ro)
  tls-sidecar:
    Image:       strimzi/kafka:0.15.0-kafka-2.3.1
    Ports:       2888/TCP, 3888/TCP, 2181/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP
    Command:
      /opt/stunnel/zookeeper_stunnel_run.sh
    Liveness:   exec [/opt/stunnel/stunnel_healthcheck.sh 2181] delay=15s timeout=5s period=10s #success=1 #failure=3
    Readiness:  exec [/opt/stunnel/stunnel_healthcheck.sh 2181] delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      ZOOKEEPER_NODE_COUNT:   1
      TLS_SIDECAR_LOG_LEVEL:  notice
    Mounts:
      /etc/tls-sidecar/cluster-ca-certs/ from cluster-ca-certs (rw)
      /etc/tls-sidecar/zookeeper-nodes/ from zookeeper-nodes (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from my-cluster-zookeeper-token-hgk2b (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-my-cluster-zookeeper-0
    ReadOnly:   false
  zookeeper-metrics-and-logging:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      my-cluster-zookeeper-config
    Optional:  false
  zookeeper-nodes:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-zookeeper-nodes
    Optional:    false
  cluster-ca-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-cluster-ca-cert
    Optional:    false
  my-cluster-zookeeper-token-hgk2b:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-cluster-zookeeper-token-hgk2b
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.

62o28rlo 1#

You need a PersistentVolume that satisfies the constraints of the PersistentVolumeClaim.
Use local storage, with a local storage class:

$ cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF

You need a default StorageClass configured in the cluster so that the PersistentVolumeClaim can get its storage from there:

$ kubectl patch storageclass local-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
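
Note that kubernetes.io/no-provisioner does not create volumes dynamically, so a matching PersistentVolume still has to be created manually. A minimal sketch of such a local PV (the node name node1 and the path /mnt/zookeeper are placeholders, not from the original answer):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-local-pv
spec:
  capacity:
    storage: 100Gi               # must cover the size requested by the Strimzi PVC
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/zookeeper         # directory must already exist on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1          # placeholder node name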

9jyewag0 2#

I had the same issue running on bare metal. I tried the storage class approach that @arghya-sadhu mentioned, but it still didn't work. What I found is that the storage class Strimzi needs is a specific local storage type, as described here. Also, for each replica you will need a different storage class and a persistent volume with its own directory.
For example, the snippets below create 3 replicas each for Zookeeper and Kafka.
You need to replace "node2" with the name of the node you want the data assigned to:

kubectl get nodes

Then you need to create the directories for each storage class, since they cannot share the same directory or you will get an error:

ssh root@node2
mkdir /mnt/pv0 /mnt/pv1 /mnt/pv2

Then you can run this snippet to create the storage classes and persistent volumes ("storage-class-and-pv.yaml"):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: class-0
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner   # local volumes have no dynamic provisioner; the PVs below are created statically
volumeBindingMode: WaitForFirstConsumer
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: class-1
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: class-2
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-0
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: class-0
  local:
    path: /mnt/pv0
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-1
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: class-0
  local:
    path: /mnt/pv0
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-2
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: class-1
  local:
    path: /mnt/pv1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-3
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: class-1
  local:
    path: /mnt/pv1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-4
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: class-2
  local:
    path: /mnt/pv2
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-5
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: class-2
  local:
    path: /mnt/pv2
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
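
A short usage sketch (not part of the original answer): apply the file and check that the volumes come up as Available before deploying the cluster.

# Assuming the manifests above were saved as storage-class-and-pv.yaml
kubectl apply -f storage-class-and-pv.yaml
kubectl get storageclass
kubectl get pv    # the six PVs should be listed with status Available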

Once you have created those, you can deploy your Kafka and Zookeeper cluster ("example-kafka-cluster.yaml"):

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-sample-cluster
spec:
  kafka:
    version: 3.2.0
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      default.replication.factor: 1
      min.insync.replicas: 1
      inter.broker.protocol.version: "3.2"
    storage:
      type: jbod
      volumes:
      - id: 0
        type: persistent-claim
        size: 1Gi
        deleteClaim: true
        overrides:
        - broker: 0
          class: class-0
        - broker: 1
          class: class-1
        - broker: 2
          class: class-2
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 1Gi
      deleteClaim: true
      overrides:
        - broker: 0
          class: class-0
        - broker: 1
          class: class-1
        - broker: 2
          class: class-2
  entityOperator:
    topicOperator: {}
    userOperator: {}
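
As a usage sketch (the namespace kafka is an assumption; use whichever namespace the Strimzi cluster operator watches):

kubectl apply -f example-kafka-cluster.yaml -n kafka
kubectl get pods -n kafka -w    # wait for the zookeeper and kafka pods to reach Running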

ni65a41a 3#

Yes, in my opinion Kubernetes is missing something here at the infrastructure level. You should either provision PersistentVolumes for static binding to the PVCs or, as Arghya described, provide StorageClasses for dynamic provisioning.
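
Since the question mentions hostPath volumes, a minimal sketch of static provisioning could look like the following; the path, size, and class name are placeholders, and a local PV with node affinity (as in the answer above) is usually preferable to hostPath:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-data-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage   # must match the class referenced by the PVC
  hostPath:
    path: /data/zookeeper           # placeholder path on the node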


igsr9ssn 4#

In my case I had created the Kafka resource in a different namespace, my-cluster-kafka, while the Strimzi operator was in the kafka namespace.
So I simply created it in the same namespace. For testing purposes I used ephemeral storage.
Here is the kafka.yaml:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 1
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
      - name: external
        port: 9094
        type: nodeport
        tls: false
    storage:
      type: ephemeral
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
  zookeeper:
    replicas: 1
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
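
A usage sketch, assuming the operator watches the kafka namespace as described above:

kubectl apply -f kafka.yaml -n kafka
# Wait until the operator reports the Kafka cluster as Ready
kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka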
