I have a Kubernetes cluster running on several local (bare-metal/physical) machines. I want to deploy Kafka on the cluster, but I can't figure out how to use Strimzi with my setup.
I tried to follow the tutorial on the quickstart page: https://strimzi.io/docs/quickstart/master/
My Zookeeper pod is stuck in Pending at step "2.4. Creating a cluster":
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler pod has unbound immediate PersistentVolumeClaims
Warning FailedScheduling <unknown> default-scheduler pod has unbound immediate PersistentVolumeClaims
I usually use hostPath for my volumes, so I don't know what's going on here...
EDIT: I tried creating a StorageClass with the command from Arghya Sadhu's answer, but the problem is still there.
Here is the description of my PVC:
kubectl describe -n my-kafka-project persistentvolumeclaim/data-my-cluster-zookeeper-0
Name: data-my-cluster-zookeeper-0
Namespace: my-kafka-project
StorageClass: local-storage
Status: Pending
Volume:
Labels: app.kubernetes.io/instance=my-cluster
app.kubernetes.io/managed-by=strimzi-cluster-operator
app.kubernetes.io/name=strimzi
strimzi.io/cluster=my-cluster
strimzi.io/kind=Kafka
strimzi.io/name=my-cluster-zookeeper
Annotations: strimzi.io/delete-claim: false
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: my-cluster-zookeeper-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 72s (x66 over 16m) persistentvolume-controller waiting for first consumer to be created before binding
And of my pod:
kubectl describe -n my-kafka-project pod/my-cluster-zookeeper-0
Name: my-cluster-zookeeper-0
Namespace: my-kafka-project
Priority: 0
Node: <none>
Labels: app.kubernetes.io/instance=my-cluster
app.kubernetes.io/managed-by=strimzi-cluster-operator
app.kubernetes.io/name=strimzi
controller-revision-hash=my-cluster-zookeeper-7f698cf9b5
statefulset.kubernetes.io/pod-name=my-cluster-zookeeper-0
strimzi.io/cluster=my-cluster
strimzi.io/kind=Kafka
strimzi.io/name=my-cluster-zookeeper
Annotations: strimzi.io/cluster-ca-cert-generation: 0
strimzi.io/generation: 0
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/my-cluster-zookeeper
Containers:
zookeeper:
Image: strimzi/kafka:0.15.0-kafka-2.3.1
Port: <none>
Host Port: <none>
Command:
/opt/kafka/zookeeper_run.sh
Liveness: exec [/opt/kafka/zookeeper_healthcheck.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
Readiness: exec [/opt/kafka/zookeeper_healthcheck.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
Environment:
ZOOKEEPER_NODE_COUNT: 1
ZOOKEEPER_METRICS_ENABLED: false
STRIMZI_KAFKA_GC_LOG_ENABLED: false
KAFKA_HEAP_OPTS: -Xms128M
ZOOKEEPER_CONFIGURATION: autopurge.purgeInterval=1
tickTime=2000
initLimit=5
syncLimit=2
Mounts:
/opt/kafka/custom-config/ from zookeeper-metrics-and-logging (rw)
/var/lib/zookeeper from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from my-cluster-zookeeper-token-hgk2b (ro)
tls-sidecar:
Image: strimzi/kafka:0.15.0-kafka-2.3.1
Ports: 2888/TCP, 3888/TCP, 2181/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Command:
/opt/stunnel/zookeeper_stunnel_run.sh
Liveness: exec [/opt/stunnel/stunnel_healthcheck.sh 2181] delay=15s timeout=5s period=10s #success=1 #failure=3
Readiness: exec [/opt/stunnel/stunnel_healthcheck.sh 2181] delay=15s timeout=5s period=10s #success=1 #failure=3
Environment:
ZOOKEEPER_NODE_COUNT: 1
TLS_SIDECAR_LOG_LEVEL: notice
Mounts:
/etc/tls-sidecar/cluster-ca-certs/ from cluster-ca-certs (rw)
/etc/tls-sidecar/zookeeper-nodes/ from zookeeper-nodes (rw)
/var/run/secrets/kubernetes.io/serviceaccount from my-cluster-zookeeper-token-hgk2b (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-my-cluster-zookeeper-0
ReadOnly: false
zookeeper-metrics-and-logging:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: my-cluster-zookeeper-config
Optional: false
zookeeper-nodes:
Type: Secret (a volume populated by a Secret)
SecretName: my-cluster-zookeeper-nodes
Optional: false
cluster-ca-certs:
Type: Secret (a volume populated by a Secret)
SecretName: my-cluster-cluster-ca-cert
Optional: false
my-cluster-zookeeper-token-hgk2b:
Type: Secret (a volume populated by a Secret)
SecretName: my-cluster-zookeeper-token-hgk2b
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
Warning FailedScheduling <unknown> default-scheduler 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
4 Answers
Answer 1 (62o28rlo):
You need a PersistentVolume that fulfills the constraints of the PersistentVolumeClaim.
Use local storage, with a local storage class:
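The snippet that originally followed here was not preserved; a minimal sketch of such a local StorageClass, assuming the name local-storage to match the PVC above (the default-class annotation is optional and ties in with the next point):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # optional: make this the default class
provisioner: kubernetes.io/no-provisioner   # local volumes have no dynamic provisioner
volumeBindingMode: WaitForFirstConsumer     # bind only once a consuming pod is scheduled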
You need a default StorageClass configured in the cluster so that the PersistentVolumeClaim can take its storage from there.
Answer 2 (9jyewag0):
I hit the same problem running on bare metal. I tried the storage class approach that @arghya-sadhu mentioned, but it still didn't work. I found out that the storage class Strimzi needs is a specific local storage type, as described here. In addition, for each replica you will need a separate storage class and a persistent volume pointing at a different directory.
For example, the snippet below will create 3 replicas for both Zookeeper and Kafka.
You need to replace "node2" with the name of the node you want the data assigned to.
Then you need to create the directories for each storage class on that node, because they cannot share the same directory or you will get an error.
Then you can run the snippet to create the storage classes and persistent volumes.
storage-class-and-pv.yaml:
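The original file content was lost in this copy; the sketch below shows the pattern for the first Zookeeper replica. The names zookeeper-storage-0, zookeeper-pv-0, the path /mnt/zookeeper-0 and the 10Gi size are illustrative assumptions; repeat the StorageClass/PV pair for replicas 1 and 2 and for the Kafka brokers (e.g. kafka-storage-0 with /mnt/kafka-0), and create each directory on the node beforehand.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zookeeper-storage-0        # one class per replica (assumed naming)
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-pv-0
spec:
  capacity:
    storage: 10Gi                  # must cover the size the Strimzi PVC requests
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: zookeeper-storage-0
  local:
    path: /mnt/zookeeper-0         # a separate, pre-created directory per volume
  nodeAffinity:                    # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node2            # replace with the name of your node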
Once you have created those, you can deploy your Kafka and Zookeeper cluster.
example-kafka-cluster.yaml:
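Again, the original manifest is missing here; a sketch of what it might look like, mapping each replica to its own storage class via Strimzi's per-broker storage overrides (this assumes a Strimzi version that supports overrides, and reuses the class names assumed above):

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      plain: {}
      tls: {}
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
    storage:
      type: persistent-claim
      size: 10Gi
      deleteClaim: false
      overrides:                       # one storage class per broker
        - broker: 0
          class: kafka-storage-0
        - broker: 1
          class: kafka-storage-1
        - broker: 2
          class: kafka-storage-2
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 10Gi
      deleteClaim: false
      overrides:                       # broker index = Zookeeper pod index here
        - broker: 0
          class: zookeeper-storage-0
        - broker: 1
          class: zookeeper-storage-1
        - broker: 2
          class: zookeeper-storage-2
  entityOperator:
    topicOperator: {}
    userOperator: {}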
Answer 3 (ni65a41a):
Yes, it seems to me that your cluster is missing something at the infrastructure level. You should provision PersistentVolumes for static binding to the PVCs, or, as Arghya mentioned, provide StorageClasses for dynamic provisioning.
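For the static route, a minimal sketch of a PV the Zookeeper PVC above could bind to; the name and path are hypothetical, hostPath is used only because the asker already relies on it (on a multi-node cluster the local type with node affinity, as in Answer 2, is the safer choice):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-data-pv           # hypothetical name
spec:
  capacity:
    storage: 5Gi                    # must cover what the PVC requests
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage   # must match the PVC's StorageClass
  hostPath:
    path: /mnt/data/zookeeper       # directory must already exist on the node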
Answer 4 (igsr9ssn):
In my case, I had created the Kafka cluster in another namespace, my-cluster-kafka, while the Strimzi operator sits in the namespace kafka. So I simply created the cluster in the same namespace as the operator. For testing purposes I used ephemeral storage.
Here is the kafka.yaml:
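The file itself was not preserved; a sketch of a single-node ephemeral cluster modeled on the Strimzi quickstart examples, assuming the operator watches the kafka namespace as described in this answer:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
  namespace: kafka                 # same namespace as the Strimzi operator
spec:
  kafka:
    replicas: 1
    listeners:
      plain: {}
      tls: {}
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
    storage:
      type: ephemeral              # emptyDir; data is lost on pod restart (testing only)
  zookeeper:
    replicas: 1
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}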