I am running a Linux dev machine and setting up a local Kafka on Kubernetes with Kind (moving from docker-compose for learning and practice purposes). Everything works, but when I try to map volumes from Kafka and Zookeeper to the host, only the Kafka volume persists. For Zookeeper I configure and map the data and log paths to a volume, but the internal directories are not exposed on the host (which does happen with the Kafka mapping): the host only shows the data and log folders with no content inside them, so restarting Zookeeper resets its state.
Is there a limitation or a different approach needed when using Kind and mapping multiple directories from different pods? What am I missing? Why are only the Kafka volumes successfully persisted on the host?
The full setup, with a README on how to run it, is on GitHub under the pv-pvc-setup folder.
The relevant Zookeeper configuration follows. Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: zookeeper
  name: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      service: zookeeper
  strategy: {}
  template:
    metadata:
      labels:
        network/kafka-network: "true"
        service: zookeeper
    spec:
      containers:
        - env:
            - name: TZ
            - name: ZOOKEEPER_CLIENT_PORT
              value: "2181"
            - name: ZOOKEEPER_DATA_DIR
              value: "/var/lib/zookeeper/data"
            - name: ZOOKEEPER_LOG_DIR
              value: "/var/lib/zookeeper/log"
            - name: ZOOKEEPER_SERVER_ID
              value: "1"
          image: confluentinc/cp-zookeeper:7.0.1
          name: zookeeper
          ports:
            - containerPort: 2181
          resources: {}
          volumeMounts:
            - mountPath: /var/lib/zookeeper
              name: zookeeper-data
      hostname: zookeeper
      restartPolicy: Always
      volumes:
        - name: zookeeper-data
          persistentVolumeClaim:
            claimName: zookeeper-pvc
Persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: zookeeper-local-storage
  resources:
    requests:
      storage: 5Gi
Persistent volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-pv
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: zookeeper-local-storage
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /var/lib/zookeeper
kind-config:
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
  - role: control-plane
  - role: worker
    extraPortMappings:
      - containerPort: 30092 # internal kafka nodeport
        hostPort: 9092 # port exposed on "host" machine for kafka
      - containerPort: 30081 # internal schema-registry nodeport
        hostPort: 8081 # port exposed on "host" machine for schema-registry
    extraMounts:
      - hostPath: ./tmp/kafka-data
        containerPath: /var/lib/kafka/data
        readOnly: false
        selinuxRelabel: false
        propagation: Bidirectional
      - hostPath: ./tmp/zookeeper-data
        containerPath: /var/lib/zookeeper
        readOnly: false
        selinuxRelabel: false
        propagation: Bidirectional
As mentioned, the setup works; I am now just trying to make sure the relevant Kafka and Zookeeper volumes are mapped to persistent external storage (in this case, a local disk).
1 Answer
I finally figured it out. There were two main problems with my initial setup, both now fixed:

1. The folders used to persist data on the local host must be created beforehand, so that they have the same uid:gid as the user who created the initial Kind cluster. Without this, the folders do not persist data correctly.
2. Create a specific persistent volume and persistent volume claim for each folder persisted by Zookeeper (data and log), and configure them in the kind-config.
If you want to play with it, the full setup using persistent volumes and persistent volume claims, along with further instructions, is available in this repository: https://github.com/mmaia/kafka-local-kubernetes
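For illustration, the per-folder split from the second point can be sketched as below. The resource and storage-class names here are assumptions derived from the original single-volume manifests; the repository linked above contains the exact files. Giving each PV/PVC pair its own storageClassName ensures each claim binds to the intended volume:

```yaml
# Illustrative sketch: one PV/PVC pair per Zookeeper directory
# instead of a single PV covering all of /var/lib/zookeeper.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-data-pv
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: zookeeper-data-local-storage
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /var/lib/zookeeper/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-data-pvc
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: zookeeper-data-local-storage
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-log-pv
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: zookeeper-log-local-storage
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /var/lib/zookeeper/log
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-log-pvc
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: zookeeper-log-local-storage
  resources:
    requests:
      storage: 5Gi
```

The Deployment then mounts zookeeper-data-pvc at /var/lib/zookeeper/data and zookeeper-log-pvc at /var/lib/zookeeper/log as two separate volumeMounts, and the kind-config gets one extraMounts entry per host folder.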