kubernetes — Why can't my RabbitMQ cluster on K8s (multi-node Minikube) create its mnesia directory?

Asked by w8ntj3qf on 2023-01-20 in Kubernetes

I'm trying to get a RabbitMQ cluster (currently single-node) created in a local Minikube instance. However, there appears to be a permissions issue when creating the RMQ cluster on a Minikube with two nodes.
Prerequisites:
1. Install Minikube, kubectl, and krew.
Steps to reproduce:
1. Start Minikube: minikube start --memory 8192 --cpus 4 --nodes 2

😄  minikube v1.27.1 on Debian bookworm/sid
✨  Automatically selected the docker driver
📌  Using Docker driver with root privileges
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=4, Memory=8192MB) ...
🐳  Preparing Kubernetes v1.25.2 on Docker 20.10.18 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass

👍  Starting worker node minikube-m02 in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=4, Memory=8192MB) ...
🌐  Found network options:
    ▪ NO_PROXY=192.168.49.2
🐳  Preparing Kubernetes v1.25.2 on Docker 20.10.18 ...
    ▪ env NO_PROXY=192.168.49.2
🔎  Verifying Kubernetes components...
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

2. Install the RabbitMQ plugin for kubectl: kubectl krew install rabbitmq
3. Install the RabbitMQ cluster operator into Minikube: kubectl rabbitmq install-cluster-operator
4. Create the default single-node cluster using the operator: kubectl rabbitmq create default
This results in a persistent volume (PV), persistent volume claim (PVC), StatefulSet, and service being created. After the PVC binds to the PV, the pod is created. However, the pod reports the following console error and starts crash-looping:

2023-01-19 16:31:42.993395+00:00 [warning] <0.130.0> Failed to write PID file "/var/lib/rabbitmq/mnesia/rabbit@default-server-0.default-nodes.default.pid": permission denied
2023-01-19 16:31:43.520923+00:00 [info] <0.221.0> Feature flags: list of feature flags found:
2023-01-19 16:31:43.520982+00:00 [info] <0.221.0> Feature flags:   [ ] classic_mirrored_queue_version
2023-01-19 16:31:43.521013+00:00 [info] <0.221.0> Feature flags:   [ ] implicit_default_bindings
2023-01-19 16:31:43.521060+00:00 [info] <0.221.0> Feature flags:   [ ] maintenance_mode_status
2023-01-19 16:31:43.521087+00:00 [info] <0.221.0> Feature flags:   [ ] quorum_queue
2023-01-19 16:31:43.521118+00:00 [info] <0.221.0> Feature flags:   [ ] stream_queue
2023-01-19 16:31:43.521147+00:00 [info] <0.221.0> Feature flags:   [ ] user_limits
2023-01-19 16:31:43.521186+00:00 [info] <0.221.0> Feature flags:   [ ] virtual_host_metadata
2023-01-19 16:31:43.521204+00:00 [info] <0.221.0> Feature flags: feature flag states written to disk: yes
2023-01-19 16:31:43.688848+00:00 [notice] <0.44.0> Application syslog exited with reason: stopped
2023-01-19 16:31:43.689010+00:00 [notice] <0.221.0> Logging: switching to configured handler(s); following messages may not be visible in this log output
2023-01-19 16:31:43.697297+00:00 [notice] <0.221.0> Logging: configured log handlers are now ACTIVE
2023-01-19 16:31:43.715974+00:00 [error] <0.221.0> 
2023-01-19 16:31:43.715974+00:00 [error] <0.221.0> BOOT FAILED
2023-01-19 16:31:43.715974+00:00 [error] <0.221.0> ===========
2023-01-19 16:31:43.715974+00:00 [error] <0.221.0> Error during startup: {error,
2023-01-19 16:31:43.715974+00:00 [error] <0.221.0>                           {cannot_create_mnesia_dir,
2023-01-19 16:31:43.715974+00:00 [error] <0.221.0>                               "/var/lib/rabbitmq/mnesia/rabbit@default-server-0.default-nodes.default/",
2023-01-19 16:31:43.715974+00:00 [error] <0.221.0>                               eacces}}
2023-01-19 16:31:43.715974+00:00 [error] <0.221.0> 
BOOT FAILED
===========
Error during startup: {error,
                          {cannot_create_mnesia_dir,
                              "/var/lib/rabbitmq/mnesia/rabbit@default-server-0.default-nodes.default/",
                              eacces}}
2023-01-19 16:31:44.716751+00:00 [error] <0.220.0>   crasher:
2023-01-19 16:31:44.716751+00:00 [error] <0.220.0>     initial call: application_master:init/4
2023-01-19 16:31:44.716751+00:00 [error] <0.220.0>     pid: <0.220.0>
2023-01-19 16:31:44.716751+00:00 [error] <0.220.0>     registered_name: []
2023-01-19 16:31:44.716751+00:00 [error] <0.220.0>     exception exit: {{cannot_create_mnesia_dir,
2023-01-19 16:31:44.716751+00:00 [error] <0.220.0>                          "/var/lib/rabbitmq/mnesia/rabbit@default-server-0.default-nodes.default/",
2023-01-19 16:31:44.716751+00:00 [error] <0.220.0>                          eacces},
2023-01-19 16:31:44.716751+00:00 [error] <0.220.0>                      {rabbit,start,[normal,[]]}}
2023-01-19 16:31:44.716751+00:00 [error] <0.220.0>       in function  application_master:init/4 (application_master.erl, line 142)
2023-01-19 16:31:44.716751+00:00 [error] <0.220.0>     ancestors: [<0.219.0>]
2023-01-19 16:31:44.716751+00:00 [error] <0.220.0>     message_queue_len: 1
2023-01-19 16:31:44.716751+00:00 [error] <0.220.0>     messages: [{'EXIT',<0.221.0>,normal}]
2023-01-19 16:31:44.716751+00:00 [error] <0.220.0>     links: [<0.219.0>,<0.44.0>]
2023-01-19 16:31:44.716751+00:00 [error] <0.220.0>     dictionary: []
2023-01-19 16:31:44.716751+00:00 [error] <0.220.0>     trap_exit: true
2023-01-19 16:31:44.716751+00:00 [error] <0.220.0>     status: running
2023-01-19 16:31:44.716751+00:00 [error] <0.220.0>     heap_size: 987
2023-01-19 16:31:44.716751+00:00 [error] <0.220.0>     stack_size: 29
2023-01-19 16:31:44.716751+00:00 [error] <0.220.0>     reductions: 158
2023-01-19 16:31:44.716751+00:00 [error] <0.220.0>   neighbours:
2023-01-19 16:31:44.716751+00:00 [error] <0.220.0> 
2023-01-19 16:31:44.720886+00:00 [notice] <0.44.0> Application rabbit exited with reason: {{cannot_create_mnesia_dir,"/var/lib/rabbitmq/mnesia/rabbit@default-server-0.default-nodes.default/",eacces},{rabbit,start,[normal,[]]}}
{"Kernel pid terminated",application_controller,"{application_start_failure,rabbit,{{cannot_create_mnesia_dir,\"/var/lib/rabbitmq/mnesia/rabbit@default-server-0.default-nodes.default/\",eacces},{rabbit,start,[normal,[]]}}}"} 
Kernel pid terminated (application_controller) ({application_start_failure,rabbit,{{cannot_create_mnesia_dir,"/var/lib/rabbitmq/mnesia/rabbit@default-server-0.default-nodes.default/",eacces},{rabbit,start,[normal,[]]}}}) 
 
Crash dump is being written to: /var/log/rabbitmq/erl_crash.dump...done
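(For reference, the log above is just the pod's own output; with the resource names the operator generated for my cluster named "default", commands along these lines show the same error plus the PVC status — adjust the names if your cluster is called something else:)

kubectl logs default-server-0
kubectl describe pod default-server-0
kubectl get pvc,pv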
However, if I delete my Minikube (minikube stop && minikube delete), recreate it with a single node (minikube start --memory 8192 --cpus 4 --nodes 1), and follow the same steps to create the default RabbitMQ cluster, everything works fine. I don't understand why adding a second node to Minikube causes this problem.

I feel like I'm just missing something obvious, but I can't tell what.
Any suggestions or feedback would be greatly appreciated. Please let me know if there are any additional details I should provide. Thanks in advance!

Answer #1 — xqkwcwgp

In typical "me" fashion, I finally hit on the right phrase to search for and got a result. It turns out the storage provisioner bundled with minikube doesn't really work with 2+ nodes. Replacing it with another one (kubevirt), as explained in this GitHub issue comment, lets the pod spin up correctly.
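If you want to confirm which provisioner your default StorageClass is actually using before swapping it out, something like this will show it (the "standard" class is what stock minikube creates; from memory its provisioner column reads k8s.io/minikube-hostpath, but check your own output):

kubectl get storageclass
kubectl describe storageclass standard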
To add more context in case the link ever goes dead: I created a kubevirt-hostpath-provisioner.yaml file (contents below) and then swapped out the storage provisioner in minikube:

minikube addons disable storage-provisioner
kubectl delete storageclass standard
kubectl apply -f kubevirt-hostpath-provisioner.yaml

# kubevirt-hostpath-provisioner.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubevirt.io/hostpath-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubevirt-hostpath-provisioner
subjects:
  - kind: ServiceAccount
    name: kubevirt-hostpath-provisioner-admin
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: kubevirt-hostpath-provisioner
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubevirt-hostpath-provisioner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]

  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]

  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubevirt-hostpath-provisioner-admin
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kubevirt-hostpath-provisioner
  labels:
    k8s-app: kubevirt-hostpath-provisioner
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kubevirt-hostpath-provisioner
  template:
    metadata:
      labels:
        k8s-app: kubevirt-hostpath-provisioner
    spec:
      serviceAccountName: kubevirt-hostpath-provisioner-admin
      containers:
        - name: kubevirt-hostpath-provisioner
          image: quay.io/kubevirt/hostpath-provisioner
          imagePullPolicy: Always
          env:
            - name: USE_NAMING_PREFIX
              value: "false" # change to true, to have the name of the pvc be part of the directory
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: PV_DIR
              value: /tmp/hostpath-provisioner
          volumeMounts:
            - name: pv-volume # root dir where your bind mounts will be on the node
              mountPath: /tmp/hostpath-provisioner/
              #nodeSelector:
              #- name: xxxxxx
      volumes:
        - name: pv-volume
          hostPath:
            path: /tmp/hostpath-provisioner/
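After applying that, I recreated the RabbitMQ cluster and the pod spun up correctly. Roughly the following, assuming the same "default" cluster name from the question (the delete/create subcommands are part of the same kubectl rabbitmq plugin used earlier):

kubectl rabbitmq delete default
kubectl rabbitmq create default
kubectl get pvc        # the claim should now be Bound via the kubevirt provisioner
kubectl get pods -w    # default-server-0 should reach Running instead of CrashLoopBackOff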
