I have been following this link to install the latest Kubernetes version on the master node, on a VM set up with VirtualBox. At the final installation step, I get the following error:
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:108
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
cmd/kubeadm/app/cmd/init.go:111
github.com/spf13/cobra.(*Command).execute
vendor/github.com/spf13/cobra/command.go:916
github.com/spf13/cobra.(*Command).ExecuteC
vendor/github.com/spf13/cobra/command.go:1040
github.com/spf13/cobra.(*Command).Execute
vendor/github.com/spf13/cobra/command.go:968
k8s.io/kubernetes/cmd/kubeadm/app.Run
cmd/kubeadm/app/kubeadm.go:50
main.main
cmd/kubeadm/kubeadm.go:25
runtime.main
/usr/local/go/src/runtime/proc.go:250
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1598
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
cmd/kubeadm/app/cmd/phases/workflow/runner.go:260
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
cmd/kubeadm/app/cmd/init.go:111
github.com/spf13/cobra.(*Command).execute
vendor/github.com/spf13/cobra/command.go:916
github.com/spf13/cobra.(*Command).ExecuteC
vendor/github.com/spf13/cobra/command.go:1040
github.com/spf13/cobra.(*Command).Execute
vendor/github.com/spf13/cobra/command.go:968
k8s.io/kubernetes/cmd/kubeadm/app.Run
cmd/kubeadm/app/kubeadm.go:50
main.main
cmd/kubeadm/kubeadm.go:25
runtime.main
/usr/local/go/src/runtime/proc.go:250
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1598
I checked whether the node was added to the /etc/hosts file, and it is:
[root@k8s-master ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.56.101 k8s-master
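With the hosts entry in place, one quick sanity check is whether anything is actually answering on the API server endpoint the kubelet is trying to reach (the address and port are taken from the kubelet logs below). This is only a diagnostic sketch:

```shell
# Confirm the hostname resolves to the address in /etc/hosts
getent hosts k8s-master

# Probe the API server endpoint; an immediate "connection refused" means the
# apiserver container never started, while a long hang followed by a timeout
# suggests something (e.g. a proxy or firewall) is swallowing the traffic
curl -k --max-time 5 https://192.168.56.101:6443/healthz || echo "not reachable"
```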
I checked the status of the kubelet, and it continuously shows this error:
Jun 21 06:12:45 k8s-master kubelet[15761]: E0621 06:12:45.495188 15761 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"k8s-master\" not found"
Jun 21 06:12:45 k8s-master kubelet[15761]: I0621 06:12:45.673504 15761 kubelet_node_status.go:70] "Attempting to register node" node="k8s-master"
Jun 21 06:12:48 k8s-master kubelet[15761]: W0621 06:12:48.688019 15761 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.56.101:6443/api/v1/services?limit=500&resourceVersion=0": Gateway Timeout
Jun 21 06:12:48 k8s-master kubelet[15761]: I0621 06:12:48.688961 15761 trace.go:219] Trace[1395883550]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:150 (21-Jun-2023 06:12:30.864) (total time: 17823ms):
Jun 21 06:12:48 k8s-master kubelet[15761]: Trace[1395883550]: ---"Objects listed" error:Get "https://192.168.56.101:6443/api/v1/services?limit=500&resourceVersion=0": Gateway Timeout 17823ms (06:12:48.688)
Jun 21 06:12:48 k8s-master kubelet[15761]: Trace[1395883550]: [17.823852954s] [17.823852954s] END
Jun 21 06:12:48 k8s-master kubelet[15761]: E0621 06:12:48.689001 15761 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.56.101:6443/api/v1/services?limit=500&resourceVersion=0": Gateway Timeout
Jun 21 06:12:48 k8s-master kubelet[15761]: E0621 06:12:48.691662 15761 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://192.168.56.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": Gateway Timeout
Jun 21 06:12:48 k8s-master kubelet[15761]: W0621 06:12:48.691990 15761 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://192.168.56.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": Gateway Timeout
Jun 21 06:12:48 k8s-master kubelet[15761]: I0621 06:12:48.692044 15761 trace.go:219] Trace[1047982787]: "Reflector ListAndWatch" name:vendor/k8s.io/client-go/informers/factory.go:150 (21-Jun-2023 06:12:37.717) (total time: 10974ms):
Jun 21 06:12:48 k8s-master kubelet[15761]: Trace[1047982787]: ---"Objects listed" error:Get "https://192.168.56.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": Gateway Timeout 10974ms (06:12:48.691)
Jun 21 06:12:48 k8s-master kubelet[15761]: Trace[1047982787]: [10.974206467s] [10.974206467s] END
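A repeated "Gateway Timeout" on requests to the node's own IP (192.168.56.101:6443) is a classic sign of an HTTP proxy sitting between the kubelet and the API server. A hedged check, assuming a systemd host; the NO_PROXY values shown are illustrative, not prescriptive:

```shell
# Proxy variables in the environment would route API calls through a proxy,
# which can answer with 504 Gateway Timeout instead of reaching the apiserver
env | grep -i proxy || true

# Proxy settings injected into containerd via systemd drop-in files
systemctl show containerd --property=Environment

# If a proxy is genuinely required, the node IP and cluster CIDRs must be
# excluded from it, e.g.:
#   NO_PROXY=localhost,127.0.0.1,192.168.56.101,10.96.0.0/12
```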
I checked the cgroup driver in the kubelet config, and it appears to be configured correctly:
[root@k8s-master ~]# cat /var/lib/kubelet/config.yaml | grep cgroup
cgroupDriver: systemd
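Since the kubelet is set to the systemd cgroup driver, it is also worth confirming that containerd uses the same driver; a mismatch produces very similar control-plane startup failures. A diagnostic sketch, assuming containerd's default config path:

```shell
# kubelet is configured with cgroupDriver: systemd, so containerd must match;
# check the SystemdCgroup setting in the CRI runtime options
grep -n "SystemdCgroup" /etc/containerd/config.toml || true

# If it is false or missing, flip it and restart containerd:
#   sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
#   sudo systemctl restart containerd
```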
How can I resolve this issue?
1 Answer
There is a small mistake in one of the steps for installing the containerd runtime, so the container runtime was never actually installed, which is what causes this error. Below is the corrected step; after running this command, continuing with the remaining steps of your guide should resolve the issue.
Install containerd:
sudo yum update -y && sudo yum install -y containerd.io
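After installing the package, a few follow-up steps are usually needed before retrying, because the containerd.io RPM ships a config.toml that disables the CRI plugin, which also prevents kubeadm from starting the control plane. A sketch under those assumptions (the advertise address is the master's IP from the question):

```shell
# Regenerate the default containerd config so the CRI plugin is enabled,
# then start the service
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null
sudo systemctl enable --now containerd
sudo systemctl restart containerd

# Clean up the failed init attempt, then retry
sudo kubeadm reset -f
sudo kubeadm init --apiserver-advertise-address=192.168.56.101
```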