Kubernetes not initializing the control plane on Fedora Server 33

gudnpqoy · posted 2023-04-05 in Kubernetes

Edit: I have since been working on making sure that IP traffic can flow from Docker to Kubernetes (referencing Docker and Fedora 32), and the journalctl errors below reflect the changes I made after that.
I am trying to run a basic Kubernetes cluster on Fedora 33 virtual machines; I have 3 nodes in total that I plan to put into the cluster for some experimentation. Each is a basic Fedora Server 33 install with the zram swap removed and kubernetes and kubeadm installed. I have opened all the recommended Kubernetes ports in firewalld, and I have set up an SELinux policy to allow Kubernetes access (with SELinux set to Permissive until I get Kubernetes running; I will verify later that it still runs under Enforcing).
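For reference, the firewalld and SELinux preparation was roughly along these lines; this is a reconstruction rather than the exact commands run on the node, with the port list taken from the kubeadm install docs of this era:

# Control-plane ports recommended by the kubeadm docs (v1.18 era)
sudo firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
sudo firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd server client API
sudo firewall-cmd --permanent --add-port=10250/tcp       # kubelet API
sudo firewall-cmd --permanent --add-port=10251/tcp       # kube-scheduler
sudo firewall-cmd --permanent --add-port=10252/tcp       # kube-controller-manager
sudo firewall-cmd --reload

# SELinux permissive until the cluster is up (to be set back to Enforcing and re-tested later)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config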
I am pretty sure I am doing something silly that is clearly covered in the documentation, but I just have not found it. I will not be offended if the answer is a link to the manual, but if possible, could you point me to the right place in it? Thanks.
When I run

sudo kubeadm init --config kubeadm-config.yaml

I get

W0214 15:02:30.550625   14702 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8node1.kube.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [172.16.50.1 172.16.52.2 172.16.52.2 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8node1.kube.local localhost] and IPs [172.16.52.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8node1.kube.local localhost] and IPs [172.16.52.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0214 15:02:36.332363   14702 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0214 15:02:36.340457   14702 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0214 15:02:36.341549   14702 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

Here is the config YAML file:

[k8admin@k8node1 ~]$ cat kubeadm-config.yaml 
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: xb11me.fn9fxtdpg5gxvyso
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.52.2
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8node1.kube.local
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 172.16.52.2
  - 127.0.0.1
  extraArgs:
    audit-log-maxage: "2"
    audit-log-path: /etc/kubernetes/audit/kube-apiserver-audit.log
    audit-policy-file: /etc/kubernetes/audit-policy.yaml
    authorization-mode: Node,RBAC
    feature-gates: TTLAfterFinished=true
  extraVolumes:
  - hostPath: /etc/kubernetes/audit-policy.yaml
    mountPath: /etc/kubernetes/audit-policy.yaml
    name: audit-policy
    pathType: File
  - hostPath: /var/log/kubernetes/audit
    mountPath: /etc/kubernetes/audit
    name: audit-volume
    pathType: DirectoryOrCreate
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager:
  extraArgs:
    bind-address: 0.0.0.0
    feature-gates: TTLAfterFinished=true
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
networking:
  dnsDomain: cluster.local
  podSubnet: 172.16.52.0/27
  serviceSubnet: 172.16.50.0/27
scheduler:
  extraArgs:
    bind-address: 0.0.0.0
    feature-gates: TTLAfterFinished=true
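
One way to exercise a config file like this without modifying the node is kubeadm's dry-run mode (not something tried in the original attempt); it prints what kubeadm would do without applying the changes:

sudo kubeadm init --config kubeadm-config.yaml --dry-run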

Output of systemctl status:

[k8admin@k8node1 ~]$ sudo systemctl status kubelet
[sudo] password for k8admin: 
● kubelet.service - Kubernetes Kubelet Server
     Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─kubeadm.conf
     Active: activating (auto-restart) (Result: exit-code) since Sun 2021-02-14 15:13:56 CST; 1s ago
       Docs: https://kubernetes.io/docs/concepts/overview/components/#kubelet
             https://kubernetes.io/docs/reference/generated/kubelet/
    Process: 23862 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_EXTRA_ARGS (code=>
   Main PID: 23862 (code=exited, status=255/EXCEPTION)
        CPU: 462ms

Feb 14 15:13:56 k8node1.kube.local systemd[1]: kubelet.service: Failed with result 'exit-code'.

And the output of journalctl:

Feb 14 15:15:39 k8node1.kube.local systemd[1]: Started Kubernetes Kubelet Server.
░░ Subject: A start job for unit kubelet.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░ 
░░ A start job for unit kubelet.service has finished successfully.
░░ 
░░ The job identifier is 9970.
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. >
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config f>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. S>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config >
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.242407   25384 server.go:417] Version: v1.18.2
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.242896   25384 plugins.go:100] No cloud provider specified.
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: W0214 15:15:39.255867   25384 server.go:615] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system conta>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: W0214 15:15:39.256032   25384 server.go:622] failed to get the container runtime's cgroup: failed to get container name for docker p>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.344674   25384 server.go:646] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.344880   25384 container_manager_linux.go:266] container manager verified user specified cgroup-root exists: []
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.344911   25384 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {RuntimeCgroup>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.345044   25384 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.345053   25384 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none policy
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.345058   25384 container_manager_linux.go:306] Creating device plugin manager: true
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.345161   25384 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.345176   25384 client.go:92] Start docker client with request timeout=2m0s
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: W0214 15:15:39.354211   25384 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling ba>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.354255   25384 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.377617   25384 docker_service.go:253] Docker cri networking managed by cni
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.389851   25384 docker_service.go:258] Docker Info: &{ID:KV2D:3HQS:5ENS:ISC6:TJ36:ZZMR:NRFF:74ZF:TWF5:C77P:Y35C:J7AH C>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.389959   25384 docker_service.go:271] Setting cgroupDriver to systemd
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.411574   25384 remote_runtime.go:59] parsed scheme: ""
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.411595   25384 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.411634   25384 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.411646   25384 clientconn.go:933] ClientConn switching balancer to "pick_first"
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.411713   25384 remote_image.go:50] parsed scheme: ""
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.411723   25384 remote_image.go:50] scheme "" not registered, fallback to default scheme
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.411737   25384 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.411743   25384 clientconn.go:933] ClientConn switching balancer to "pick_first"
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.411764   25384 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: I0214 15:15:39.411783   25384 kubelet.go:317] Watching apiserver
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: E0214 15:15:39.428553   25384 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get "h>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: E0214 15:15:39.430406   25384 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list *v1.Node: Get "https://>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: E0214 15:15:39.430608   25384 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get "h>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: E0214 15:15:39.430522   25384 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list *v1.Service: Get "https>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: E0214 15:15:39.431237   25384 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list *v1.Node: Get "https://>
Feb 14 15:15:39 k8node1.kube.local kubelet[25384]: E0214 15:15:39.435296   25384 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list *v1.Service: Get "https>
Feb 14 15:15:41 k8node1.kube.local kubelet[25384]: E0214 15:15:41.261836   25384 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get "h>
Feb 14 15:15:41 k8node1.kube.local kubelet[25384]: E0214 15:15:41.981080   25384 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:526: Failed to list *v1.Node: Get "https://>
Feb 14 15:15:42 k8node1.kube.local kubelet[25384]: E0214 15:15:42.485722   25384 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/kubelet.go:517: Failed to list *v1.Service: Get "https>
Feb 14 15:15:44 k8node1.kube.local kubelet[25384]: E0214 15:15:44.757167   25384 reflector.go:178] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get "h>
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.741740   25384 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chai>
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: I0214 15:15:45.757726   25384 kuberuntime_manager.go:211] Container runtime docker initialized, version: 19.03.13, apiVersion: 1.40.0
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: I0214 15:15:45.758340   25384 server.go:1125] Started kubelet
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.758492   25384 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet>
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.759568   25384 event.go:269] Unable to write event: 'Post "https://172.16.52.2:6443/api/v1/namespaces/default/events">
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: I0214 15:15:45.761982   25384 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: I0214 15:15:45.762596   25384 server.go:145] Starting to listen on 0.0.0.0:10250
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: I0214 15:15:45.763392   25384 server.go:393] Adding debug handlers to kubelet server.
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: I0214 15:15:45.768934   25384 volume_manager.go:265] Starting Kubelet Volume Manager
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: I0214 15:15:45.769572   25384 desired_state_of_world_populator.go:139] Desired state populator starts to run
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.769733   25384 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSIDriver: Get "https:>
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.769989   25384 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSIDriver: Get "https:>
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.771286   25384 controller.go:136] failed to ensure node lease exists, will retry in 200ms, error: Get "https://172.16>
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: I0214 15:15:45.808541   25384 status_manager.go:158] Starting to sync pod status with apiserver
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: I0214 15:15:45.808620   25384 kubelet.go:1821] Starting kubelet main sync loop.
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.808709   25384 kubelet.go:1845] skipping pod synchronization - [container runtime status check may not have completed>
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.810015   25384 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.RuntimeClass: Get>
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.810866   25384 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.RuntimeClass: Get>
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: W0214 15:15:45.847142   25384 container.go:526] Failed to update stats for container "/": failed to parse memory.usage_in_bytes - op>
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: I0214 15:15:45.868960   25384 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.878661   25384 kubelet.go:2267] node "k8node1.kube.local" not found
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: I0214 15:15:45.906578   25384 kubelet_node_status.go:70] Attempting to register node k8node1.kube.local
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.906975   25384 kubelet_node_status.go:92] Unable to register node "k8node1.kube.local" with API server: Post "https:/>
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.909125   25384 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed >
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.971755   25384 controller.go:136] failed to ensure node lease exists, will retry in 400ms, error: Get "https://172.16>
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: E0214 15:15:45.979138   25384 kubelet.go:2267] node "k8node1.kube.local" not found
Feb 14 15:15:45 k8node1.kube.local kubelet[25384]: I0214 15:15:45.980502   25384 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Feb 14 15:15:46 k8node1.kube.local kubelet[25384]: I0214 15:15:46.009365   25384 cpu_manager.go:184] [cpumanager] starting with none policy
Feb 14 15:15:46 k8node1.kube.local kubelet[25384]: I0214 15:15:46.009387   25384 cpu_manager.go:185] [cpumanager] reconciling every 10s
Feb 14 15:15:46 k8node1.kube.local kubelet[25384]: I0214 15:15:46.009429   25384 state_mem.go:36] [cpumanager] initializing new in-memory state store
Feb 14 15:15:46 k8node1.kube.local kubelet[25384]: I0214 15:15:46.009622   25384 state_mem.go:88] [cpumanager] updated default cpuset: ""
Feb 14 15:15:46 k8node1.kube.local kubelet[25384]: I0214 15:15:46.009639   25384 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Feb 14 15:15:46 k8node1.kube.local kubelet[25384]: I0214 15:15:46.009681   25384 policy_none.go:43] [cpumanager] none policy: Start
Feb 14 15:15:46 k8node1.kube.local kubelet[25384]: F0214 15:15:46.009703   25384 kubelet.go:1383] Failed to start ContainerManager failed to get rootfs info: unable to find data in me>
Feb 14 15:15:46 k8node1.kube.local systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░ 
░░ An ExecStart= process belonging to unit kubelet.service has exited.
░░ 
░░ The process' exit code is 'exited' and its exit status is 255.
Feb 14 15:15:46 k8node1.kube.local systemd[1]: kubelet.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
░░ 
░░ The unit kubelet.service has entered the 'failed' state with result 'exit-code'.
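
For what it is worth, the "mountpoint for cpu not found" and "Failed to start ContainerManager failed to get rootfs info" lines above are also consistent with the host running the cgroup v2 unified hierarchy, which has been the Fedora default since Fedora 31 and which the kubelet v1.18 / Docker 19.03 combination in this log does not support. A rough sketch of how to check for this, and how to switch the host back to the legacy hierarchy if needed:

stat -fc %T /sys/fs/cgroup/    # "cgroup2fs" means unified (v2); "tmpfs" means legacy/hybrid (v1)

# If the host is on cgroup v2, boot it back into the legacy hierarchy (commonly documented Fedora approach)
sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
sudo reboot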

This is also my first time posting on Stack Overflow, although I have been referring to this site for years and have even cited it in academic papers.

Answer 1 (ff29svar)

As a fix, try turning off swap:

$ sudo swapoff -a
$ sudo sed -i '/ swap / s/^/#/' /etc/fstab

Then reboot your VM. After that, run:

$ kubeadm reset
$ kubeadm init --ignore-preflight-errors all

See: kubeadm-timeout, kubeadm-swapoff
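
Before re-running kubeadm init, a quick way to confirm swap is actually off (standard util-linux/procps commands):

$ swapon --show    # prints nothing when no swap devices are active
$ free -h          # the Swap: line should show 0B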

Answer 2 (mzsu5hc0)

Try uninstalling the service responsible for creating the swap:

sudo dnf remove zram-generator-defaults

This should resolve your issue.
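
After removing the package and rebooting, a quick check that no zram swap device comes back (both tools ship with util-linux):

$ zramctl          # should list no zram devices
$ swapon --show    # should print nothing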
