kubeadm on a bare-metal Kubernetes cluster

cbjzeqam · asked on 2022-11-02 · Kubernetes

I am trying to create a single control-plane cluster on 3 bare-metal nodes (1 master and 2 workers) running Debian 10, with Docker as the container runtime. Each node has an external IP and an internal IP. I want the cluster to be configured on the internal network and to be reachable from the Internet. For that I used the following command (please correct me if something is wrong):

kubeadm init --control-plane-endpoint=10.10.0.1 --apiserver-cert-extra-sans={public_DNS_name},10.10.0.1 --pod-network-cidr=192.168.0.0/16
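
For reference, here is a minimal sketch (not from the original post) of the same init expressed as a kubeadm config file, assuming the v1beta2 kubeadm API shipped with Kubernetes v1.18; the {public_DNS_name} placeholder is kept as in the command above:

cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: "10.10.0.1:6443"
apiServer:
  certSANs:
  - "{public_DNS_name}"
  - "10.10.0.1"
networking:
  podSubnet: "192.168.0.0/16"   # must match the CIDR the CNI plugin expects
EOF
kubeadm init --config kubeadm-config.yaml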

After the init, I got:

kubectl get no -o wide
NAME                           STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION   CONTAINER-RUNTIME
dev-k8s-master-0.public.dns    Ready    master   16h   v1.18.2   10.10.0.1     <none>        Debian GNU/Linux 10 (buster)   4.19.0-8-amd64   docker://19.3.8

The init phase completed successfully and the cluster is reachable from the Internet. All pods are up and running except for coredns, which is expected to start only after a pod network has been applied.

kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
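
As a side note (not in the original question), the pod CIDR that the Calico manifest configures should agree with the --pod-network-cidr passed to kubeadm init; 192.168.0.0/16 is the v3.11 default, which can be confirmed straight from the manifest:

curl -s https://docs.projectcalico.org/v3.11/manifests/calico.yaml | grep -A 1 CALICO_IPV4POOL_CIDR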

After the network has been applied, the coredns pods are still not ready:

kubectl get po -A
NAMESPACE     NAME                                                   READY   STATUS             RESTARTS   AGE
kube-system   calico-kube-controllers-75d56dfc47-g8g9g               0/1     CrashLoopBackOff   192        16h
kube-system   calico-node-22gtx                                      1/1     Running            0          16h
kube-system   coredns-66bff467f8-87vd8                               0/1     Running            0          16h
kube-system   coredns-66bff467f8-mv8d9                               0/1     Running            0          16h
kube-system   etcd-dev-k8s-master-0                                  1/1     Running            0          16h
kube-system   kube-apiserver-dev-k8s-master-0                        1/1     Running            0          16h
kube-system   kube-controller-manager-dev-k8s-master-0               1/1     Running            0          16h
kube-system   kube-proxy-lp6b8                                       1/1     Running            0          16h
kube-system   kube-scheduler-dev-k8s-master-0                        1/1     Running            0          16h

Some logs from the failing pods:

kubectl -n kube-system logs calico-kube-controllers-75d56dfc47-g8g9g
2020-04-22 08:24:55.853 [INFO][1] main.go 88: Loaded configuration from environment config=&config.Config{LogLevel:"info", ReconcilerPeriod:"5m", CompactionPeriod:"10m", EnabledControllers:"node", WorkloadEndpointWorkers:1, ProfileWorkers:1, PolicyWorkers:1, NodeWorkers:1, Kubeconfig:"", HealthEnabled:true, SyncNodeLabels:true, DatastoreType:"kubernetes"}
2020-04-22 08:24:55.855 [INFO][1] k8s.go 228: Using Calico IPAM
W0422 08:24:55.855525       1 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
2020-04-22 08:24:55.856 [INFO][1] main.go 109: Ensuring Calico datastore is initialized
2020-04-22 08:25:05.857 [ERROR][1] client.go 255: Error getting cluster information config ClusterInformation="default" error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded
2020-04-22 08:25:05.857 [FATAL][1] main.go 114: Failed to initialize Calico datastore error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded

coredns:

[INFO] plugin/ready: Still waiting on: "kubernetes"
I0422 08:29:12.275344       1 trace.go:116] Trace[1050055850]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-22 08:28:42.274382393 +0000 UTC m=+59491.429700922) (total time: 30.000897581s):
Trace[1050055850]: [30.000897581s] [30.000897581s] END
E0422 08:29:12.275388       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0422 08:29:12.276163       1 trace.go:116] Trace[188478428]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-22 08:28:42.275499997 +0000 UTC m=+59491.430818380) (total time: 30.000606394s):
Trace[188478428]: [30.000606394s] [30.000606394s] END
E0422 08:29:12.276198       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0422 08:29:12.277424       1 trace.go:116] Trace[16697023]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-04-22 08:28:42.276675998 +0000 UTC m=+59491.431994406) (total time: 30.000689778s):
Trace[16697023]: [30.000689778s] [30.000689778s] END
E0422 08:29:12.277452       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
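
Both failing pods time out reaching the kube-apiserver service IP (10.96.0.1:443). As a diagnostic sketch added here for illustration (not part of the original post), that path can be tested from a throwaway pod; any HTTP response proves connectivity, while a hang reproduces the timeout seen in the logs:

kubectl run api-check --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -ksS -m 5 https://10.96.0.1:443/version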

Any ideas?

mnemlml8 · Answer #1

This answer is to bring attention to @Florin's suggestion:
I have also seen similar behaviour when a node had multiple public interfaces and calico picked the wrong one.
What I did was to set IP_AUTODETECTION_METHOD in the calico config.

Method to use to autodetect the IPv4 address for this host. This is only used when the IPv4 address is being autodetected. See IP autodetection methods for details of valid methods.
Read more here: https://docs.projectcalico.org/reference/node/configuration#ip-autodetection-methods
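
A minimal sketch of how that setting can be applied (an illustration, not the exact change from this answer): patch the calico-node daemonset so autodetection is pinned to the interface carrying the internal 10.10.x.x network; the interface name eth1 is an assumption and should be replaced with the real one:

kubectl -n kube-system set env daemonset/calico-node IP_AUTODETECTION_METHOD=interface=eth1

Another commonly used value is can-reach=10.10.0.1, which selects whichever interface has a route to the given address.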

rbpvctlc · Answer #2

I was facing the same issue, but the following worked for me; try it on your master node.

$ sudo iptables -P INPUT ACCEPT
$ sudo iptables -P FORWARD ACCEPT
$ sudo iptables -P FORWARD ACCEPT
$ sudo iptables -F
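
For context (an addition, not part of the original answer): Docker commonly sets the FORWARD chain policy to DROP, which can silently block pod-to-service traffic on bare metal, so it is worth checking that policy when debugging:

sudo iptables -L FORWARD -n | head -n 1   # "policy DROP" here often explains the timeouts

Also note that iptables changes made this way do not survive a reboot unless persisted (for example with the iptables-persistent package on Debian).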
