My permissions all worked fine before. After upgrading to EKS 1.25, I started getting the following error when running kubectl logs pod -n namespace:
I tried to debug it. I looked at the ConfigMap, ClusterRole, and RoleBinding and didn't see any obvious problem (I actually created these objects two years ago; maybe I'm missing something new in the latest Kubernetes version?).
Internal error occurred: Authorization error (user=kube-apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)
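A quick sanity check (a sketch; the pod and namespace names are placeholders) to separate your own RBAC from the apiserver-to-kubelet hop that the error names:

```shell
# Reproduce the failure:
kubectl logs <pod> -n <namespace>

# Check whether your user is allowed to read pod logs at all:
kubectl auth can-i get pods --subresource=log -n <namespace>
```

If the second command prints "yes", your user-side RBAC is intact, which points at the apiserver -> kubelet authorization path (nodes/proxy) named in the error rather than at the ClusterRole shown below.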
The aws-auth ConfigMap:
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::<some-number>:role/eksctl-<xyz-abs>-nodegrou-NodeInstanceRole-DMQXBTLLXHNU
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - userarn: arn:aws:iam::043519645107:user/kube-developer
      username: kube-developer
      groups:
      - kube-developer
kind: ConfigMap
metadata:
  creationTimestamp: "2020-07-03T16:55:08Z"
  name: aws-auth
  namespace: kube-system
  resourceVersion: "104191269"
  uid: 844f189d-b3d6-4204-bf85-7b789c0ee91a
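For reference, the same mapping can be inspected without hand-reading the YAML (the cluster name is a placeholder):

```shell
# Dump the live aws-auth ConfigMap:
kubectl get configmap aws-auth -n kube-system -o yaml

# Or let eksctl render the IAM identity mappings it manages:
eksctl get iamidentitymapping --cluster <cluster-name>
```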
ClusterRole and ClusterRoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-developer-cr
rules:
- apiGroups: ["*"]
  resources:
  - configmaps
  - endpoints
  - events
  - ingresses
  - ingresses/status
  - services
  verbs:
  - create
  - get
  - list
  - update
  - watch
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-developer-crb
subjects:
- kind: Group
  name: kube-developer
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: kube-developer-cr
  apiGroup: rbac.authorization.k8s.io
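The binding above can be exercised with impersonation (a sketch; the username is a placeholder, and your own user needs "impersonate" permission for --as/--as-group to work):

```shell
# A resource the ClusterRole grants -- should print "yes":
kubectl auth can-i list services --as some-user --as-group kube-developer

# A resource it does not grant -- should print "no":
kubectl auth can-i get nodes --as some-user --as-group kube-developer
```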
Error when drilling down into the running pod (screenshot omitted).
-- EDIT --
I tried creating a ClusterRoleBinding for the same user thrown in the error message, kube-apiserver-kubelet-client, assigning it the roleRef kubelet-api-admin. I still get the same problem.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-apiserver
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kubelet-api-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kube-apiserver-kubelet-client
-- EDIT --
On the second day of debugging, I spun up another EKS cluster. I found that it had CSRs (Certificate Signing Requests), while my EKS cluster was missing them.
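The CSR comparison can be done with standard kubectl commands (the CSR name below is a placeholder):

```shell
# List certificate signing requests; on the working cluster this shows
# kubelet-serving CSRs, on the broken one it comes back empty:
kubectl get csr

# A pending kubelet-serving CSR can be approved manually:
kubectl certificate approve <csr-name>
```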
1 Answer
I had the same symptom when upgrading EKS. I had upgraded EKS and added nodes running the new kubelet version, but had not moved the running workloads onto the new nodes, hence the error message. I got it working when I:
1. Moved the instances of the nodes running the old k8s version to "Standby" (I used the AWS console, but it can also be done from the CLI).
2. Drained the old nodes so Kubernetes rescheduled their workloads onto the new nodes. I used:
kubectl drain <node>
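The two steps above can be sketched from the CLI (a sketch, not the answerer's exact commands; the Auto Scaling group name, instance ID, and node name are placeholders):

```shell
# Step 1: move an old instance to Standby in its Auto Scaling group:
aws autoscaling enter-standby \
  --auto-scaling-group-name <asg-name> \
  --instance-ids <instance-id> \
  --should-decrement-desired-capacity

# Step 2: drain the old node; common flags for nodes running
# DaemonSets and pods with emptyDir volumes:
kubectl drain <node> --ignore-daemonsets --delete-emptydir-data
```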