Some time ago an EKS managed node group was created; at the time, it also created the IAM role and the nodes. Fast forward to today: the same role is used by node groups in other clusters, and the CloudFormation stack contains both the node group and the role. Whenever I try to delete the first cluster's managed node group, the deletion fails because it tries to delete a role that is still in use.
So, how can I delete the node group and keep the role?
EDIT: Important information: the aws-auth ConfigMap was modified and the cluster is now unreachable.
1 Answer
b1uwtaje1#
Before deleting a managed node group, Amazon EKS first sets your Auto Scaling group's minimum, maximum, and desired sizes to zero, which causes the node group to scale down. Before terminating each instance, Amazon EKS sends a signal to drain the pods from that node and then waits a few minutes. If the pods have not drained after a few minutes, Amazon EKS allows Auto Scaling to continue terminating the instance. Once every instance is terminated, the Auto Scaling group is deleted.
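Deleting the node group through the EKS API (rather than by deleting the CloudFormation stack, which owns the role too) leaves the IAM role alone. A minimal sketch, assuming placeholder names `my-cluster` and `my-nodegroup`:

```shell
# Delete only the managed node group. This does not touch the node IAM role,
# which is a separate resource; avoid deleting the CloudFormation stack that
# contains the role.
aws eks delete-nodegroup \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup

# Optionally block until the deletion has finished (same placeholder names).
aws eks wait nodegroup-deleted \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup
```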
When you delete a managed node group whose node IAM role is not used by any other managed node group in the cluster, that role is removed from the aws-auth ConfigMap. If any self-managed node groups in the cluster use the same node IAM role, those self-managed nodes move to the NotReady status and cluster operations are disrupted. To minimize disruption, you can add the mapping back to the ConfigMap.
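Restoring the mapping means re-adding a `mapRoles` entry to the aws-auth ConfigMap in kube-system (this also matches the "cluster now unreachable" symptom in the edit: a broken mapping can lock nodes or principals out). A sketch, assuming a placeholder role ARN `arn:aws:iam::111122223333:role/my-node-role`; note that editing the ConfigMap requires credentials that still have cluster access, such as those of the cluster creator:

```yaml
# kubectl -n kube-system edit configmap aws-auth
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/my-node-role   # placeholder ARN
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```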
Refer to this document for more information.