Kubernetes: how do I assign a namespace to specific nodes?

hyrbngr7  posted on 2022-12-03 in Kubernetes

Is there a way to configure a nodeSelector at the namespace level?
I want to run workloads only on certain nodes for this namespace.

nnvyjq4y1#

To achieve this you can use the PodNodeSelector admission controller.
First, you need to enable it in your kube-apiserver:

  • Edit /etc/kubernetes/manifests/kube-apiserver.yaml:
  • find the --enable-admission-plugins= flag
  • add the PodNodeSelector parameter (see the sketch below)
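
A minimal sketch of what the edited flag might look like; the exact set of plugins in your manifest may differ, just append PodNodeSelector to the existing list:

# illustrative excerpt from /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,PodNodeSelector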

Now you can specify the scheduler.alpha.kubernetes.io/node-selector option in the annotations of your namespace, for example:

apiVersion: v1
kind: Namespace
metadata:
  name: your-namespace
  annotations:
    scheduler.alpha.kubernetes.io/node-selector: env=test
spec: {}
status: {}

After these steps, all the pods created in this namespace will have this section automatically added:

nodeSelector:
  env: test
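
For illustration, a pod created in this namespace would then look roughly like this (the pod name and nginx image are placeholders, not part of the original answer):

apiVersion: v1
kind: Pod
metadata:
  name: example-pod            # placeholder name
  namespace: your-namespace
spec:
  nodeSelector:                # injected automatically by PodNodeSelector
    env: test
  containers:
  - name: nginx
    image: nginx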

You can find more information about PodNodeSelector in the official Kubernetes documentation: https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#podnodeselector

kubeadm users

If you deployed your cluster using kubeadm and if you want to make this configuration persistent, you have to update your kubeadm config file:

kubectl edit cm -n kube-system kubeadm-config

specify extraArgs with custom values under the apiServer section:

apiServer: 
  extraArgs: 
    enable-admission-plugins: NodeRestriction,PodNodeSelector

then update your kube-apiserver static manifest on all control-plane nodes:

# Kubernetes 1.22 and forward:
kubectl get configmap -n kube-system kubeadm-config -o=jsonpath="{.data.ClusterConfiguration}" > kubeadm-config.yaml

# Before Kubernetes 1.22:
# "kubeadmin config view" was deprecated in 1.19 and removed in 1.22
# Reference: https://github.com/kubernetes/kubeadm/issues/2203
kubeadm config view > kubeadm-config.yaml

# Regenerate the kube-apiserver static manifest from the file produced by either of the commands above
kubeadm init phase control-plane apiserver --config kubeadm-config.yaml
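
To quickly confirm that the regenerated manifest includes the plugin (assuming the default kubeadm manifest path):

grep 'enable-admission-plugins' /etc/kubernetes/manifests/kube-apiserver.yaml
# the output should now contain PodNodeSelector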

kubespray users

You can just use the kube_apiserver_enable_admission_plugins variable in your api-server configuration variables:

kube_apiserver_enable_admission_plugins:
   - PodNodeSelector

jhdbpxl92#

I fully agree with @kvaps' answer, but something is missing: you have to add a label to the node:

kubectl label node <yournode> env=test

This way, pods created in the namespace with scheduler.alpha.kubernetes.io/node-selector: env=test will only be schedulable on nodes that have the env=test label.
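
To double-check which nodes carry the label (a quick sanity check, not part of the original answer):

kubectl get nodes -l env=test
# this should list only the node(s) you just labelled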

pgvzfuti3#

To dedicate nodes to hosting only the resources that belong to a namespace, you also have to *prevent other resources from being scheduled on those nodes*.
This can be achieved with a combination of podSelector and taint, injected via the admission controller when you create resources in the namespace. This way you don't have to manually label each resource and add tolerations to it; it is enough to create them in the namespace.
Objectives of the two properties:

  • podSelector *forces the scheduling of resources only on the selected nodes*
  • taint *denies scheduling on the selected nodes of any resource that is not in the namespace*

Configuration of the node/node pool

Add the taint to the nodes you want to dedicate to the namespace:

kubectl taint nodes project.example.com/GPUsNodePool=true:NoSchedule -l=nodesWithGPU=true

This example adds the taint to the nodes that already have the label nodesWithGPU=true. You can also taint nodes one by one by name: kubectl taint node my-node-name project.example.com/GPUsNodePool=true:NoSchedule
Add the label:

kubectl label nodes project.example.com/GPUsNodePool=true -l=nodesWithGPU=true

The same is done if, for example, you use Terraform and AKS. The node pool configuration:

resource "azurerm_kubernetes_cluster_node_pool" "GPUs_node_pool" {
   name                  = "gpusnp"
   kubernetes_cluster_id = azurerm_kubernetes_cluster.clustern_name.id
   vm_size               = "Standard_NC12" # https://azureprice.net/vm/Standard_NC12
   node_taints = [
       "project.example.com/GPUsNodePool=true:NoSchedule"
   ]
   node_labels = {
       "project.example.com/GPUsNodePool" = "true"
   }
   node_count = 2
}

Namespace creation

Then create the namespace with the instructions for the admission controllers (note that the scheduler.alpha.kubernetes.io/defaultTolerations annotation is handled by the PodTolerationRestriction admission plugin, which has to be enabled just like PodNodeSelector):

apiVersion: v1
kind: Namespace
metadata:
  name: gpu-namespace
  annotations:
    scheduler.alpha.kubernetes.io/node-selector: "project.example.com/GPUsNodePool=true"  # poorly documented: format has to be of "selector-label=label-val"
    scheduler.alpha.kubernetes.io/defaultTolerations: '[{"operator": "Equal", "value": "true", "effect": "NoSchedule", "key": "project.example.com/GPUsNodePool"}]'
    project.example.com/description: 'This namespace is dedicated only to resources that need a GPU.'

Done! Create your resources in the namespace and the admission controllers together with the scheduler will do the rest.
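
For reference, a pod created in gpu-namespace should end up with roughly the following injected into its spec (a sketch derived from the annotations above, not output copied from a cluster):

spec:
  nodeSelector:                              # injected by PodNodeSelector
    project.example.com/GPUsNodePool: "true"
  tolerations:                               # injected by PodTolerationRestriction
  - key: project.example.com/GPUsNodePool
    operator: Equal
    value: "true"
    effect: NoSchedule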

Test

Create a sample pod, with no labels or tolerations, but inside the namespace:

kubectl run test-dedicated-ns --image=nginx --namespace=gpu-namespace

# get pods in the namespace
kubectl get po -n gpu-namespace

# get node name 
kubectl get po test-dedicated-ns -n gpu-namespace -o jsonpath='{.spec.nodeName}'

# check running pods on a node
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node>
