Kubernetes Pod stuck in the Pending phase (error: FailedScheduling: node(s) didn't match node selector)

uajslkp6 · posted 2023-01-25 in Kubernetes

I have a Pod with a problem: it is stuck in the Pending state.
When I describe the Pod, this is what I see:

Events:
  Type     Reason             Age                From                Message
  ----     ------             ----               ----                -------
  Normal   NotTriggerScaleUp  1m (x58 over 11m)  cluster-autoscaler  pod didn't trigger scale-up (it wouldn't fit if a new node is added): 2 node(s) didn't match node selector
  Warning  FailedScheduling   1m (x34 over 11m)  default-scheduler   0/6 nodes are available: 6 node(s) didn't match node selector.

If I check the logs, there is nothing in them (the output is simply empty).
Here is my Pod YAML file:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    checksum/config: XXXXXXXXXXX
    checksum/dashboards-config: XXXXXXXXXXX
  creationTimestamp: 2020-02-11T10:15:15Z
  generateName: grafana-654667db5b-
  labels:
    app: grafana-grafana
    component: grafana
    pod-template-hash: "2102238616"
    release: grafana
  name: grafana-654667db5b-tnrlq
  namespace: monitoring
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: grafana-654667db5b
    uid: xxxx-xxxxx-xxxxxxxx-xxxxxxxx
  resourceVersion: "98843547"
  selfLink: /api/v1/namespaces/monitoring/pods/grafana-654667db5b-tnrlq
  uid: xxxx-xxxxx-xxxxxxxx-xxxxxxxx
spec:
  containers:
  - env:
    - name: GF_SECURITY_ADMIN_USER
      valueFrom:
        secretKeyRef:
          key: xxxx
          name: grafana
    - name: GF_SECURITY_ADMIN_PASSWORD
      valueFrom:
        secretKeyRef:
          key: xxxx
          name: grafana
    - name: GF_INSTALL_PLUGINS
      valueFrom:
        configMapKeyRef:
          key: grafana-install-plugins
          name: grafana-config
    image: grafana/grafana:5.0.4
    imagePullPolicy: Always
    name: grafana
    ports:
    - containerPort: 3000
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /api/health
        port: 3000
        scheme: HTTP
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 30
    resources:
      requests:
        cpu: 200m
        memory: 100Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/grafana
      name: config-volume
    - mountPath: /var/lib/grafana/dashboards
      name: dashboard-volume
    - mountPath: /var/lib/grafana
      name: storage-volume
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-tqb6j
      readOnly: true
  dnsPolicy: ClusterFirst
  initContainers:
  - command:
    - sh
    - -c
    - cp /tmp/config-volume-configmap/* /tmp/config-volume 2>/dev/null || true; cp
      /tmp/dashboard-volume-configmap/* /tmp/dashboard-volume 2>/dev/null || true
    image: busybox
    imagePullPolicy: Always
    name: copy-configs
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /tmp/config-volume-configmap
      name: config-volume-configmap
    - mountPath: /tmp/dashboard-volume-configmap
      name: dashboard-volume-configmap
    - mountPath: /tmp/config-volume
      name: config-volume
    - mountPath: /tmp/dashboard-volume
      name: dashboard-volume
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-tqb6j
      readOnly: true
  nodeSelector:
    nodePool: cluster
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 300
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir: {}
    name: config-volume
  - emptyDir: {}
    name: dashboard-volume
  - configMap:
      defaultMode: 420
      name: grafana-config
    name: config-volume-configmap
  - configMap:
      defaultMode: 420
      name: grafana-dashs
    name: dashboard-volume-configmap
  - name: storage-volume
    persistentVolumeClaim:
      claimName: grafana
  - name: default-token-tqb6j
    secret:
      defaultMode: 420
      secretName: default-token-tqb6j
status:
  conditions:
  - lastProbeTime: 2020-02-11T10:45:37Z
    lastTransitionTime: 2020-02-11T10:15:15Z
    message: '0/6 nodes are available: 6 node(s) didn''t match node selector.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: Burstable

Any idea how I should debug this further?

yzckvree 1#

Solution: you can do one of the following two things to let the scheduler complete the pod creation request.
1. You can remove these lines from your pod YAML and recreate the pod from scratch (if you do need a selector, follow step 2 below instead):

nodeSelector: 
    nodePool: cluster

2. You can make sure the nodePool: cluster label is added to all nodes, so that the pod can be scheduled with the existing selector (you can verify the labels with the commands after this answer).
You can label a node with this command:

kubectl label nodes <your node name> nodePool=cluster

Run the above command for every node, substituting each node's name from your cluster details, or only for the nodes you want this selector to target.
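
To verify that the label is in place, you can list it on the nodes (both flags below are standard kubectl options):

# Show every node together with the value of its nodePool label
kubectl get nodes -L nodePool

# List only the nodes matching the pod's selector; if this returns
# nothing, the pod will stay Pending
kubectl get nodes -l nodePool=cluster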

xtfmy6hx 2#

Your pod is probably using a node selector that the scheduler cannot satisfy. Check whether your pod spec contains something like this:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  ...
  nodeSelector:
    disktype: ssd

Then check whether the nodes are labelled accordingly.
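
For example, to see what labels your nodes actually carry (standard kubectl commands; <node-name> is a placeholder):

# Dump all nodes with their full label sets
kubectl get nodes --show-labels

# Inspect a single node in detail, including its labels and taints
kubectl describe node <node-name>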

cig3rfwq 3#

The simplest option is to use nodeName in the Pod YAML.
First, get the name of the node you want the Pod to run on:

kubectl get nodes

Then use the property below in the Pod definition (YAML) so that the Pod runs only on that node:

nodeName: seliiuvd05714
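
For context, here is a minimal sketch of where nodeName sits in a Pod spec (the pod name and image are only illustrative; seliiuvd05714 is the example node name from above):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pinned   # hypothetical name
spec:
  # nodeName bypasses the scheduler entirely: the kubelet on this node
  # starts the Pod directly, and nodeSelector is no longer consulted
  nodeName: seliiuvd05714
  containers:
  - name: nginx
    image: nginx

Because this skips scheduling altogether, it is best treated as a debugging aid; fixing the nodeSelector or the node labels, as in the answers above, is the more durable fix.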
