Kubernetes pod liveness probe fails with no error message

Asked by mlmc2os5 on 2023-02-11 in Kubernetes · 2 answers

I'm running a pod in EKS, and the pod has 3 containers. One of the containers restarts every 5 minutes with the event message "Liveness probe failed:", with no error text after the colon, so I can't tell why the liveness probe is failing.
Below is the (abridged) event output of kubectl describe pod:

2023-02-07T14:43:00Z   2023-02-07T14:43:00Z   1       default-scheduler   Normal    Scheduled   Successfully assigned <my pod name>/<my pod name>-8ffcd5c5c-5qt7v to ip-10-21-165-115.ap-south-1.compute.internal
2023-02-07T14:43:02Z   2023-02-07T14:43:02Z   1       kubelet             Normal    Pulled      Container image "<my docker repository>/proxyv2:1.12.8-034f0f9b2e-distroless" already present on machine
2023-02-07T14:43:02Z   2023-02-07T14:43:02Z   1       kubelet             Normal    Created     Created container istio-init
2023-02-07T14:43:02Z   2023-02-07T14:43:02Z   1       kubelet             Normal    Started     Started container istio-init
2023-02-07T14:43:03Z   2023-02-07T14:48:06Z   2       kubelet             Normal    Pulled      Container image "<my docker repository >/<my pod name>:1.74.3-SNAPSHOT" already present on machine
2023-02-07T14:43:03Z   2023-02-07T14:48:06Z   2       kubelet             Normal    Created     Created container <my pod name>
2023-02-07T14:43:03Z   2023-02-07T14:43:03Z   1       kubelet             Normal    Started     Started container <my pod name>
2023-02-07T14:43:03Z   2023-02-07T14:43:03Z   1       kubelet             Normal    Pulled      Container image "<my docker repository >/proxyv2:1.12.8-034f0f9b2e-distroless" already present on machine
2023-02-07T14:43:03Z   2023-02-07T14:43:03Z   1       kubelet             Normal    Created     Created container istio-proxy
2023-02-07T14:43:03Z   2023-02-07T14:43:03Z   1       kubelet             Normal    Started     Started container istio-proxy
2023-02-07T14:43:04Z   2023-02-07T14:43:06Z   5       kubelet             Warning   Unhealthy   Readiness probe failed: Get "http://10.21.169.218:15021/healthz/ready": dial tcp 10.21.169.218:15021: connect: connection refused
2023-02-07T14:47:31Z   2023-02-07T14:58:02Z   18      kubelet             Warning   Unhealthy   Readiness probe failed:
2023-02-07T14:47:41Z   2023-02-07T14:48:01Z   3       kubelet             Warning   Unhealthy   Liveness probe failed:
2023-02-07T14:48:01Z   2023-02-07T14:48:01Z   1       kubelet             Normal    Killing     Container <my pod name> failed liveness probe, will be restarted

Here is my Dockerfile:

FROM openjdk:8-jdk-alpine

ARG JAR_FILE
ARG SERVICE_PORT
ENV JMX_VERSION=0.12.0
ENV GRPC_HEALTH_PROBE_VERSION=v0.4.5
ENV GRPCURL_VERSION=1.8.7

# Install and configure JMX exporter
RUN mkdir -p /opt/jmx
COPY ./devops/jmx-config.yaml /opt/jmx/config.yaml
RUN wget https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/${JMX_VERSION}/jmx_prometheus_javaagent-${JMX_VERSION}.jar -O /opt/jmx/jmx.jar

# Install grpc_health_probe binary
RUN wget -qO/bin/grpc_health_probe https://github.com/grpc-ecosystem/grpc-health-probe/releases/download/${GRPC_HEALTH_PROBE_VERSION}/grpc_health_probe-linux-amd64 && \
    chmod +x /bin/grpc_health_probe

# Install grpcurl binary
RUN wget -P /tmp/ https://github.com/fullstorydev/grpcurl/releases/download/v${GRPCURL_VERSION}/grpcurl_${GRPCURL_VERSION}_linux_x86_64.tar.gz \
    && tar -xvf /tmp/grpcurl* -C /bin/ \
    && chmod +x /bin/grpcurl \
    && rm -rf /tmp/grpcurl*

# Install jq
RUN apk add jq

# Copy .proto file used by grpcurl
RUN mkdir -p /lib-grpc-actuator/src/main/proto
COPY ./lib-grpc-actuator/src/main/proto/grpc_health.proto /lib-grpc-actuator/src/main/proto

# Copy health check shell script
COPY grpcurl_health.sh /opt/
RUN chmod +x /opt/grpcurl_health.sh

# Expose grpc metric port, jmx exporter port
EXPOSE 9101 9110

COPY ${JAR_FILE} /app.jar

# Expose service port
EXPOSE ${SERVICE_PORT}

CMD java -Dlog4j.configuration=file:/opt/log4j-properties/log4j.properties -XX:+UseG1GC $JAVA_OPTS -javaagent:/opt/jmx/jmx.jar=9101:/opt/jmx/config.yaml -jar -Dconfig-file=/opt/config-properties/config.properties /app.jar

Here is the shell script I use for the liveness and readiness probes:

#!/bin/sh

# gRPC port of the service, passed as the first argument
service_port=$1

# Call the custom health RPC via grpcurl
response=$(/bin/grpcurl \
    -plaintext \
    -import-path /lib-grpc-actuator/src/main/proto/ \
    -proto grpc_health.proto \
    :$service_port \
    com.<org name>.grpc.generated.grpc_health.HealthCheckService/health)

# Extract the status field from the JSON response
status=$(echo "$response" | jq -r .status)

# Echo the raw response for debugging
echo "$response"

# Exit 0 (healthy) or 1 (unhealthy) based on the reported status
if [ "$status" = "UP" ]
then
    echo "service is healthy : $response"
    exit 0
else
    echo "service is down : $response"
    exit 1
fi

Here is my Kubernetes Deployment YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "15"
    kubernetes.io/change-cause: kubectl apply --kubeconfig=config --filename=manifests.yaml
      --record=true
    traffic.sidecar.istio.io/excludeOutboundIPRanges: '*'
  creationTimestamp: "2023-01-11T19:23:33Z"
  generation: 42
  name: <my pod name>
  namespace: <my pod name>
  resourceVersion: "305338514"
  uid: 4053e956-e28e-4c35-9b84-b50df2a1b8ff
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: <my pod name>
      harness.io/track: stable
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: <my pod name>
        harness.io/release-name: release-89ef3582-d056-337f-8df0-97a3e7327caa
        harness.io/track: stable
        version: 1.74.3-SNAPSHOT
    spec:
      containers:
      - env:
        - name: JAVA_OPTS
          value: -Xms500m -Xmx900m
        image: <my docker registry>/<my pod name>:1.74.3-SNAPSHOT
        imagePullPolicy: IfNotPresent
        livenessProbe:
          exec:
            command:
            - /bin/sh
            - /opt/grpcurl_health.sh
            - "50045"
          failureThreshold: 3
          initialDelaySeconds: 20
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: <my pod name>
        ports:
        - containerPort: 50045
          name: grpc
          protocol: TCP
        - containerPort: 9110
          name: http-metrics
          protocol: TCP
        - containerPort: 9101
          name: jmx-metrics
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - /bin/sh
            - /opt/grpcurl_health.sh
            - "50045"
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: "2"
            memory: 2Gi
          requests:
            cpu: "1"
            memory: 1Gi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /opt/config-properties
          name: config-properties
        - mountPath: /opt/log4j-properties
          name: log4j-properties
        - mountPath: /opt/script-logs
          name: debug
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: <my pod name>-dockercfg
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: backend-services
      serviceAccountName: backend-services
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: config-properties-9
        name: config-properties
      - configMap:
          defaultMode: 420
          name: log4j-properties-9
        name: log4j-properties
      - hostPath:
          path: /tmp/
          type: ""
        name: debug

Please help me resolve this issue.
Instead of the shell script, I also tried putting the whole command inline in the liveness and readiness probes, as shown below, but I get the same result.

sh -c "if [ $(/bin/grpcurl -plaintext -import-path /lib-grpc-actuator/src/main/proto/ -proto grpc_health.proto :50045 com.<my org name>.grpc.generated.grpc_health.HealthCheckService/health | jq -r .status) == 'UP' ]; then exit 0; else echo $(/bin/grpcurl -plaintext -import-path /lib-grpc-actuator/src/main/proto/ -proto grpc_health.proto :50045 com.<my org name>.grpc.generated.grpc_health.HealthCheckService/health) && exit 1; fi"
Answer 1 (0g0grzrc):

A few things:

1. Kubernetes 1.24+ includes a built-in gRPC probe.

You can use:

livenessProbe:
  grpc:
    port: 50045

This works for me.
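
For completeness, here's a slightly fuller sketch (the timing values are illustrative, not taken from your manifest). Note that the native gRPC probe speaks the standard grpc.health.v1 health-checking protocol, so your server must register that service:

livenessProbe:
  grpc:
    port: 50045
  initialDelaySeconds: 20
  periodSeconds: 10
  failureThreshold: 3
readinessProbe:
  grpc:
    port: 50045
  initialDelaySeconds: 10
  periodSeconds: 10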

2. Your container image includes grpc_health_probe, but you don't use it. It is probably the second-best choice after the native probe (above); using gRPCurl instead makes your life more complicated than it needs to be (see the sketch below).

This works for me.
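
A minimal sketch of an exec probe using that bundled binary (this also assumes the standard grpc.health.v1 health service is registered on the port):

livenessProbe:
  exec:
    command:
    - /bin/grpc_health_probe
    - -addr=:50045
  initialDelaySeconds: 20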

3. (I've simplified your script here.) You should consider using /bin/sh -c or /bin/bash -c and then providing the script as a string:
xxxxxxxxProbe:
  exec:
    command:
    - /bin/sh
    - -c
    - "./grpcurl_health.sh :50045"

It isn't explicit in your question, but I think you're using a non-standard variant of the gRPC Health Checking Protocol. I used the following version when reproducing your issue; it returns SERVING as the status value:

#!/usr/bin/env bash

ENDPOINT=${1}

STATUS=$(\
  grpcurl -plaintext  ${ENDPOINT} grpc.health.v1.Health/Check \
  | jq -r .status)

if [ "${STATUS}" == "SERVING" ]
then
    echo "Service is healthy"
    exit 0
else
    echo "service is unhealthy"
    exit 1
fi

I also use ENDPOINT rather than PORT for convenience.
This works for me.
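
That convenience shows up when testing from outside the pod, for example against a port-forwarded service (the script file name here is mine; the output text comes from the script above):

$ kubectl port-forward deployment/${DEPLOYMENT} 50051:50051 --namespace=${NAMESPACE} &
$ ./health.sh localhost:50051
Service is healthy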

4. A useful debugging mechanism in cases like this is to kubectl exec into the container and then run the commands manually:
# Example 1 (the native gRPC probe): not testable this way

# Example 2
kubectl exec \
--stdin --tty \
deployment/${DEPLOYMENT} \
--namespace=${NAMESPACE} \
--container=${CONTAINER} \
-- grpc_health_probe -addr=:50051
status: SERVING

# Example 3
kubectl exec \
--stdin --tty \
deployment/${DEPLOYMENT} \
--namespace=${NAMESPACE} \
--container=${CONTAINER} \
-- ./grpcurl_health.sh ":50051" && echo ${?}
Service is healthy
0
Answer 2 (l3zydbqr):

It looks like it is your istio container that is failing the probes.

2023-02-07T14:43:04Z   2023-02-07T14:43:06Z   5       kubelet             Warning   Unhealthy   Readiness probe failed: Get "http://10.21.169.218:15021/healthz/ready": dial tcp 10.21.169.218:15021: connect: connection refused
2023-02-07T14:47:31Z   2023-02-07T14:58:02Z   18      kubelet             Warning   Unhealthy   Readiness probe failed:
2023-02-07T14:47:41Z   2023-02-07T14:48:01Z   3       kubelet             Warning   Unhealthy   Liveness probe failed:

The readiness probe shows a connection refused error on port 15021, which is the istio health check port.
Your deployment manifest shows only one container. Can you share the istio sidecar configuration?
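
In the meantime, one way to see which probes the injected istio-proxy sidecar actually carries (a sketch; substitute your real pod name and namespace):

kubectl get pod <my pod name>-8ffcd5c5c-5qt7v \
  --namespace=<my pod name> \
  --output=yaml | grep -A6 'readinessProbe:'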
