Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-13T02:40:46Z", GoVersion:"go1.16.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.7", GitCommit:"e1d093448d0ed9b9b1a48f49833ff1ee64c05ba5", GitTreeState:"clean", BuildDate:"2021-06-03T00:20:57Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}
I have a Kubernetes CronJob whose purpose is to run some Azure server commands on a time-based schedule.
Running the container locally works fine, but manually triggering the CronJob through Lens, or letting it run on schedule, results in strange behaviour (running as a Job in the cloud produces unexpected results).
Here is the CronJob definition:
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: development-scale-down
  namespace: development
spec:
  schedule: "0 22 * * 0-4"
  concurrencyPolicy: Allow
  startingDeadlineSeconds: 60
  failedJobsHistoryLimit: 5
  jobTemplate:
    spec:
      backoffLimit: 0 # Do not retry
      activeDeadlineSeconds: 360 # 6 minutes
      template:
        spec:
          containers:
            - name: scaler
              image: myimage:latest
              imagePullPolicy: Always
              env: ...
          restartPolicy: "Never"
I triggered the CronJob manually (via Lens) and it created the job development-scale-down-manual-xwp1k.
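For reference, a sketch of the kubectl equivalent of that manual trigger (Lens generates the random -xwp1k suffix itself; here the job name is ours to choose):

# Create a one-off Job from the CronJob's job template.
kubectl create job development-scale-down-manual \
  --from=cronjob/development-scale-down \
  --namespace development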
Describing the job after it completed shows the following:
$ kubectl describe job development-scale-down-manual-xwp1k
Name:                     development-scale-down-manual-xwp1k
Namespace:                development
Selector:                 controller-uid=ecf8fb47-cd50-42eb-9a6f-888f7e2c9257
Labels:                   controller-uid=ecf8fb47-cd50-42eb-9a6f-888f7e2c9257
                          job-name=development-scale-down-manual-xwp1k
Annotations:              <none>
Parallelism:              1
Completions:              1
Start Time:               Wed, 04 Aug 2021 09:40:28 +1200
Active Deadline Seconds:  360s
Pods Statuses:            0 Running / 0 Succeeded / 1 Failed
Pod Template:
  Labels:  controller-uid=ecf8fb47-cd50-42eb-9a6f-888f7e2c9257
           job-name=development-scale-down-manual-xwp1k
  Containers:
   scaler:
    Image:      myimage:latest
    Port:       <none>
    Host Port:  <none>
    Environment:
      CLUSTER_NAME:    ...
      NODEPOOL_NAME:   ...
      NODEPOOL_SIZE:   ...
      RESOURCE_GROUP:  ...
      SP_APP_ID:       <set to the key 'application_id' in secret 'scaler-secrets'>      Optional: false
      SP_PASSWORD:     <set to the key 'application_pass' in secret 'scaler-secrets'>    Optional: false
      SP_TENANT:       <set to the key 'application_tenant' in secret 'scaler-secrets'>  Optional: false
    Mounts:  <none>
  Volumes:   <none>
Events:
  Type     Reason                Age  From            Message
  ----     ------                ---- ----            -------
  Normal   SuccessfulCreate      24m  job-controller  Created pod: development-scale-down-manual-xwp1k-b858c
  Normal   SuccessfulCreate      23m  job-controller  Created pod: development-scale-down-manual-xwp1k-xkkw9
  Warning  BackoffLimitExceeded  23m  job-controller  Job has reached the specified backoff limit
This differs from other issues I have read, where no "error deleting" event was mentioned. The events from kubectl get events tell an interesting story:
$ ktl get events | grep xwp1k
3m19s Normal Scheduled pod/development-scale-down-manual-xwp1k-b858c Successfully assigned development/development-scale-down-manual-xwp1k-b858c to aks-burst-37275452-vmss00000d
3m18s Normal Pulling pod/development-scale-down-manual-xwp1k-b858c Pulling image "myimage:latest"
2m38s Normal Pulled pod/development-scale-down-manual-xwp1k-b858c Successfully pulled image "myimage:latest" in 40.365655229s
2m23s Normal Created pod/development-scale-down-manual-xwp1k-b858c Created container myimage
2m23s Normal Started pod/development-scale-down-manual-xwp1k-b858c Started container myimage
2m12s Normal Killing pod/development-scale-down-manual-xwp1k-b858c Stopping container myimage
2m12s Normal Scheduled pod/development-scale-down-manual-xwp1k-xkkw9 Successfully assigned development/development-scale-down-manual-xwp1k-xkkw9 to aks-default-37275452-vmss000002
2m12s Normal Pulling pod/development-scale-down-manual-xwp1k-xkkw9 Pulling image "myimage:latest"
2m11s Normal Pulled pod/development-scale-down-manual-xwp1k-xkkw9 Successfully pulled image "myimage:latest" in 751.93652ms
2m10s Normal Created pod/development-scale-down-manual-xwp1k-xkkw9 Created container myimage
2m10s Normal Started pod/development-scale-down-manual-xwp1k-xkkw9 Started container myimage
3m19s Normal SuccessfulCreate job/development-scale-down-manual-xwp1k Created pod: development-scale-down-manual-xwp1k-b858c
2m12s Normal SuccessfulCreate job/development-scale-down-manual-xwp1k Created pod: development-scale-down-manual-xwp1k-xkkw9
2m1s Warning BackoffLimitExceeded job/development-scale-down-manual-xwp1k Job has reached the specified backoff limit
I don't know why the container was killed: the logs all look fine and there are no resource constraints. The container is removed very quickly, which leaves me little time to debug. The more verbose event line is below:
3m54s Normal Killing pod/development-scale-down-manual-xwp1k-b858c spec.containers{myimage} kubelet, aks-burst-37275452-vmss00000d Stopping container myimage 3m54s 1 development-scale-down-manual-xwp1k-b858c.1697e9d5e5b846ef
I noticed that the image pull initially takes a few seconds (40s); could that contribute to exceeding the startingDeadlineSeconds or some other part of the cron spec?
Any ideas or help appreciated, thank you.
2 Answers
Answer #1
Read the logs! Always helpful.
Context
For context, the job itself scales an AKS node pool. We have two: the default system pool, and a new user-controlled one. The cronjob is intended to scale the new user pool, not the system pool.
Investigation
I noticed that the scale-down job always takes longer than the scale-up job, because an image pull always happens when the scale-down job runs. I also noticed that the Killing event mentioned above originates from the kubelet (kubectl get events -o wide; see below). I went to check the kubelet logs on the host and noticed the hostname was a bit atypical (aks-burst-XXXXXXXX-vmss00000d): most hosts in our small development cluster end in a number rather than a "d". Then I realised the naming was different because this node was not part of the default node pool, and I could not check the kubelet logs because the host had already been deleted.
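For reference, the wide event output is what exposes the source column (kubelet plus host name); a sketch of the command, filtered the same way as earlier:

# -o wide adds the SOURCE column, which names the kubelet and its host.
kubectl get events -o wide -n development | grep xwp1k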
Cause
The job scales down compute resources. The scale-down always failed because it was always preceded by a scale-up, at which point there was a brand-new node in the cluster. Nothing was running on that node yet, so the next job was scheduled onto it. The job started on the new node, told Azure to scale the new node down to 0, and the kubelet then killed the job while it was still running.
Always being scheduled onto the new node also explains why the image pull happened every time.
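For illustration, the kind of thing the scaler container runs. The exact script isn't shown in the question, so this is a sketch reconstructed from the env vars in the job spec, using real az CLI commands:

# Authenticate with the service principal from scaler-secrets.
az login --service-principal \
  --username "$SP_APP_ID" \
  --password "$SP_PASSWORD" \
  --tenant "$SP_TENANT"

# Scale the user pool. When this runs on a node of that same pool,
# the node is deleted underneath the still-running pod.
az aks nodepool scale \
  --resource-group "$RESOURCE_GROUP" \
  --cluster-name "$CLUSTER_NAME" \
  --name "$NODEPOOL_NAME" \
  --node-count "$NODEPOOL_SIZE"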
Fix
I changed the spec and added a nodeSelector so that the job always runs on the system pool, which is more stable than the user pool.
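A minimal sketch of that change, assuming the standard AKS agentpool node label (the pool name "system" is from our setup; verify yours with kubectl get nodes --show-labels):

jobTemplate:
  spec:
    template:
      spec:
        # Pin the pod to the stable system pool so the job never lands
        # on the node it is about to scale down. "agentpool" is the
        # standard AKS node label.
        nodeSelector:
          agentpool: system
        containers:
          - name: scaler
            image: myimage:latest
        restartPolicy: "Never"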
Answer #2
We faced the same issue in our infrastructure.
In our case, the root cause was the cluster autoscaler, which killed and deleted the job in order to scale the cluster down and free one (or more) nodes.
We solved it with the "safe-to-evict" annotation, which prevents Kubernetes from killing the job on the autoscaler's behalf:
https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md
In our case, we pinned the blame on the autoscaler after reviewing the Kubernetes events for that namespace.
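For reference, a minimal sketch of where the annotation goes on the question's CronJob (the annotation key is the real cluster-autoscaler one; the surrounding fields mirror the spec above):

jobTemplate:
  spec:
    template:
      metadata:
        annotations:
          # Tell the cluster autoscaler it may not evict this pod in
          # order to remove the node it is running on.
          cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
      spec:
        containers:
          - name: scaler
            image: myimage:latest
        restartPolicy: "Never"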