```yaml
---
# Service account the client will use to restart the deployment.
# By default, pods running inside the cluster cannot do such things.
kind: ServiceAccount
apiVersion: v1
metadata:
  name: deployment-restart
  namespace: <YOUR NAMESPACE>
---
# Allow getting status of, and patching, only the one deployment
# you want to restart.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-restart
  namespace: <YOUR NAMESPACE>
rules:
  - apiGroups: ["apps", "extensions"]
    resources: ["deployments"]
    resourceNames: ["<YOUR DEPLOYMENT NAME>"]
    # "list" and "watch" are only needed if you want to use `rollout status`
    verbs: ["get", "patch", "list", "watch"]
---
# Bind the role to the service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-restart
  namespace: <YOUR NAMESPACE>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployment-restart
subjects:
  - kind: ServiceAccount
    name: deployment-restart
    namespace: <YOUR NAMESPACE>
```
And the cronjob spec itself:
```yaml
# batch/v1 is GA since Kubernetes 1.21; use batch/v1beta1 on older clusters
apiVersion: batch/v1
kind: CronJob
metadata:
  name: deployment-restart
  namespace: <YOUR NAMESPACE>
spec:
  concurrencyPolicy: Forbid
  schedule: '0 8 * * *' # cron spec of time; here, every day at 8 o'clock
  jobTemplate:
    spec:
      # This has a very low chance of failing, as all it does is prompt
      # Kubernetes to schedule a new replica set for the deployment.
      backoffLimit: 2
      # Timeout; makes most sense with the "wait for rollout" variant below.
      activeDeadlineSeconds: 600
      template:
        spec:
          # Name of the service account configured above.
          serviceAccountName: deployment-restart
          restartPolicy: Never
          containers:
            # Probably any kubectl image will do. Optionally pin a version,
            # but that should not be necessary as long as the kubectl version
            # is new enough to have `rollout restart`.
            - name: kubectl
              image: bitnami/kubectl
              command:
                - 'kubectl'
                - 'rollout'
                - 'restart'
                - 'deployment/<YOUR DEPLOYMENT NAME>'
```
7 answers

lc8prwob #1
Use a cronjob, but instead of running your pods directly, have it schedule a Kubernetes API command that restarts the deployment every day (kubectl rollout restart). That way, if anything goes wrong, the old pods are not shut down or removed: the rollout creates new replica sets, waits for them to come up, and only then shuts down the old pods and re-routes the traffic, so the service continues uninterrupted.
You have to set up RBAC (shown above) so that the Kubernetes client running inside the cluster has permission to make the required calls to the Kubernetes API.
Or, if you want the cronjob to wait for the rollout to finish, change the cronjob command to:
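The variant command itself did not survive extraction. Based on the surrounding text, it would be a sketch like the following, chaining `rollout restart` with `rollout status` (which is why the Role above needs the `list` and `watch` verbs):

```yaml
# Hypothetical replacement for the container's command field: restart,
# then block until the rollout completes (or activeDeadlineSeconds hits).
command:
  - 'bash'
  - '-c'
  - >-
    kubectl rollout restart deployment/<YOUR DEPLOYMENT NAME> &&
    kubectl rollout status deployment/<YOUR DEPLOYMENT NAME>
```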
mftmpeh8 #2
Another quick-and-dirty but less common option, for pods whose restart policy is Always (cron job pods must not use Always; see the cron job spec's pod template), is a livenessProbe that simply tests the time and restarts the pod on the specified schedule.
The time granularity is up to you, depending on how you return the date and test it ;)
Of course, this does not work if you are already using the liveness probe as an actual liveness probe ¯\_(ツ)_/¯

s2j5cfk0 #3
I borrowed the idea from @Ryan Lowe but modified it a bit: it will restart any pod older than 24 hours.
jckbn6z7 #4
There is a dedicated resource for this: CronJob.
Here is an example:
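The example itself did not survive extraction; a minimal sketch of what such a CronJob could look like (names and schedule are placeholders; the full RBAC-enabled variant appears at the top of this page):

```yaml
# Minimal CronJob sketch that restarts a deployment on a schedule.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-restart
spec:
  schedule: '0 3 * * *'      # every day at 03:00
  concurrencyPolicy: Forbid  # or Replace, as discussed below
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          # needs RBAC permissions such as those shown at the top of the page
          serviceAccountName: deployment-restart
          containers:
            - name: kubectl
              image: bitnami/kubectl
              command:
                - 'kubectl'
                - 'rollout'
                - 'restart'
                - 'deployment/<YOUR DEPLOYMENT NAME>'
```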
If you want the old pod to be replaced when the new one starts, change spec.concurrencyPolicy to Replace. With Forbid, creation of the new pod is skipped while an old one is still running.

qlfbtfca #5
Per cronjob-in-kubernetes-to-restart-delete-the-pod-in-a-deployment, you can create a kind: CronJob whose jobTemplate has containers. Your CronJob will then start those containers with an activeDeadlineSeconds of one day (i.e. until the restart). Following your example, for 8:00 AM it would be schedule: '0 8 * * *' (note that Kubernetes uses standard cron syntax; the Quartz-style ? is not valid here).
ryevplcw #6
We achieve this by modifying the deployment's manifest from a CRON job, passing a random parameter every 3 hours. We trigger the deployments specifically with Spinnaker, where we created a CRON job as follows.
The configuration steps look like this:
The patch manifest looks like this (Kubernetes restarts the pods whenever the YAML changes):
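The patch manifest did not survive extraction. A sketch of the kind of patch that forces a restart: any change to the pod template (here, a hypothetical annotation carrying the random value) makes Kubernetes roll the deployment.

```yaml
# Hypothetical strategic-merge patch applied to the Deployment; changing
# the annotation value changes the pod template, which triggers a
# rolling update of the pods.
spec:
  template:
    metadata:
      annotations:
        restart-trigger: "<RANDOM VALUE>"
```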
Since there is a chance that all the pods restart at the same time and cause downtime, we use a rolling-update strategy with maxUnavailable set to 0%.
This spins up new pods first and only then terminates the old ones.
q8l4jmvw #7
Here, 86400 is the desired period in seconds (in this example, one restart per day).
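The probe this answer refers to did not survive extraction. A sketch of what such a livenessProbe could look like: the exec command fails once the container is older than 86400 seconds, so the kubelet restarts it. The use of /proc/1's timestamp as the container's start time, and the probe timings, are assumptions.

```yaml
# Hypothetical container spec fragment. The shell test compares the
# current time with the creation time of /proc/1 (the container's first
# process); once the difference exceeds 86400 s, the probe fails and
# the kubelet restarts the container. Requires a date that supports -r
# (GNU coreutils or busybox).
livenessProbe:
  exec:
    command:
      - '/bin/sh'
      - '-c'
      - 'test $(( $(date +%s) - $(date +%s -r /proc/1) )) -lt 86400'
  initialDelaySeconds: 60
  periodSeconds: 3600
```

Note that this restarts only the container in place; unlike `kubectl rollout restart`, it does not create a new replica set or reschedule the pod.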