I have a HorizontalPodAutoscaler that scales my pods based on CPU. minReplicas here is set to 5:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-web
  minReplicas: 5
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
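As a quick sanity check of this HPA (assuming it lives in the production namespace like the rest of the resources below, and that the metrics pipeline is working), the current CPU utilization versus the 50% target can be inspected with:

kubectl describe hpa myapp-web -n production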
I then added CronJobs to scale my HorizontalPodAutoscaler up and down based on the time of day:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: production
  name: cron-runner
rules:
- apiGroups: ["autoscaling"]
  resources: ["horizontalpodautoscalers"]
  verbs: ["patch", "get"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cron-runner
  namespace: production
subjects:
- kind: ServiceAccount
  name: sa-cron-runner
  namespace: production
roleRef:
  kind: Role
  name: cron-runner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-cron-runner
  namespace: production
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: django-scale-up-job
  namespace: production
spec:
  schedule: "56 11 * * 1-6"
  successfulJobsHistoryLimit: 0 # Remove after successful completion
  failedJobsHistoryLimit: 1 # Retain failed jobs so that we see them
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-cron-runner
          containers:
          - name: django-scale-up-job
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            - kubectl patch hpa myapp-web --patch '{"spec":{"minReplicas":8}}'
          restartPolicy: OnFailure
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: django-scale-down-job
  namespace: production
spec:
  schedule: "30 20 * * 1-6"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 0 # Remove after successful completion
  failedJobsHistoryLimit: 1 # Retain failed jobs so that we see them
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-cron-runner
          containers:
          - name: django-scale-down-job
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            - kubectl patch hpa myapp-web --patch '{"spec":{"minReplicas":5}}'
          restartPolicy: OnFailure
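Two quick checks for these manifests, sketched under the assumption that the names above are left unchanged: first that the ServiceAccount is actually allowed to patch the HPA, then that a scheduled run really changed minReplicas:

# Verify the RBAC wiring by impersonating the CronJob's ServiceAccount
kubectl auth can-i patch horizontalpodautoscalers \
  --as=system:serviceaccount:production:sa-cron-runner -n production

# After a scheduled run, confirm the patched value on the live object
kubectl get hpa myapp-web -n production -o jsonpath='{.spec.minReplicas}'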
This works really well, except that whenever I deploy, the minReplicas value from the HorizontalPodAutoscaler spec (5, in my case) overwrites whatever the CronJobs last set.

I'm deploying the HPA with kubectl apply -f ~/autoscale.yaml.

Is there a way to handle this? Do I need to build some kind of shared logic so my deployment scripts can work out what the minReplicas value should be, or is there a simpler way to handle this?
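For reference, the kind of shared logic I have in mind would be something like this hypothetical deploy script (it assumes the HPA lives in the production namespace and falls back to 5 when the HPA does not exist yet):

#!/bin/sh
# Read the live minReplicas before re-applying the manifest
current=$(kubectl get hpa myapp-web -n production \
  -o jsonpath='{.spec.minReplicas}' 2>/dev/null || echo 5)
kubectl apply -f ~/autoscale.yaml
# Restore whatever the CronJobs last set
kubectl patch hpa myapp-web -n production --patch "{\"spec\":{\"minReplicas\":${current:-5}}}"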
2 Answers

zzlelutf1#
I think you could also consider the following two options:

1. Use Helm to manage the app's lifecycle, together with its lookup function: the main idea behind this solution is to query the state of the specific cluster resource (here the HPA) before trying to create/recreate it with the helm install/upgrade commands. In other words, check the current minReplicas value each time before upgrading your application stack (see the sketch after this list).

2. Manage the HPA resource separately from the application manifest files: here you can hand this task over to a dedicated HPA operator, which can coexist with your CronJobs and adjust minReplicas according to a specific schedule.
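A minimal sketch of the lookup idea, assuming the HPA is templated in your chart and released into the production namespace (note that lookup only returns live data during a real helm install/upgrade, not during helm template):

{{- /* Keep the live minReplicas if the HPA already exists; otherwise fall back to 5 */ -}}
{{- $hpa := lookup "autoscaling/v2beta2" "HorizontalPodAutoscaler" "production" "myapp-web" }}
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-web
  minReplicas: {{ if $hpa }}{{ $hpa.spec.minReplicas }}{{ else }}5{{ end }}
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50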
envsm3lx2#
How do I scale the CronJobs down without deleting them? Is there a command for that?

My cronjob yaml is here:
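For what it's worth, a CronJob can be paused without deleting it by flipping its spec.suspend flag; a minimal sketch, assuming the job names used above:

# Pause the scale-up job (no new Jobs are created while suspended)
kubectl patch cronjob django-scale-up-job -n production -p '{"spec":{"suspend":true}}'
# Resume it later
kubectl patch cronjob django-scale-up-job -n production -p '{"spec":{"suspend":false}}'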