Time-based scaling with Kubernetes CronJobs: how to avoid deployments overriding minReplicas

mqkwyuun asked on 2023-11-17 in Kubernetes

I have a HorizontalPodAutoscaler that scales my pods based on CPU. Its minReplicas is set to 5:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-web
  minReplicas: 5 
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50

Then I added CronJobs to scale my HorizontalPodAutoscaler up/down based on the time of day:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: production
  name: cron-runner
rules:
- apiGroups: ["autoscaling"]
  resources: ["horizontalpodautoscalers"]
  verbs: ["patch", "get"]

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cron-runner
  namespace: production
subjects:
- kind: ServiceAccount
  name: sa-cron-runner
  namespace: production
roleRef:
  kind: Role
  name: cron-runner
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-cron-runner
  namespace: production
---

apiVersion: batch/v1
kind: CronJob
metadata:
  name: django-scale-up-job
  namespace: production
spec:
  schedule: "56 11 * * 1-6"
  successfulJobsHistoryLimit: 0 # Remove after successful completion
  failedJobsHistoryLimit: 1 # Retain failed so that we see it
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-cron-runner
          containers:
          - name: django-scale-up-job
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            - kubectl patch hpa myapp-web --patch '{"spec":{"minReplicas":8}}'
          restartPolicy: OnFailure
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: django-scale-down-job
  namespace: production
spec:
  schedule: "30 20 * * 1-6"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 0 # Remove after successful completion
  failedJobsHistoryLimit: 1 # Retain failed so that we see it
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-cron-runner
          containers:
          - name: django-scale-down-job
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            - kubectl patch hpa myapp-web --patch '{"spec":{"minReplicas":5}}'
          restartPolicy: OnFailure


This works really well, except that when I now deploy, the minReplicas in the HorizontalPodAutoscaler spec (set to 5 in my case) overwrites whatever value the CronJobs last patched in.
I'm deploying the HPA with kubectl apply -f ~/autoscale.yaml.
Is there a way to handle this? Do I need some kind of shared logic so that my deployment scripts can work out what the minReplicas value should be, or is there a simpler way to handle it?

zzlelutf 1#

I think you could also consider the following two options:

Use Helm to manage the application's lifecycle via the lookup function:

The main idea behind this solution is to query the state of a given cluster resource (here the HPA) before attempting to create/recreate it with a helm install/upgrade command.

  • Helm.sh: Chart Template Guide: Functions and Pipelines: Using the lookup function

In other words, check the current minReplicas value each time before upgrading the application stack, as in the sketch below.
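
A minimal sketch of that idea, assuming the HPA from the question is templated by the chart into the production namespace (lookup is available since Helm 3.2 and returns an empty map during a dry run, which is what the else branch covers):

{{- $existing := lookup "autoscaling/v2" "HorizontalPodAutoscaler" "production" "myapp-web" }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-web
  # Keep whatever minReplicas the CronJobs last patched; fall back to the
  # default of 5 on first install (or when lookup returns an empty map).
  {{- if $existing }}
  minReplicas: {{ $existing.spec.minReplicas }}
  {{- else }}
  minReplicas: 5
  {{- end }}
  maxReplicas: 10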

Manage the HPA resource separately from the application manifest files:

Here you can hand the task over to a dedicated HPA operator, which can coexist with your CronJobs and adjust minReplicas on its own schedule; the low-tech version of the same separation is sketched after this paragraph.
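
A minimal sketch of that separation, assuming the HPA is moved out of ~/autoscale.yaml into its own hpa.yaml (the file name and flow are illustrative): create the HPA only when it does not exist yet, so the minReplicas patched by the CronJobs survives every deploy:

# Hypothetical deploy-script snippet: hpa.yaml is applied only on first
# creation; afterwards spec.minReplicas stays under the CronJobs' control.
kubectl get hpa myapp-web -n production >/dev/null 2>&1 \
  || kubectl apply -n production -f hpa.yaml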

envsm3lx 2#

How can I scale down a CronJob without deleting it? Is there such a command?
My CronJob YAML is here:

---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: tms-cron
  namespace: develop
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          imagePullSecrets:
            - name: s-jenkins
          containers:
            - name: tms-cronjob
              image: docker-c2.example.com/netcore/netcore-v:v0.BUILDVERSION
              imagePullPolicy: IfNotPresent
              envFrom:
                - secretRef:
                    name: netcore-secrets
                - configMapRef:
                    name: netcore-configmap
              command:
                - php
              args:
                - artisan
                - schedule:run
          restartPolicy: OnFailure

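If the goal is to pause the CronJob rather than delete it, the built-in spec.suspend field does that; a sketch using the tms-cron name from the manifest above:

# Suspend the CronJob: the object and its schedule stay in place, but no new
# Jobs are created while suspend is true.
kubectl patch cronjob tms-cron -n develop -p '{"spec":{"suspend":true}}'

# Set it back to false to resume scheduling.
kubectl patch cronjob tms-cron -n develop -p '{"spec":{"suspend":false}}'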
