python How to run django-apscheduler in a docker container that runs django via gunicorn

jhiyze9q  posted 12 months ago  in Python

Is it possible to run django-apscheduler in a docker container that runs django via gunicorn? The problem I currently have is that the custom manage.py command in my entrypoint script runs forever, so gunicorn never gets executed.
My entrypoint script:

#!/bin/sh
python manage.py runapscheduler --settings=core.settings_dev_docker
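
One way around this, sketched below under the assumption that the gunicorn command shown further down is what the container should end on: background the scheduler, then exec gunicorn so it becomes the container's foreground process. This is only an illustration, not the accepted answer's approach.

#!/bin/sh
# Sketch: start the scheduler in the background ...
python manage.py runapscheduler --settings=core.settings_dev_docker &
# ... then replace the shell with gunicorn so it runs in the foreground as PID 1
exec gunicorn core.wsgi:application --bind 0.0.0.0:8000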

My runapscheduler.py:

# runapscheduler.py
import logging

from django.conf import settings

from apscheduler.schedulers.blocking import BlockingScheduler
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
from django.core.management.base import BaseCommand
from django_apscheduler.jobstores import DjangoJobStore
from django_apscheduler.models import DjangoJobExecution
from django_apscheduler import util

from backend.scheduler.scheduler import scheduler

logger = logging.getLogger("backend")

def my_job():
    logger.error("Hello World!")
    # Your job processing logic here...
    pass

# The `close_old_connections` decorator ensures that database connections, that have become
# unusable or are obsolete, are closed before and after your job has run. You should use it
# to wrap any jobs that you schedule that access the Django database in any way.
@util.close_old_connections
# TODO: Change max_age to keep old jobs longer
def delete_old_job_executions(max_age=604_800):
    """
    This job deletes APScheduler job execution entries older than `max_age` from the database.
    It helps to prevent the database from filling up with old historical records that are no
    longer useful.

    :param max_age: The maximum length of time to retain historical job execution records.
                    Defaults to 7 days.
    """
    DjangoJobExecution.objects.delete_old_job_executions(max_age)

class Command(BaseCommand):
    help = "Runs APScheduler."

    def handle(self, *args, **options):
        # scheduler = BlockingScheduler(timezone=settings.TIME_ZONE)
        # scheduler.add_jobstore(DjangoJobStore(), "default")

        scheduler.add_job(
            my_job,
            trigger=CronTrigger(minute="*/1"),  # Every minute
            id="my_job",  # The `id` assigned to each job MUST be unique
            max_instances=1,
            replace_existing=True,
        )
        logger.error("Added job 'my_job'.")

        scheduler.add_job(
            delete_old_job_executions,
            trigger=CronTrigger(
                day_of_week="mon", hour="00", minute="00"
            ),  # Midnight on Monday, before start of the next work week.
            id="delete_old_job_executions",
            max_instances=1,
            replace_existing=True,
        )
        logger.error(
            "Added weekly job: 'delete_old_job_executions'."
        )

        try:
            logger.error("Starting scheduler...")
            scheduler.start()
        except KeyboardInterrupt:
            logger.error("Stopping scheduler...")
            scheduler.shutdown()
            logger.error("Scheduler shut down successfully!")


The command in my docker container looks like this:

command: gunicorn core.wsgi:application --bind 0.0.0.0:8000


How do I run runapscheduler correctly so that gunicorn also runs? Do I have to create a separate process for runapscheduler?

56lgkhnf1#

I ran into this problem and got it working. I use docker-compose to start the process, but that is not really relevant here:

version: "3.9"

services:
  app:
    container_name: django
    build: .
    command: >
      bash -c "pipenv run python manage.py makemigrations
      && pipenv run python manage.py migrate
      & wait

      pipenv run python manage.py runserver 0.0.0.0:8000
      & pipenv run python manage.py startscheduler"

    volumes:
      - ./xy:/app
    ports:
      - 8000:8000
    environment:
        - HOST=db
    depends_on:
      db:
        condition: service_healthy

The important part is where we provide the command:

  • 如果使用&&来链接命令,我的倒数第二个命令将不会退出,因此下一个命令将不会启动
  • 如果您使用&来链接它们,则两者将并行运行

I use wait to let the first block finish running, and then start the scheduler and the application's Gunicorn server together.
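
For reference, a minimal illustration of the three shell operators involved (cmd_a and cmd_b are generic placeholders, not the project's commands):

# '&&' runs cmd_b only if cmd_a exits successfully:
sh -c "cmd_a && cmd_b"
# '&' backgrounds cmd_a, so cmd_b starts immediately and the two run in parallel:
sh -c "cmd_a & cmd_b"
# 'wait' blocks until all backgrounded jobs have finished, then cmd_b runs:
sh -c "cmd_a & wait; cmd_b"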
Bonus tip: if you configure logging in settings.py (instead of relying on print), the management command's log output will show up in the same log stream as runserver.
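
As for the question's last point: a separate process is indeed the other common option. A sketch of a compose file with a dedicated scheduler service that shares the app image (the service layout and commands here are assumptions pieced together from the question, not part of the original answer):

services:
  app:
    build: .
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    ports:
      - 8000:8000
  scheduler:
    build: .
    # Hypothetical: reuse the same image, but run only the scheduler here
    command: python manage.py runapscheduler --settings=core.settings_dev_docker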
