Is it possible to run django-apscheduler inside a Docker container that serves Django via gunicorn? My current problem is that the custom manage.py command in my entrypoint script runs forever, so gunicorn never gets executed.
My entrypoint script:
#!/bin/sh
python manage.py runapscheduler --settings=core.settings_dev_docker
My runapscheduler.py:
# runapscheduler.py
import logging

from django.conf import settings

from apscheduler.schedulers.blocking import BlockingScheduler
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
from django.core.management.base import BaseCommand
from django_apscheduler.jobstores import DjangoJobStore
from django_apscheduler.models import DjangoJobExecution
from django_apscheduler import util

from backend.scheduler.scheduler import scheduler

logger = logging.getLogger("backend")


def my_job():
    logger.error("Hello World!")
    # Your job processing logic here...
    pass


# The `close_old_connections` decorator ensures that database connections,
# that have become unusable or are obsolete, are closed before and after your
# job has run. You should use it to wrap any jobs that you schedule that
# access the Django database in any way.
@util.close_old_connections
# TODO: Change max_age to keep old jobs longer
def delete_old_job_executions(max_age=604_800):
    """
    This job deletes APScheduler job execution entries older than `max_age`
    from the database. It helps to prevent the database from filling up with
    old historical records that are no longer useful.

    :param max_age: The maximum length of time to retain historical job
                    execution records. Defaults to 7 days.
    """
    DjangoJobExecution.objects.delete_old_job_executions(max_age)


class Command(BaseCommand):
    help = "Runs APScheduler."

    def handle(self, *args, **options):
        # scheduler = BlockingScheduler(timezone=settings.TIME_ZONE)
        # scheduler.add_jobstore(DjangoJobStore(), "default")

        scheduler.add_job(
            my_job,
            trigger=CronTrigger(minute="*/1"),  # Every minute
            id="my_job",  # The `id` assigned to each job MUST be unique
            max_instances=1,
            replace_existing=True,
        )
        logger.error("Added job 'my_job'.")

        scheduler.add_job(
            delete_old_job_executions,
            trigger=CronTrigger(
                day_of_week="mon", hour="00", minute="00"
            ),  # Midnight on Monday, before start of the next work week.
            id="delete_old_job_executions",
            max_instances=1,
            replace_existing=True,
        )
        logger.error("Added weekly job: 'delete_old_job_executions'.")

        try:
            logger.error("Starting scheduler...")
            scheduler.start()
        except KeyboardInterrupt:
            logger.error("Stopping scheduler...")
            scheduler.shutdown()
            logger.error("Scheduler shut down successfully!")
The command in my Docker container looks like this:
command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
How do I run runapscheduler correctly so that gunicorn also runs? Do I have to create a separate process for runapscheduler?
1 Answer
56lgkhnf1#
I ran into this problem and got it working. I use docker-compose to start the process, but that part is not essential.
The important part is the command we provide:
If I chain commands with &&, my second-to-last command never exits, so the next command never starts.
If I chain them with &, the two run in parallel.
I use wait to let the first block finish, and then start the scheduler and the gunicorn application server together.
Extra tip: if you configure logging in settings.py (instead of relying on print), the management command's log output shows up in the same log stream as runserver's.
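The difference between the two chaining operators can be seen with plain shell placeholders (echo and sleep stand in for the real manage.py and gunicorn commands):

```shell
#!/bin/sh
# '&&' is sequential: the right-hand command starts only after the
# left-hand one has exited successfully. If the left side never exits
# (like a blocking scheduler), the right side never runs -- which is
# exactly the problem described in the question.
echo "setup step" && echo "runs only after setup exits"

# '&' backgrounds the preceding command, so the next line starts
# immediately and the two run in parallel.
sleep 1 &
echo "runs while sleep is still in the background"

# 'wait' blocks until all background jobs have finished.
wait
echo "background job finished"
```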
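Putting that together, a minimal entrypoint sketch could look like the following. Only the runapscheduler and gunicorn commands come from the question; the migrate step is an assumed example of a setup block, so substitute whatever your first block actually is:

```shell
#!/bin/sh
# Hypothetical entrypoint sketch -- 'migrate' is an assumed setup step.
python manage.py migrate &
wait  # let the setup block finish before starting the long-running processes

# Background the scheduler with '&' so the script continues to gunicorn,
# which then runs in the foreground as the container's main process.
python manage.py runapscheduler --settings=core.settings_dev_docker &
gunicorn core.wsgi:application --bind 0.0.0.0:8000
```

One consequence of this design: the container's lifecycle follows gunicorn, so if the backgrounded scheduler crashes, the container keeps running; a separate service per process (one for the scheduler, one for gunicorn) avoids that if it matters for your deployment.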