How to properly daemonize celerybeat with Django



There are several questions similar to this one, but none of them has a working solution or describes exactly the same problem.

Periodic tasks work fine with my configuration if I start celery from the command line myself, like this:

celery --app=proj.mycelery worker -B

The problem starts when I try to daemonize celery. After following this tutorial, I start the service with

sudo /etc/init.d/celerybeat start

It seems to start fine, but the periodic task that is set to execute every 5 seconds never actually runs.

These are my celery settings in Django's settings.py:

BROKER_URL = 'amqp://guest:guest@localhost//'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
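
For reference, the app itself follows the standard Celery 3.x Django wiring. A minimal sketch of what proj/mycelery.py looks like (the task name and the 5-second interval are taken from the log output below; the rest is reconstructed boilerplate, not the exact file):

# proj/mycelery.py -- minimal sketch, standard Celery 3.x Django wiring
import os
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

from datetime import timedelta
from celery import Celery
from django.conf import settings

app = Celery('proj')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

# Run the test task every 5 seconds (this entry could equally live in
# settings.py as CELERYBEAT_SCHEDULE):
app.conf.update(CELERYBEAT_SCHEDULE={
    'test-every-5-seconds': {
        'task': 'reports.tasks.test_periodic_task',
        'schedule': timedelta(seconds=5),
    },
})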

This is my /etc/default/celerybeat configuration:

# Absolute or relative path to the 'celery' command:
CELERY_BIN="/home/burzum/.pyenv/versions/old_django/bin/celery"
# App instance to use
CELERY_APP="proj.mycelery"
# Where to chdir at start.
CELERYBEAT_CHDIR="/home/burzum/repos/proj/"
# Extra arguments to celerybeat
CELERYBEAT_OPTS="--schedule=/var/run/celery/celerybeat-schedule"
export DJANGO_SETTINGS_MODULE="proj.settings"
CELERYD_CHDIR="/home/burzum/repos/proj"

The /etc/init.d/celerybeat file is the same as the one from the tutorial (this tutorial). I only added the following line at the beginning:

export PYTHONPATH='/home/burzum/repos'

The output in /var/log/celery/beat.log is:

[2015-11-23 09:15:18,304: INFO/MainProcess] beat: Starting...
[2015-11-23 09:15:23,307: INFO/MainProcess] Scheduler: Sending due task reports.tasks.test_periodic_task (reports.tasks.test_periodic_task)
[2015-11-23 09:15:28,310: INFO/MainProcess] Scheduler: Sending due task reports.tasks.test_periodic_task (reports.tasks.test_periodic_task)

So it looks like the periodic task is being sent, but nothing actually happens.
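
Beat only enqueues due tasks; a separate worker has to consume them. As a sanity check (a sketch, assuming the same celery binary and app path as above), one can ask whether any worker node is online at all:

# beat only schedules; a worker must pick the tasks up. Any worker online?
/home/burzum/.pyenv/versions/old_django/bin/celery --app=proj.mycelery status

# or just look for running celery processes:
ps aux | grep '[c]elery'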

The output of sudo /etc/init.d/celerybeat status is:

celery init v10.1.
Using configuration: , /etc/default/celerybeat
celerybeat (pid 11696) is up...

The output of starting the service with sudo sh -x /etc/init.d/celerybeat start is:

+ VERSION=10.1
+ export PYTHONPATH=/home/burzum/repos
+ echo celery init v10.1.
celery init v10.1.
+ id -u
+ [ 0 -ne 0 ]
+ [ -L /etc/init.d/celerybeat ]
+ SCRIPT_FILE=/etc/init.d/celerybeat
+ basename /etc/init.d/celerybeat
+ SCRIPT_NAME=celerybeat
+ scripts=
+ test -f /etc/default/celeryd
+ EXTRA_CONFIG=/etc/default/celerybeat
+ test -f /etc/default/celerybeat
+ scripts=, /etc/default/celerybeat
+ _config_sanity /etc/default/celerybeat
+ local path=/etc/default/celerybeat
+ ls -ld /etc/default/celerybeat
+ awk {print $3}
+ local owner=root
+ ls -ld /etc/default/celerybeat+ 
cut -b 6
+ local iwgrp=-
+ ls -ld+  /etc/default/celerybeat
cut -b 9
+ local iwoth=-
+ id -u root
+ [ 0 != 0 ]
+ [ - != - ]
+ [ - != - ]
+ . /etc/default/celerybeat
+ CELERY_BIN=/home/burzum/.pyenv/versions/old_django/bin/celery
+ CELERY_APP=proj.mycelery
+ CELERYBEAT_CHDIR=/home/burzum/repos/proj/
+ CELERYBEAT_OPTS=--schedule=/var/run/celery/celerybeat-schedule
+ export DJANGO_SETTINGS_MODULE=proj.settings
+ CELERYD_CHDIR=/home/burzum/repos/proj
+ echo Using configuration: , /etc/default/celerybeat
Using configuration: , /etc/default/celerybeat
+ CELERY_BIN=/home/burzum/.pyenv/versions/old_django/bin/celery
+ DEFAULT_USER=celery
+ DEFAULT_PID_FILE=/var/run/celery/beat.pid
+ DEFAULT_LOG_FILE=/var/log/celery/beat.log
+ DEFAULT_LOG_LEVEL=INFO
+ DEFAULT_CELERYBEAT=/home/burzum/.pyenv/versions/old_django/bin/celery beat
+ CELERYBEAT=/home/burzum/.pyenv/versions/old_django/bin/celery beat
+ CELERYBEAT_LOG_LEVEL=INFO
+ CELERY_APP_ARG=
+ [ ! -z proj.mycelery ]
+ CELERY_APP_ARG=--app=proj.mycelery
+ CELERYBEAT_USER=celery
+ CELERY_CREATE_DIRS=0
+ CELERY_CREATE_RUNDIR=0
+ CELERY_CREATE_LOGDIR=0
+ [ -z  ]
+ CELERYBEAT_PID_FILE=/var/run/celery/beat.pid
+ CELERY_CREATE_RUNDIR=1
+ [ -z  ]
+ CELERYBEAT_LOG_FILE=/var/log/celery/beat.log
+ CELERY_CREATE_LOGDIR=1
+ export CELERY_LOADER
+ CELERYBEAT_OPTS=--schedule=/var/run/celery/celerybeat-schedule -f /var/log/celery/beat.log -l INFO
+ [ -n  ]
+ dirname /var/log/celery/beat.log
+ CELERYBEAT_LOG_DIR=/var/log/celery
+ dirname /var/run/celery/beat.pid
+ CELERYBEAT_PID_DIR=/var/run/celery
+ CELERYBEAT_CHDIR=/home/burzum/repos/proj/
+ [ -n /home/burzum/repos/proj/ ]
+ DAEMON_OPTS= --workdir=/home/burzum/repos/proj/
+ export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/sbin:/sbin
+ check_dev_null
+ [ ! -c /dev/null ]
+ check_paths
+ [ 1 -eq 1 ]
+ create_default_dir /var/log/celery
+ [ ! -d /var/log/celery ]
+ [ 1 -eq 1 ]
+ create_default_dir /var/run/celery
+ [ ! -d /var/run/celery ]
+ start_beat
+ echo Starting celerybeat...
Starting celerybeat...
+ _chuid --app=proj.mycelery --schedule=/var/run/celery/celerybeat-schedule -f /var/log/celery/beat.log -l INFO --workdir=/home/burzum/repos/proj/ --detach --pidfile=/var/run/celery/beat.pid
+ su celery -c /home/burzum/.pyenv/versions/old_django/bin/celery beat --app=proj.mycelery --schedule=/var/run/celery/celerybeat-schedule -f /var/log/celery/beat.log -l INFO --workdir=/home/burzum/repos/proj/ --detach --pidfile=/var/run/celery/beat.pid
+ exit 0

Finally, this is a working example:

  1. Create celery_service.conf
  2. Create celery.service

Run the service with systemctl (on CentOS, for example). That's it.
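
The systemctl part is standard, for example (assuming the unit is named celery.service):

sudo systemctl daemon-reload          # pick up new or changed unit files
sudo systemctl enable celery.service  # start on boot
sudo systemctl start celery.service
sudo systemctl status celery.service  # verify it is running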

I have added helpful scripts here:

  • https://github.com/trianglesis/celery-heplers

I didn't have a proper answer to this either, and I read plenty of tutorials that all copy&pasted the same remark:

Remark: -B is meant for development purposes. For production environments you need to start beat separately.

No one actually shows a real example of how to achieve this. I thought the option

CELERYD_NODES="beat"

would do it, but it did not. So instead, simply adding the -B or --beat option to your "celeryd" configuration turns that separate worker into a real beat:

CELERYD_OPTS="--beat --scheduler=django_celery_beat.schedulers:DatabaseScheduler"
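
Put together, a sketch of the relevant /etc/default/celeryd lines (the node name and app are placeholders; the DatabaseScheduler assumes django-celery-beat is installed):

# /etc/default/celeryd -- sketch, names are placeholders
CELERYD_NODES="worker1"
CELERY_APP="proj.mycelery"
# --beat makes this worker node run the beat scheduler as well:
CELERYD_OPTS="--beat --scheduler=django_celery_beat.schedulers:DatabaseScheduler"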

UPD: I found out that this is a documentation issue, described here: https://github.com/celery/celery/issues/4304

Here you can see an example of a correct beat implementation via systemd: https://specialistoff.net/question/238

I have checked it just now: beat is found and working. It will need some more time for testing, though.

I assume you have a Linux server with systemd. Use systemd, not the "generic init scripts".

There are docs for systemd, but they sit below the old init.d instructions on the page:

http://docs.celeryproject.org/en/master/userguide/daemonizing.html#usage-systemd

Quoting the docs:

This is an example systemd file:

/etc/systemd/system/celery.service:

[Unit]
Description=Celery Service
After=network.target
[Service]
Type=forking
User=celery
Group=celery
EnvironmentFile=/etc/conf.d/celery
WorkingDirectory=/opt/celery
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
  -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
  --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} \
  --pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} \
  -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
  --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
[Install]
WantedBy=multi-user.target
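
The same docs page also shows a separate unit for beat. A sketch along those lines, assuming the CELERYBEAT_* variables are defined in the same EnvironmentFile:

/etc/systemd/system/celerybeat.service:

[Unit]
Description=Celery Beat Service
After=network.target

[Service]
Type=simple
User=celery
Group=celery
EnvironmentFile=/etc/conf.d/celery
WorkingDirectory=/opt/celery
ExecStart=/bin/sh -c '${CELERY_BIN} beat -A ${CELERY_APP} \
  --pidfile=${CELERYBEAT_PID_FILE} \
  --logfile=${CELERYBEAT_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL}'

[Install]
WantedBy=multi-user.target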
