I'm hoping someone can help me, because I've looked through Stack Overflow and can't find a solution to my problem. I'm running a Django project and have Supervisor, RabbitMQ and Celery installed. RabbitMQ is up and running, and Supervisor is keeping my celerybeat process alive; however, although beat logs that it has started and is sending the task every 5 minutes (see below), the task is never actually executed:
My Supervisor config for beat:
[program:nrv_twitter]
; Set full path to celery program if using virtualenv
command=/Users/tsantor/.virtualenvs/nrv_env/bin/celery beat -A app --loglevel=INFO --pidfile=/tmp/nrv-celerybeat.pid --schedule=/tmp/nrv-celerybeat-schedule
; Project dir
directory=/Users/tsantor/Projects/NRV/nrv
; Logs
stdout_logfile=/Users/tsantor/Projects/NRV/nrv/logs/celerybeat_twitter.log
redirect_stderr=true
autorestart=true
autostart=true
startsecs=10
user=tsantor
; if rabbitmq is supervised, set its priority higher so it starts first
priority=999
Here is the log output from the program above:
[2014-12-16 20:29:42,293: INFO/MainProcess] beat: Starting...
[2014-12-16 20:34:08,161: INFO/MainProcess] Scheduler: Sending due task gettweets-every-5-mins (twitter.tasks.get_tweets)
[2014-12-16 20:39:08,186: INFO/MainProcess] Scheduler: Sending due task gettweets-every-5-mins (twitter.tasks.get_tweets)
[2014-12-16 20:44:08,204: INFO/MainProcess] Scheduler: Sending due task gettweets-every-5-mins (twitter.tasks.get_tweets)
[2014-12-16 20:49:08,205: INFO/MainProcess] Scheduler: Sending due task gettweets-every-5-mins (twitter.tasks.get_tweets)
[2014-12-16 20:54:08,223: INFO/MainProcess] Scheduler: Sending due task gettweets-every-5-mins (twitter.tasks.get_tweets)
Here is my celery.py settings file:
from datetime import timedelta
BROKER_URL = 'amqp://guest:guest@localhost//'
CELERY_DISABLE_RATE_LIMITS = True
CELERYBEAT_SCHEDULE = {
    'gettweets-every-5-mins': {
        'task': 'twitter.tasks.get_tweets',
        'schedule': timedelta(seconds=300)  # 300 seconds = every 5 minutes
    },
}
Here is my celeryapp.py:
from __future__ import absolute_import
import os
from django.conf import settings
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'app.settings')
app = Celery('app')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
Here is my twitter/tasks.py:
from __future__ import absolute_import
import logging
from celery import shared_task
from twitter.views import IngestTweets
log = logging.getLogger('custom.log')
@shared_task
def get_tweets():
    """
    Get tweets and save them to the DB
    """
    instance = IngestTweets()
    instance.get_new_tweets()
    log.info('Successfully ingested tweets via celery task')
    return True
The get_tweets task is never executed. However, I know the task itself works, because I can run get_tweets manually and it works fine.
I've spent two days trying to figure out why beat sends the tasks as it should, yet they are never executed. Any help would be greatly appreciated. Thanks in advance.
> user2097159 thanks for pointing me in the right direction — I didn't realize I also had to run a worker via Supervisor. I thought it was either a worker or beat, but now I understand that I need a worker to process the tasks and a beat to trigger them periodically.
Here is the worker config that was missing from Supervisor:
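That division of labour can be sketched with a toy producer/consumer using only the standard library (nothing here is Celery-specific; all names are made up for illustration): "beat" only puts task messages on a queue, and nothing runs until a "worker" consumes them.

```python
import queue
import threading

task_queue = queue.Queue()  # stands in for the RabbitMQ broker
results = []

def beat(n):
    """Like celery beat: only *sends* task messages, never runs them."""
    for _ in range(n):
        task_queue.put('twitter.tasks.get_tweets')
    task_queue.put(None)  # sentinel so the worker knows to stop

def worker():
    """Like a celery worker: consumes messages and actually runs the tasks."""
    while True:
        msg = task_queue.get()
        if msg is None:
            break
        results.append('executed ' + msg)

t = threading.Thread(target=worker)
t.start()
beat(3)
t.join()
print(results)
```

With beat alone, the messages would simply accumulate in the queue — which is exactly what the log output above shows.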
[program:nrv_celery_worker]
; Worker
command=/Users/tsantor/.virtualenvs/nrv_env/bin/celery worker -A app --loglevel=INFO
; Project dir
directory=/Users/tsantor/Projects/NRV/nrv
; Logs
stdout_logfile=/Users/tsantor/Projects/NRV/nrv/logs/celery_worker.log
redirect_stderr=true
autostart=true
autorestart=true
startsecs=10
user=tsantor
numprocs=1
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
I then reset the RabbitMQ queues. Now that Supervisor is managing both the beat and worker programs, everything works as expected. Hope this helps someone else.
You need to start both a worker process and a beat process. You can either create separate processes as described in tsantor's answer, or create a single process that runs both the worker and beat. This can be more convenient during development (but is not recommended for production).
From "Starting the scheduler" in the Celery documentation:
You can also embed beat inside the worker by enabling the workers -B option, this is convenient if you'll never run more than one worker node, but it's not commonly used and for that reason isn't recommended for production use:
$ celery -A proj worker -B
For example Supervisor configuration files, see https://github.com/celery/celery/tree/master/extra/supervisord/ (linked from "Daemonization").
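For a development box, the embedded-beat approach could itself be run under Supervisor as a single program. A minimal sketch, reusing the paths from the question (the program name and log filename are assumptions; adjust to your environment):

```
[program:nrv_celery_worker_beat]
; Single process running both the worker and embedded beat (dev only)
command=/Users/tsantor/.virtualenvs/nrv_env/bin/celery worker -A app -B --loglevel=INFO
directory=/Users/tsantor/Projects/NRV/nrv
stdout_logfile=/Users/tsantor/Projects/NRV/nrv/logs/celery_worker_beat.log
redirect_stderr=true
autostart=true
autorestart=true
startsecs=10
user=tsantor
```

Note that with -B only one such process may run, since multiple embedded beats would each send the scheduled tasks.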