Celery + Redis - Django blocks when triggering a task with delay()



I set up Celery with Redis as the broker on my Django project. Scheduled (beat) tasks run without problems. The issue appears when triggering an asynchronous task with delay(): execution hangs, as if blocked in the retry loop in kombu.utils.retry_over_time.

I checked that Redis is up and running, but I don't know how to debug this further.
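As a first sanity check, here is a minimal sketch (standard library only; host and port taken from the CELERY_REDIS_* settings below) that verifies the broker port is actually reachable from the machine running Django, which is a stronger claim than "Redis is running":

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. can_connect('localhost', 6379) -- values from CELERY_REDIS_HOST/PORT
```

If this returns False, delay() will sit in kombu's connection-retry loop exactly as described.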

Here are the relevant package versions:

Django==2.1.2
celery==4.2.1
django-celery-beat==1.4.0
django-celery-results==1.0.4
redis==3.2.0
kombu==4.4.0

Settings:

CELERY_REDIS_HOST = 'localhost'
CELERY_REDIS_PORT = 6379
CELERY_REDIS_DB = 1 # Redis DB number; defaults to 0 if not provided
CELERY_REDIS_PASSWORD = ''
CELERY_BEAT_SCHEDULER = 'django_celery_beat.schedulers:DatabaseScheduler'
CELERY_BROKER_URL = 'redis://{host}:{port}/{db}'.format(host=CELERY_REDIS_HOST, port=CELERY_REDIS_PORT, db=CELERY_REDIS_DB)
CELERY_RESULT_BACKEND = 'django-db'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json' # Result serialization format
CELERY_TASK_SERIALIZER = 'json' # String identifying the serializer to be used
CELERY_BROKER_TRANSPORT_OPTIONS = {
    'visibility_timeout': 3600, # 1 hour, default Redis visibility timeout
}
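For reference, a minimal config/celery.py sketch that would match the `config.celery.celery_app:app` module path used in the supervisor command below. The module layout, project name, and settings path are assumptions inferred from the script, not confirmed by the question:

```python
# config/celery.py -- minimal sketch; the namespace='CELERY' argument makes
# Celery read the CELERY_-prefixed settings shown above
import os

from celery import Celery

# Assumed Django settings module path; adjust to the real project layout
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings')

app = Celery('splityou')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
```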

How celery and celery beat are started

Shell script that adds celery and celery beat to supervisor:

#!/usr/bin/env bash
# Create required directories
sudo mkdir -p /var/log/celery/
sudo mkdir -p /var/run/celery/
# Create group called 'celery'
sudo groupadd -f celery
# add the user 'celery' if it doesn't exist and add it to the group with same name
id -u celery &>/dev/null || sudo useradd -g celery celery
# add permissions to the celery user for r+w to the folders just created
sudo chown -R celery:celery /var/log/celery/
sudo chown -R celery:celery /var/run/celery/
# Get django environment variables
celeryenv=`cat ./env_vars | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
celeryenv=${celeryenv%?}
# Create CELERY configuration script
celeryconf="[program:celeryd]
directory=/home/ubuntu/splityou/splityou
; Set full path to celery program if using virtualenv
command=/home/ubuntu/splityou/splityou-env/bin/celery worker -A config.celery.celery_app:app --loglevel=INFO --logfile=/var/log/celery/%%n%%I.log --pidfile=/var/run/celery/%%n.pid
user=celery
numprocs=1
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 60
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=$celeryenv"

# Create CELERY BEAT configuration script
celerybeatconf="[program:celerybeat]
; Set full path to celery program if using virtualenv
command=/home/ubuntu/splityou/splityou-env/bin/celery beat -A config.celery.celery_app:app --loglevel=INFO --logfile=/var/log/celery/celery-beat.log --pidfile=/var/run/celery/celery-beat.pid
directory=/home/ubuntu/splityou/splityou
user=celery
numprocs=1
stdout_logfile=/var/log/celerybeat.log
stderr_logfile=/var/log/celerybeat.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 60
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=999
environment=$celeryenv"
# Create the celery supervisord conf script
echo "$celeryconf" | sudo tee /etc/supervisor/conf.d/celery.conf
echo "$celerybeatconf" | sudo tee /etc/supervisor/conf.d/celerybeat.conf
# Enable supervisor to listen for HTTP/XML-RPC requests.
# supervisorctl will use XML-RPC to communicate with supervisord over port 9001.
# Source: https://askubuntu.com/questions/911994/supervisorctl-3-3-1-http-localhost9001-refused-connection
if ! grep -Fxq "[inet_http_server]" /etc/supervisor/supervisord.conf
  then
    echo "[inet_http_server]" | sudo tee -a /etc/supervisor/supervisord.conf
    echo "port = 127.0.0.1:9001" | sudo tee -a /etc/supervisor/supervisord.conf
fi
# Reread the supervisord config
sudo supervisorctl reread
# Update supervisord in cache without restarting all services
sudo supervisorctl update
# Sleep for 15 seconds to give enough time to previous supervisor instance to shutdown
# Source: https://stackoverflow.com/questions/50135628/celery-django-on-elastic-beanstalk-causing-error-class-xmlrpclib-fault/50154073#50154073
sleep 15
# Start/Restart celeryd through supervisord
sudo supervisorctl restart celeryd
sudo supervisorctl restart celerybeat
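The env_vars-to-supervisor munging near the top of the script can be sanity-checked in isolation. A sketch using a hypothetical two-line env_vars file (note the escaped newline in tr '\n' ','):

```shell
# Hypothetical env_vars content; the pipeline below mirrors the script
printf 'export DJANGO_SETTINGS_MODULE=config.settings\nexport PATH=$PATH:/usr/local/bin\n' > /tmp/env_vars
celeryenv=$(cat /tmp/env_vars | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g')
celeryenv=${celeryenv%?}   # strip the trailing comma
echo "$celeryenv"
# -> DJANGO_SETTINGS_MODULE=config.settings,PATH=%(ENV_PATH)s:/usr/local/bin
```

The result is a comma-separated KEY=VALUE list in the format supervisor expects for its environment= option.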

As the first steps of the Django Celery tutorial point out, the app object must be imported in the proj/__init__.py module. This ensures the app is always loaded when Django starts, so that @shared_task binds to it. Without that import, tasks fall back to a default, unconfigured app, and delay() keeps retrying a broker connection with the default settings, which is exactly the retry_over_time loop described above.

I had completely forgotten this step, so I solved the problem by putting the following into __init__.py:

from __future__ import absolute_import, unicode_literals
from .celery import app as celery_app
__all__ = ('celery_app',)
