django-celery as a systemd service in virtualenv



Is it possible to daemonize django-celery as a systemd service for a project running in a virtualenv?

Here is my configuration:

/etc/systemd/system/celery.service

[Unit]
Description=Celery Service
After=network.target

[Service]
Type=forking
User=vagrant
Group=vagrant
Restart=no
WorkingDirectory=/vagrant/myproj/
ExecStart=/bin/sh -c '/var/www/vhost/myproj_env/bin/python \
    /vagrant/myproj/manage.py celery worker \
    --loglevel=DEBUG \
    --logfile=/var/log/celery/worker.log \
    --pidfile=/var/run/celery/worker.pid \
    -Q availability,celery --time-limit=300'
ExecStop=/bin/sh -c '/var/www/vhost/myproj_env/bin/python \
    /vagrant/myproj/manage.py celery stop \
    --pidfile=/var/run/celery/worker.pid'

[Install]
WantedBy=multi-user.target

All the directories referenced here exist, and their permissions are set correctly.
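Roughly what it takes to prepare such directories (a sketch, not the exact commands from my setup; the user/group and paths are taken from the unit file above):

# create the log and pid directories used by the worker
sudo mkdir -p /var/log/celery /var/run/celery
# hand them over to the user/group the service runs as
sudo chown vagrant:vagrant /var/log/celery /var/run/celery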

django-celery in settings.py:

INSTALLED_APPS = (
    ...
    'djcelery',
    'celery_haystack',
    ...
)
import djcelery
djcelery.setup_loader()
CELERYBEAT_SCHEDULER = "djcelery.schedulers.DatabaseScheduler"
BROKER_URL = "redis://localhost:6379/0"
CELERY_RESULT_BACKEND = BROKER_URL
CELERY_REDIS_MAX_CONNECTIONS = 30
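As an illustration of the kind of task the worker picks up, a minimal hypothetical example (myapp and add are placeholder names, not part of the actual project; djcelery.setup_loader() auto-discovers tasks.py modules of apps listed in INSTALLED_APPS):

# myapp/tasks.py -- hypothetical task, only to illustrate the setup
from celery import shared_task

@shared_task
def add(x, y):
    return x + y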

The following command starts celery just fine, and I can see tasks being executed:

python manage.py celery worker --loglevel=INFO -Q availability,celery

This is essentially the same as what the service unit specifies, except that it logs to stdout.

However, when I try systemctl start celery.service, it just fails silently: systemctl status celery.service reports inactive (dead).

Any hints on this issue would be greatly appreciated. I am probably missing something obvious, although I feel this process should not be as complicated as it has turned out to be :)

UPDATE

The celery log shows that celery starts up normally, but for some reason systemd does not accept it. Here is a snippet from the celery log with --loglevel=DEBUG:

[2017-07-09 21:55:09,435: DEBUG/MainProcess] | Worker: Preparing bootsteps.
[2017-07-09 21:55:09,439: DEBUG/MainProcess] | Worker: Building graph...
[2017-07-09 21:55:09,439: DEBUG/MainProcess] | Worker: New boot order: {Beat, Timer, Hub, Queues (intra), Pool, Autoreloader, StateDB, Autoscaler, Consumer}
...
[2017-07-09 21:55:10,696: DEBUG/MainProcess] ^-- substep ok
[2017-07-09 21:55:10,696: DEBUG/MainProcess] | Consumer: Starting Heart
[2017-07-09 21:55:10,697: DEBUG/MainProcess] ^-- substep ok
[2017-07-09 21:55:10,697: DEBUG/MainProcess] | Consumer: Starting event loop
[2017-07-09 21:55:10,697: WARNING/MainProcess] /var/www/vhost/myproj_env/local/lib/python2.7/site-packages/djcelery/loaders.py:130: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warn('Using settings.DEBUG leads to a memory leak, never '
[2017-07-09 21:55:10,698: WARNING/MainProcess] celery@vagrant ready.
[2017-07-09 21:55:10,698: DEBUG/MainProcess] | Worker: Hub.register Pool...
[2017-07-09 21:55:10,699: DEBUG/MainProcess] basic.qos: prefetch_count->4

It even processes tasks from the queue!

But after 30 seconds or so, systemd decides it cannot tell whether the worker is actually running properly and shuts it down:

Job for celery.service failed because a timeout was exceeded. See "systemctl status celery.service" and "journalctl -xe" for details.
vagrant@vagrant:~$ sudo systemctl status celery.service
● celery.service - Celery Service
   Loaded: loaded (/etc/systemd/system/celery.service; enabled; vendor preset: enabled)
   Active: failed (Result: timeout) since Sun 2017-07-09 21:56:38 UTC; 3min 0s ago
  Process: 3139 ExecStart=/bin/sh -c /var/www/vhost/myproj_env/bin/python /vagrant/myproj/manage.py c
    Tasks: 0
   Memory: 47.4M
      CPU: 1.532s
Jul 09 21:55:08 vagrant systemd[1]: Starting Celery Service...
Jul 09 21:56:38 vagrant systemd[1]: celery.service: Start operation timed out. Terminating.
Jul 09 21:56:38 vagrant systemd[1]: Failed to start Celery Service.
Jul 09 21:56:38 vagrant systemd[1]: celery.service: Unit entered failed state.
Jul 09 21:56:38 vagrant systemd[1]: celery.service: Failed with result 'timeout'.

Note: to keep things simple, I reduced the number of workers to one and edited the conf file accordingly. That made the problem clearer (it is not a problem in celery, but presumably in systemd), but I still could not tell what was causing the timeout.

Managed to fix it by removing Type=forking from /etc/systemd/system/celery.service. With Type=forking, systemd expects the ExecStart process to fork and exit while a child carries on as the daemon; celery worker started this way stays in the foreground, so systemd keeps waiting for a daemonization that never happens and eventually times out the start operation.
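For reference, a sketch of the [Service] section after that change (the rest of the unit stays as above; with no Type= given, systemd falls back to Type=simple and simply tracks the foreground worker process):

[Service]
User=vagrant
Group=vagrant
Restart=no
WorkingDirectory=/vagrant/myproj/
ExecStart=/bin/sh -c '/var/www/vhost/myproj_env/bin/python \
    /vagrant/myproj/manage.py celery worker \
    --loglevel=DEBUG \
    --logfile=/var/log/celery/worker.log \
    --pidfile=/var/run/celery/worker.pid \
    -Q availability,celery --time-limit=300'

With the default type, the ExecStop= line is probably unnecessary as well: on systemctl stop, systemd sends SIGTERM to the main process, which triggers celery's normal warm shutdown.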
