Celery worker crashes when I start another worker



I'm using Django with Celery and Redis. I want to have three queues and three workers.

My Celery settings in settings.py look like this:

from kombu import Exchange, Queue

CELERY_BROKER_URL = 'redis://localhost:6379'
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Europe/Berlin'
# CELERY QUEUES SETUP
CELERY_DEFAULT_QUEUE = 'default'
CELERY_DEFAULT_ROUTING_KEY = 'default'
CELERY_TASK_QUEUES = (
    Queue('default', Exchange('default'), routing_key='default'),
    Queue('manually_crawl', Exchange('manually_crawl'), routing_key='manually_crawl'),
    Queue('periodically_crawl', Exchange('periodically_crawl'), routing_key='periodically_crawl'),
)
CELERY_ROUTES = {
    'api.tasks.crawl_manually': {'queue': 'manually_crawl', 'routing_key': 'manually_crawl'},
    'api.tasks.crawl_periodically': {'queue': 'periodically_crawl', 'routing_key': 'periodically_crawl'},
    'api.tasks.crawl_firsttime': {'queue': 'default', 'routing_key': 'default'},
}
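For context, the worker command uses -A proj; a minimal sketch of how proj/celery.py and api/tasks.py typically look with this configuration (the @shared_task style and the empty task bodies are assumptions; only the module and task names come from CELERY_ROUTES above):

# proj/celery.py -- minimal sketch of the standard Django/Celery app module
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

app = Celery('proj')
app.config_from_object('django.conf:settings', namespace='CELERY')  # picks up the CELERY_* settings above
app.autodiscover_tasks()

# api/tasks.py -- only the task names are taken from CELERY_ROUTES; the bodies are placeholders
from celery import shared_task

@shared_task
def crawl_manually():
    pass

@shared_task
def crawl_periodically():
    pass

@shared_task
def crawl_firsttime():
    pass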

Later I will start the workers with celery multi, but during development I want to start them manually so I can see the errors.
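Something like the following celery multi invocation is what I have in mind for later (a sketch only, not tested; the node names are placeholders):

celery multi start default_worker manually_crawl periodically_crawl -A proj -l info \
    -Q:default_worker default -Q:manually_crawl manually_crawl -Q:periodically_crawl periodically_crawl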

I start the Redis server with redis-server and then start the first worker, for the default queue, with:

celery -A proj worker -Q default -l debug -n default_worker

If I try to start the next worker in a new terminal:

celery -A proj worker -Q manually_crawl -l debug -n manually_crawl

I get this error in the terminal of the first (default) worker:

[2019-10-28 09:32:58,284: INFO/MainProcess] sync with celery@manually_crawl
[2019-10-28 09:32:58,290: ERROR/MainProcess] Control command error: OperationalError("\nCannot route message for exchange 'reply.celery.pidbox': Table empty or key no longer exists.\nProbably the key ('_kombu.binding.reply.celery.pidbox') has been removed from the Redis database.\n")
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/kombu/connection.py", line 439, in _reraise_as_library_errors
yield
File "/usr/local/lib/python3.7/dist-packages/kombu/connection.py", line 518, in _ensured
return fun(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/kombu/messaging.py", line 203, in _publish
mandatory=mandatory, immediate=immediate,
File "/usr/local/lib/python3.7/dist-packages/kombu/transport/virtual/base.py", line 605, in basic_publish
message, exchange, routing_key, **kwargs
File "/usr/local/lib/python3.7/dist-packages/kombu/transport/virtual/exchange.py", line 70, in deliver
for queue in _lookup(exchange, routing_key):
File "/usr/local/lib/python3.7/dist-packages/kombu/transport/redis.py", line 877, in _lookup
exchange, redis_key))
kombu.exceptions.InconsistencyError:
Cannot route message for exchange 'reply.celery.pidbox': Table empty or key no longer exists.
Probably the key ('_kombu.binding.reply.celery.pidbox') has been removed from the Redis database.

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/celery/worker/pidbox.py", line 46, in on_message
self.node.handle_message(body, message)
File "/usr/local/lib/python3.7/dist-packages/kombu/pidbox.py", line 145, in handle_message
return self.dispatch(**body)
File "/usr/local/lib/python3.7/dist-packages/kombu/pidbox.py", line 115, in dispatch
ticket=ticket)
File "/usr/local/lib/python3.7/dist-packages/kombu/pidbox.py", line 151, in reply
serializer=self.mailbox.serializer)
File "/usr/local/lib/python3.7/dist-packages/kombu/pidbox.py", line 285, in _publish_reply
**opts
File "/usr/local/lib/python3.7/dist-packages/kombu/messaging.py", line 181, in publish
exchange_name, declare,
File "/usr/local/lib/python3.7/dist-packages/kombu/connection.py", line 551, in _ensured
errback and errback(exc, 0)
File "/usr/lib/python3.7/contextlib.py", line 130, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.7/dist-packages/kombu/connection.py", line 444, in _reraise_as_library_errors
sys.exc_info()[2])
File "/usr/local/lib/python3.7/dist-packages/vine/five.py", line 194, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.7/dist-packages/kombu/connection.py", line 439, in _reraise_as_library_errors
yield
File "/usr/local/lib/python3.7/dist-packages/kombu/connection.py", line 518, in _ensured
return fun(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/kombu/messaging.py", line 203, in _publish
mandatory=mandatory, immediate=immediate,
File "/usr/local/lib/python3.7/dist-packages/kombu/transport/virtual/base.py", line 605, in basic_publish
message, exchange, routing_key, **kwargs
File "/usr/local/lib/python3.7/dist-packages/kombu/transport/virtual/exchange.py", line 70, in deliver
for queue in _lookup(exchange, routing_key):
File "/usr/local/lib/python3.7/dist-packages/kombu/transport/redis.py", line 877, in _lookup
exchange, redis_key))
kombu.exceptions.OperationalError:
Cannot route message for exchange 'reply.celery.pidbox': Table empty or key no longer exists.
Probably the key ('_kombu.binding.reply.celery.pidbox') has been removed from the Redis database.

Why does this happen?

According to this GitHub issue, there is currently a problem in the kombu library; downgrading to 4.6.4 or, for some people, 4.6.3 solved it:

jorijinnall commented 11 days ago
Had the same issue.
I fixed by downgrading kombu from 4.6.5 to 4.6.3
I still had the bug in version 4.6.4

GitHub link
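If this is the cause in your case, pinning kombu to a known-good version is a quick workaround until a fixed release is available, e.g. (assuming you install dependencies with pip):

pip install 'kombu==4.6.3'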

You can start multiple workers as follows:

$ celery -A proj worker -l info --concurrency=4 -n wkr1@hostname
$ celery -A proj worker -l info --concurrency=2 -n wkr2@hostname
$ celery -A proj worker -l info --concurrency=2 -n wkr3@hostname

In the example above, the three workers will be able to spawn 4, 2, and 2 child processes respectively. It is generally recommended to run one worker per machine; the concurrency value defines how many processes run in parallel.

By default, the number of these processes equals the number of cores on that machine.
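Once the workers are up, you can verify which queues each worker actually consumes with Celery's built-in inspect command (using the same proj app as above):

celery -A proj inspect active_queues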

I hope this helps.
