I have been trying to get Celery working in a Flask project. The Flask project uses the application factory pattern, which caused problems with the application context and circular imports while I was trying to get tasks working.
I found this answer for the setup and got Celery running: it can see and register my tasks, and the tasks can be called and do show up in the message queue. With a lot of work I could even get them to record failures (so far only revoked ones) in the redis result backend.
The tasks themselves have no errors, and I am not getting failures from them. I can send a task with bad data and get the error code back.
The Celery app is set up to use RabbitMQ as the broker and redis as the result backend, and both appear to be working. I can log into RabbitMQ and see the messages going into the queue, and see the worker connected to the queue. I can also see some results ending up in redis.
I am not sure which code would be helpful to include with this question. I do have some logging output that I think shows the problem, but I don't know how to troubleshoot it further. An excerpt from the worker's debug log:
-------------- celery@sy-devsophia v4.3.0 (rhubarb)
---- **** -----
--- * *** * -- Linux-3.10.0-1062.el7.x86_64-x86_64-with-redhat-7.6-Maipo 2019-10-14 18:13:20
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: janus:0x7f0f2b715a58
- ** ---------- .> transport: amqp://janus:**@localhost:5672/janus
- ** ---------- .> results: redis://localhost:6379/0
- *** --- * --- .> concurrency: 1 (prefork)
-- ******* ---- .> task events: ON
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. celery.accumulate
. celery.backend_cleanup
. celery.chain
. celery.chord
. celery.chord_unlock
. celery.chunks
. celery.group
. celery.map
. celery.starmap
. janus.workers.imports.course_import
. janus.workers.reports.run_published_report
[2019-10-14 18:13:20,356: DEBUG/MainProcess] | Worker: Starting Hub
[2019-10-14 18:13:20,356: DEBUG/MainProcess] ^-- substep ok
[2019-10-14 18:13:20,356: DEBUG/MainProcess] | Worker: Starting Pool
[2019-10-14 18:13:20,442: DEBUG/MainProcess] ^-- substep ok
[2019-10-14 18:13:20,443: DEBUG/MainProcess] | Worker: Starting Consumer
[2019-10-14 18:13:20,444: DEBUG/MainProcess] | Consumer: Starting Connection
[2019-10-14 18:13:20,501: INFO/MainProcess] Connected to amqp://janus:**@localhost:5672/janus
[2019-10-14 18:13:20,501: DEBUG/MainProcess] ^-- substep ok
[2019-10-14 18:13:20,501: DEBUG/MainProcess] | Consumer: Starting Events
[2019-10-14 18:13:20,547: DEBUG/MainProcess] ^-- substep ok
[2019-10-14 18:13:20,547: DEBUG/MainProcess] | Consumer: Starting Mingle
[2019-10-14 18:13:20,547: INFO/MainProcess] mingle: searching for neighbors
[2019-10-14 18:13:21,608: INFO/MainProcess] mingle: all alone
[2019-10-14 18:13:21,608: DEBUG/MainProcess] ^-- substep ok
[2019-10-14 18:13:21,608: DEBUG/MainProcess] | Consumer: Starting Tasks
[2019-10-14 18:13:21,615: DEBUG/MainProcess] ^-- substep ok
[2019-10-14 18:13:21,615: DEBUG/MainProcess] | Consumer: Starting Control
[2019-10-14 18:13:21,619: DEBUG/MainProcess] ^-- substep ok
[2019-10-14 18:13:21,619: DEBUG/MainProcess] | Consumer: Starting Gossip
[2019-10-14 18:13:21,624: DEBUG/MainProcess] ^-- substep ok
[2019-10-14 18:13:21,624: DEBUG/MainProcess] | Consumer: Starting Heart
[2019-10-14 18:13:21,626: DEBUG/MainProcess] ^-- substep ok
[2019-10-14 18:13:21,626: DEBUG/MainProcess] | Consumer: Starting event loop
[2019-10-14 18:13:21,626: DEBUG/MainProcess] | Worker: Hub.register Pool...
[2019-10-14 18:13:21,629: INFO/MainProcess] celery@sy-devsophia ready.
[2019-10-14 18:13:51,174: INFO/MainProcess] Received task: janus.workers.reports.run_published_report[fba8f1e0-be99-4800-a9df-0f564383647a]
[2019-10-14 18:13:51,175: DEBUG/MainProcess] TaskPool: Apply <function _fast_trace_task at 0x7f0f28197840> (args:('janus.workers.reports.run_published_report', 'fba8f1e0-be99-4800-a9df-0f564383647a', {'lang': 'py', 'task': 'janus.workers.reports.run_published_report', 'id': 'fba8f1e0-be99-4800-a9df-0f564383647a', 'shadow': None, 'eta': None, 'expires': None, 'group': None, 'retries': 0, 'timelimit': [None, None], 'root_id': 'fba8f1e0-be99-4800-a9df-0f564383647a', 'parent_id': None, 'argsrepr': "('6201',)", 'kwargsrepr': '{}', 'origin': 'gen13297@sy-devsophia', 'reply_to': '9cd089a7-a28c-35a8-ab34-10440a35f5e2', 'correlation_id': 'fba8f1e0-be99-4800-a9df-0f564383647a', 'delivery_info': {'exchange': '', 'routing_key': 'celery', 'priority': 0, 'redelivered': False}}, <memory at 0x7f0f2333f1c8>, 'application/json', 'utf-8') kwargs:{})
[2019-10-14 18:13:51,177: DEBUG/MainProcess] | Worker: Closing Hub...
[2019-10-14 18:13:51,177: DEBUG/MainProcess] | Worker: Closing Pool...
[2019-10-14 18:13:51,177: DEBUG/MainProcess] | Worker: Closing Consumer...
[2019-10-14 18:13:51,178: DEBUG/MainProcess] | Worker: Stopping Consumer...
[2019-10-14 18:13:51,178: DEBUG/MainProcess] | Consumer: Closing Connection...
[2019-10-14 18:13:51,178: DEBUG/MainProcess] | Consumer: Closing Events...
[2019-10-14 18:13:51,178: DEBUG/MainProcess] | Consumer: Closing Mingle...
[2019-10-14 18:13:51,178: DEBUG/MainProcess] | Consumer: Closing Tasks...
[2019-10-14 18:13:51,178: DEBUG/MainProcess] | Consumer: Closing Control...
[2019-10-14 18:13:51,178: DEBUG/MainProcess] | Consumer: Closing Gossip...
[2019-10-14 18:13:51,178: DEBUG/MainProcess] | Consumer: Closing Heart...
[2019-10-14 18:13:51,179: DEBUG/MainProcess] | Consumer: Closing event loop...
[2019-10-14 18:13:51,179: DEBUG/MainProcess] | Consumer: Stopping event loop...
[2019-10-14 18:13:51,179: DEBUG/MainProcess] | Consumer: Stopping Heart...
[2019-10-14 18:13:51,179: DEBUG/MainProcess] | Consumer: Stopping Gossip...
[2019-10-14 18:13:51,186: DEBUG/MainProcess] | Consumer: Stopping Control...
[2019-10-14 18:13:51,188: DEBUG/MainProcess] | Consumer: Stopping Tasks...
[2019-10-14 18:13:51,188: DEBUG/MainProcess] Canceling task consumer...
[2019-10-14 18:13:51,188: DEBUG/MainProcess] | Consumer: Stopping Mingle...
[2019-10-14 18:13:51,189: DEBUG/MainProcess] | Consumer: Stopping Events...
[2019-10-14 18:13:51,189: DEBUG/MainProcess] | Consumer: Stopping Connection...
[2019-10-14 18:13:51,189: DEBUG/MainProcess] | Worker: Stopping Pool...
[2019-10-14 18:13:52,212: DEBUG/MainProcess] result handler: all workers terminated
[2019-10-14 18:13:52,212: DEBUG/MainProcess] | Worker: Stopping Hub...
[2019-10-14 18:13:52,212: DEBUG/MainProcess] | Consumer: Shutdown Heart...
Then in the RabbitMQ log:
=INFO REPORT==== 14-Oct-2019::18:21:05 ===
closing AMQP connection <0.15130.5> ([::1]:57688 -> [::1]:5672)
=WARNING REPORT==== 14-Oct-2019::18:21:05 ===
closing AMQP connection <0.15146.5> ([::1]:57690 -> [::1]:5672):
connection_closed_abruptly
Since I can run the tasks directly without errors, messages put on the queue do reach the worker, and a revoked task is reported in the redis backend, I can't see where the failure is other than this messaging disconnect. But I get nothing beyond RabbitMQ reporting that the client closed the connection abruptly, and no reason why the worker should die the moment it receives a task. I assume something in my Celery setup is wrong somewhere, but the answer linked above is the closest I have come to getting Celery working with this app.
Any help pointing me toward where to track this down would be much appreciated. If there is any setup or code it would help to see, I can share it; I'm just not sure what in the code would be useful at this point, with no further error messages to go on.
I seem to have figured out what was wrong; at least my Celery setup works now.

I was using librabbitmq for the amqp transport and changed to pyamqp. Once I changed the transport library, it started working.
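In configuration terms the change is just the broker URL scheme (credentials here are placeholders): the generic `amqp://` scheme lets the transport layer prefer librabbitmq when it is installed, while `pyamqp://` pins the pure-Python amqp transport.

```python
from urllib.parse import urlparse

# 'amqp://' lets Kombu choose librabbitmq if it is installed;
# 'pyamqp://' explicitly pins the pure-Python transport.
BROKER_URL_IMPLICIT = "amqp://janus:secret@localhost:5672/janus"
BROKER_URL_PINNED = "pyamqp://janus:secret@localhost:5672/janus"

# Only the scheme differs; host, port, vhost, and credentials are unchanged.
assert urlparse(BROKER_URL_PINNED).scheme == "pyamqp"
assert urlparse(BROKER_URL_PINNED).path == "/janus"
```

Uninstalling librabbitmq should have the same effect, since it is only preferred when present.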