Celery 5.0.1, and only ForkPoolWorker-31 is ever used



What I find strange is that my Celery worker only ever logs ForkPoolWorker-31, as if it were processing everything with a single process.

Even running top shows only one processor is very busy while the others are mostly idle.

I start the worker with:

celery -A my_service.celery_tasks:celery_app worker --loglevel=INFO -n ${CELERY_INSTANCE} -E

[2020-11-07 00:16:32,677: INFO/MainProcess] celery@grid12 ready.
[2020-11-07 00:16:36,416: WARNING/ForkPoolWorker-31] 19889
[2020-11-07 00:16:36,427: WARNING/ForkPoolWorker-31] 19934
[2020-11-07 00:16:36,427: WARNING/ForkPoolWorker-31] 19882
[2020-11-07 00:16:36,432: WARNING/ForkPoolWorker-31] 20282
[2020-11-07 00:16:36,441: WARNING/ForkPoolWorker-31] 20031
[2020-11-07 00:16:36,446: WARNING/ForkPoolWorker-31] 19884
[2020-11-07 00:16:36,452: WARNING/ForkPoolWorker-31] 20124
[2020-11-07 00:16:36,456: WARNING/ForkPoolWorker-31] 20030
[2020-11-07 00:17:53,313: WARNING/ForkPoolWorker-31] 19897
[2020-11-07 00:17:53,446: INFO/ForkPoolWorker-31] POST Some logs... [status:200 request:11.930s]
[2020-11-07 00:17:54,099: INFO/ForkPoolWorker-31] Some logs...
[2020-11-07 00:17:55,771: INFO/ForkPoolWorker-31] POST Some logs... [status:200 request:15.501s]
[2020-11-07 00:17:56,307: INFO/ForkPoolWorker-31] 
-------------- celery@XXXXX v5.0.1 (singularity)
--- ***** ----- 
-- ******* ---- Linux-4.14.13-1.el7.elrepo.x86_64-x86_64-with-glibc2.10 2020-11-07 00:22:33
- *** --- * --- 
- ** ---------- [config]
- ** ---------- .> app:         my_service.celery_tasks:0x7fffed3beaf0
- ** ---------- .> transport:   redis://:**@grid12:6385/0
- ** ---------- .> results:     redis://:**@grid12:6385/0
- *** --- * --- .> concurrency: 48 (prefork)
-- ******* ---- .> task events: ON
--- ***** ----- 
-------------- [queues]
.> celery           exchange=celery(direct) key=celery

[tasks]
. myTask
[2020-11-07 00:22:34,002: INFO/MainProcess] Connected to redis://:**@grid12:6385/0
[2020-11-07 00:22:34,041: INFO/MainProcess] mingle: searching for neighbors
[2020-11-07 00:22:35,942: INFO/MainProcess] mingle: sync with 30 nodes
[2020-11-07 00:22:36,164: INFO/MainProcess] mingle: sync complete
[2020-11-07 00:22:37,733: INFO/MainProcess] pidbox: Connected to redis://:**@grid12:6385/0.

The machine has 48 cores, and average utilization is under 2%.

There are many pending tasks. Any suggestions?

I ran into the same problem recently and solved it by adding the -O fair flag to the Celery command.

My full command looks like this:

# "-O fair" is the key to simultaneous task execution across prefork workers
# celery_app is the module in my program that contains the Celery instance
# cel_app_worker is the name of the Celery worker
# "-P prefork" is not strictly necessary since it is the default, but I decided to keep it
celery -A celery_app worker --loglevel=INFO --concurrency=8 -O fair -P prefork -n cel_app_worker

Please give it a try and let me know whether it works for you.
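To see why -O fair can matter: by default the master process pipes tasks eagerly to its prefork children, so tasks can queue up behind a child that is busy with a long job; with fair scheduling, a task is only handed to a child that is free. Here is a toy model of that idea (purely illustrative, not Celery's actual code; all names below are made up):

```python
# Toy model (NOT Celery's real implementation) contrasting the two
# prefork dispatch strategies:
#   - "default": the master eagerly round-robins tasks to child
#     processes, so work can pile up behind a busy child.
#   - "fair" (-O fair): a task is handed out only to an idle child;
#     the rest stay queued at the broker.
from collections import deque

def dispatch_default(tasks, n_children):
    """Round-robin every task up front, regardless of child load."""
    queues = [deque() for _ in range(n_children)]
    for i, task in enumerate(tasks):
        queues[i % n_children].append(task)
    return queues

def dispatch_fair(tasks, busy):
    """Hand a task only to children currently marked idle."""
    queues = [deque() for _ in range(len(busy))]
    pending = deque(tasks)
    for child, is_busy in enumerate(busy):
        if not is_busy and pending:
            queues[child].append(pending.popleft())
    return queues, pending

# Six tasks, four children, child 0 stuck on a long-running task:
tasks = ["t1", "t2", "t3", "t4", "t5", "t6"]

eager = dispatch_default(tasks, 4)
print([len(q) for q in eager])   # the stuck child 0 still gets a backlog

fair, pending = dispatch_fair(tasks, busy=[True, False, False, False])
print([len(q) for q in fair], len(pending))  # child 0 gets nothing
```

In this toy model the eager strategy leaves two tasks blocked behind the stuck child, while fair dispatch keeps them at the broker for whichever child frees up first.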

I run the Celery application in Docker; my Dockerfile:

FROM python:3.7-alpine
WORKDIR /usr/src/app
RUN apk add --no-cache tzdata
ENV TZ=Europe/Moscow
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Create a group and user
RUN addgroup -S appgroup && adduser -S celery_user -G appgroup
# Tell docker that all future commands should run as celery_user
USER celery_user
# !! "-O fair" is the key to simultaneous task execution by one worker !!
CMD celery -A celery_app worker --loglevel=INFO --concurrency=8 -O fair -P prefork -n cel_app_worker

I ran into this problem recently as well. It turned out one Celery task was stuck in a recursive, effectively infinite for loop. I had to kill the Celery worker, fix the infinite loop, and then restart the worker. After that the problem went away.
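Independent of Celery, the defensive fix for that class of bug is to give any loop that might not terminate an explicit budget, so a logic error fails loudly instead of silently occupying a worker process forever. A minimal sketch (hypothetical helper names, not part of any library):

```python
# Guard a loop inside a task with an explicit iteration budget so a
# logic bug cannot hang the worker process indefinitely.
class IterationBudgetExceeded(RuntimeError):
    pass

def bounded(iterable_factory, max_iterations=1_000_000):
    """Yield items from the iterable, aborting once the budget is spent."""
    count = 0
    for item in iterable_factory():
        if count >= max_iterations:
            raise IterationBudgetExceeded(
                f"loop exceeded {max_iterations} iterations")
        count += 1
        yield item

# A buggy generator that never terminates on its own:
def runaway():
    while True:
        yield 1

try:
    total = sum(bounded(runaway, max_iterations=100))
except IterationBudgetExceeded as exc:
    print("aborted:", exc)  # the task fails loudly instead of hanging
```

As a worker-level safety net, Celery also offers the task_time_limit and task_soft_time_limit settings, which kill or interrupt a task that runs too long regardless of where it is stuck.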
