Integrating Celery with Flask using the application factory pattern: maximum recursion depth error



I'm working from the cookiecutter-flask template, which uses the application factory pattern. I got Celery working for tasks that don't use the application context, but one of my tasks does need it: it makes a database query and updates a database object. Now, instead of a circular import error (which I had with other attempts), I get a maximum recursion depth error.

I consulted this blog post on how to use Celery with the application factory pattern, and I'm trying to closely follow this Stack Overflow answer, since it apparently also originates from cookiecutter-flask.

The relevant part of my project structure:

cookiecutter_mbam
│   celeryconfig.py   
│
└───cookiecutter_mbam
   |   __init__.py
   │   app.py
   │   run_celery.py
   │
   └───utility
   |       celery_utils.py
   |
   └───derivation 
   |       tasks.py  
   | 
   └───storage
   |       tasks.py    
   |
   └───xnat
          tasks.py

__init__.py

"""Main application package."""
from celery import Celery
celery = Celery('cookiecutter_mbam', config_source='cookiecutter_mbam.celeryconfig')

The relevant part of app.py:

from flask import Flask

from cookiecutter_mbam import celery
from cookiecutter_mbam.utility.celery_utils import init_celery

def create_app(config_object='cookiecutter_mbam.settings'):
    """An application factory, as explained here: http://flask.pocoo.org/docs/patterns/appfactories/.
    :param config_object: The configuration object to use.
    """
    app = Flask(__name__.split('.')[0])
    app.config.from_object(config_object)
    init_celery(app, celery=celery)
    register_extensions(app)
    # ...
    return app

run_celery.py

from cookiecutter_mbam.app import create_app
from cookiecutter_mbam import celery
from cookiecutter_mbam.utility.celery_utils import init_celery
app = create_app(config_object='cookiecutter_mbam.settings')
init_celery(app, celery)

celeryconfig.py

broker_url = 'redis://localhost:6379'
result_backend = 'redis://localhost:6379'
task_serializer = 'json'
result_serializer = 'json'
accept_content = ['json']
enable_utc = True
imports = {'cookiecutter_mbam.xnat.tasks', 'cookiecutter_mbam.storage.tasks', 'cookiecutter_mbam.derivation.tasks'}

The relevant part of celery_utils.py:

def init_celery(app, celery):
    """Add flask app context to celery.Task"""
    class ContextTask(celery.Task):
        def __call__(self, *args, **kwargs):
            with app.app_context():
                return self.run(*args, **kwargs)
    celery.Task = ContextTask
    return celery

When I try to start a worker with celery -A cookiecutter_mbam.run_celery:celery worker, I get RecursionError: maximum recursion depth exceeded while calling a Python object. (I've also tried several other ways of invoking the worker, all with the same error.) Here is an excerpt from the stack trace:

Traceback (most recent call last):
  File "/Users/katie/anaconda/bin/celery", line 11, in <module>
    sys.exit(main())
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/__main__.py", line 16, in main
    _main()
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/celery.py", line 322, in main
    cmd.execute_from_commandline(argv)
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/celery.py", line 496, in execute_from_commandline
    super(CeleryCommand, self).execute_from_commandline(argv)))
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/base.py", line 275, in execute_from_commandline
    return self.handle_argv(self.prog_name, argv[1:])
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/celery.py", line 488, in handle_argv
    return self.execute(command, argv)
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/celery.py", line 420, in execute
    ).run_from_argv(self.prog_name, argv[1:], command=argv[0])
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/worker.py", line 221, in run_from_argv
    *self.parse_options(prog_name, argv, command))
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/base.py", line 398, in parse_options
    self.parser = self.create_parser(prog_name, command)
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/base.py", line 414, in create_parser
    self.add_arguments(parser)
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/worker.py", line 277, in add_arguments
    default=conf.worker_state_db,
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/utils/collections.py", line 126, in __getattr__
    return self[k]
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/utils/collections.py", line 429, in __getitem__
    return getitem(k)
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/utils/collections.py", line 278, in __getitem__
    return mapping[_key]
  File "/Users/katie/anaconda/lib/python3.6/collections/__init__.py", line 989, in __getitem__
    if key in self.data:
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/utils/collections.py", line 126, in __getattr__
    return self[k]
  File "/Users/katie/anaconda/lib/python3.6/collections/__init__.py", line 989, in __getitem__
    if key in self.data:
  File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/utils/collections.py", line 126, in __getattr__
    return self[k]

I understand the basic sense of this error: something is calling itself infinitely, perhaps create_app. But I can't see why, and I don't know how to debug it.
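A tiny dependency-free sketch of the pattern I think the traceback is showing (AttributeMapping is an illustrative name of mine, not Celery's): an attribute lookup that falls back to item lookup, where the item lookup itself touches a missing attribute.

```python
# Illustrative only: a mapping whose __getattr__ falls back to __getitem__,
# mirroring the celery/utils/collections.py frames in the traceback above.
class AttributeMapping:
    def __getattr__(self, key):
        # Invoked only when normal attribute lookup fails.
        return self[key]

    def __getitem__(self, key):
        # 'self.data' was never set, so this re-enters __getattr__('data').
        return self.data[key]

def trigger():
    try:
        AttributeMapping().worker_state_db
    except RecursionError:
        return "RecursionError"
    return "no error"

print(trigger())  # RecursionError
```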

I also get this when I try to load the site:

  File "~/cookiecutter_mbam/cookiecutter_mbam/xnat/tasks.py", line 14, in <module>
    @celery.task
AttributeError: module 'cookiecutter_mbam.celery' has no attribute 'task'

I don't have this problem when I use the make_celery approach described here, but that approach creates circular import problems when your tasks need access to the application context. Pointers on how to do this correctly with the cookiecutter-flask template would be much appreciated.

I suspect the bit of code that makes the Flask app available to Celery. It goes straight to run(), which skips some essential code. (See https://github.com/celery/celery/blob/master/celery/app/task/task.py#l387)

Try calling the inherited __call__. Here's a snippet from one of my (working) apps:

# Arrange for tasks to have access to the Flask app
TaskBase = celery.Task
class ContextTask(TaskBase):
    def __call__(self, *args, **kwargs):
        with app.app_context():
            return TaskBase.__call__(self, *args, **kwargs)  ## << here
celery.Task = ContextTask
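To make the difference concrete, here's a dependency-free sketch (TaskBase here is a stand-in class I'm defining, not Celery's): bypassing the inherited __call__ silently skips whatever setup the base class performs before run().

```python
class TaskBase:
    """Stand-in for celery.Task: __call__ does housekeeping, then runs."""
    def __call__(self, *args, **kwargs):
        self.setup_done = True              # stand-in for Celery's bookkeeping
        return self.run(*args, **kwargs)

class GoodTask(TaskBase):
    def __call__(self, *args, **kwargs):
        # Defer to the inherited __call__ so the base housekeeping still runs.
        return TaskBase.__call__(self, *args, **kwargs)
    def run(self):
        return "ran"

class BadTask(TaskBase):
    def __call__(self, *args, **kwargs):
        # Jumps straight to run(), skipping TaskBase.__call__ entirely.
        return self.run(*args, **kwargs)
    def run(self):
        return "ran"

good, bad = GoodTask(), BadTask()
good(); bad()
print(getattr(good, "setup_done", False), getattr(bad, "setup_done", False))
# True False
```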

I also can't see where you create the Celery instance and configure it. I presume you have

celery = Celery(__name__)

and then need

celery.config_from_object(...)

somewhere in init_celery().

This is solved. My celeryconfig.py was in the wrong place. I needed to move it into the package directory, not the parent repository directory. It's very unintuitive/uninformative that a misplaced config file, rather than producing a "can't find this file" error, causes infinite recursion. But at least I finally spotted it and corrected it.
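For anyone hitting the same wall: a quick sanity check is to confirm that the dotted path you hand to Celery is actually importable. config_importable below is a little helper I'm writing for illustration, not part of Celery; the stdlib module names in the prints are just stand-ins for your own config path.

```python
import importlib.util

def config_importable(dotted_path):
    """Return True if the dotted module path can be found without importing it."""
    try:
        return importlib.util.find_spec(dotted_path) is not None
    except ModuleNotFoundError:
        # Raised when a parent package in the path doesn't exist at all.
        return False

print(config_importable("json"))                 # True
print(config_importable("json.no_such_module"))  # False
```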
