Every time my Docker container restarts, I see more volume files



I'm using docker-compose to bring up several images and want to create a Celery worker, but every time the container restarts I see more files in it, and I don't know why this is happening. My goal is to build one image and use the current files in the container through a volume, so that my local files are the same as the files in the container.

# Console output
worker_1        | Readme.md
worker_1        | __init__.py
worker_1        | __pycache__
worker_1        | api.py
worker_1        | custom_logger.py
worker_1        | database_operator.py
worker_1        | secret_token.txt
worker_1        | tasks.py
worker_1        | tweet_operator.py
twitteranalyzer_worker_1 exited with code 0
worker_1        | Readme.md
worker_1        | __init__.py
worker_1        | __pycache__
worker_1        | api.py
worker_1        | custom_logger.py
worker_1        | database_operator.py
worker_1        | secret_token.txt
worker_1        | tasks.py
worker_1        | tweet_operator.py
worker_1        | Readme.md
worker_1        | __init__.py
worker_1        | __pycache__
worker_1        | api.py
worker_1        | custom_logger.py
worker_1        | database_operator.py
worker_1        | secret_token.txt
worker_1        | tasks.py
worker_1        | tweet_operator.py
worker_1        | Readme.md
worker_1        | __init__.py
worker_1        | __pycache__
worker_1        | api.py
worker_1        | custom_logger.py
worker_1        | database_operator.py
worker_1        | secret_token.txt
worker_1        | tasks.py
worker_1        | tweet_operator.py
twitteranalyzer_worker_1 exited with code 0
worker_1        | Readme.md
worker_1        | __init__.py
worker_1        | __pycache__
worker_1        | api.py
worker_1        | custom_logger.py
worker_1        | database_operator.py
worker_1        | secret_token.txt
worker_1        | tasks.py
worker_1        | tweet_operator.py
worker_1        | Readme.md
worker_1        | __init__.py
worker_1        | __pycache__
worker_1        | api.py
worker_1        | custom_logger.py
worker_1        | database_operator.py
worker_1        | secret_token.txt
worker_1        | tasks.py
worker_1        | tweet_operator.py
worker_1        | Readme.md
worker_1        | __init__.py
worker_1        | __pycache__
worker_1        | api.py
worker_1        | custom_logger.py
worker_1        | database_operator.py
worker_1        | secret_token.txt
worker_1        | tasks.py
worker_1        | tweet_operator.py
worker_1        | Readme.md
worker_1        | __init__.py
worker_1        | __pycache__
worker_1        | api.py
worker_1        | custom_logger.py
worker_1        | database_operator.py
worker_1        | secret_token.txt
worker_1        | tasks.py
worker_1        | tweet_operator.py
worker_1        | Readme.md
worker_1        | __init__.py
worker_1        | __pycache__
worker_1        | api.py
worker_1        | custom_logger.py
worker_1        | database_operator.py
worker_1        | secret_token.txt
worker_1        | tasks.py
worker_1        | tweet_operator.py
twitteranalyzer_worker_1 exited with code 0

I'm just running a simple ls in bash to see why this happens, but I can't tell, or whether I should even worry about it. The CeleryWorker Dockerfile just installs the Python dependencies.

# docker-compose.yaml
...
worker:
  build:
    context: .
    dockerfile: CeleryWorker
  image: celeryworker:1.0
  volumes:
    - ./twitter_analyzer/analyzer:/app/
  links:
    - broker
  depends_on:
    - broker
  restart: always
  command: bash -c "ls"

Docker's general expectation is that a container holds one long-running process, like a web server. The restart: option tells it what to do when that process exits.
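
For reference, Compose accepts four restart policies; a quick sketch of the differences (note that your ls exits with code 0, which on-failure would not restart, but always does):

restart: "no"            # the default: never restart the container
restart: always          # restart whenever the process exits, even with code 0
restart: on-failure      # restart only on a non-zero exit code
restart: unless-stopped  # like always, unless the container is explicitly stopped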

In your case you are not running a long-lived process; you are invoking a shell to run /bin/ls. Once that exits, restart: always takes effect and runs the container command again. That exits immediately, and when it exits, restart: always takes effect again...
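
One way to confirm this loop is to watch the restart counter Docker keeps for the container (the container name is taken from your log output):

docker inspect -f '{{ .RestartCount }}' twitteranalyzer_worker_1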

You probably don't need a command: in the docker-compose.yml at all, especially if you have a dedicated Dockerfile for the worker. If you do have a command:, it should start the Celery worker. If you want to override it with a short-lived command, don't set a restart policy. (Also remember that you can use docker-compose run to run one-off alternate commands, e.g. for a debugging shell.)
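
For example, a one-off directory listing against the same service, without touching the restart policy, could look like this (worker is the service name from your file):

docker-compose run --rm worker ls

With that in place, the service definition can shrink to something like: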

worker:
  build:
    context: .
    dockerfile: CeleryWorker
  # (Sets the image name `docker-compose build` builds.)
  image: celeryworker:1.0
  # Do not need volumes: since the code is in the image
  # Do not need links: on modern Docker at all
  depends_on:
    - broker
  restart: always
  # Do not need command: since CMD is in the Dockerfile
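
That last comment assumes the CeleryWorker Dockerfile ends with a CMD that starts the worker; a minimal sketch of what it could look like (the tasks module name is guessed from tasks.py in your listing, and requirements.txt is an assumption):

# CeleryWorker
FROM python:3.8
WORKDIR /app
# Install the Python dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt
# Bake the application code into the image, instead of mounting it
COPY twitter_analyzer/analyzer/ .
# The long-running process: the Celery worker itself
CMD ["celery", "-A", "tasks", "worker", "--loglevel=info"]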
