Scrapy deployment stopped working

I'm trying to deploy a Scrapy project using scrapyd, but it gives me an error:

sudo scrapy deploy default -p eScraper
Building egg of eScraper-1371463750
'build/scripts-2.7' does not exist -- can't clean it
zip_safe flag not set; analyzing archive contents...
eScraperInterface.settings: module references __file__
eScraper.settings: module references __file__
Deploying eScraper-1371463750 to http://localhost:6800/addversion.json
Server response (200):
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/scrapyd/webservice.py", line 18, in render
    return JsonResource.render(self, txrequest)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/txweb.py", line 10, in render
    r = resource.Resource.render(self, txrequest)
  File "/usr/local/lib/python2.7/dist-packages/twisted/web/resource.py", line 250, in render
    return m(request)
  File "/usr/local/lib/python2.7/dist-packages/scrapyd/webservice.py", line 66, in render_POST
    spiders = get_spider_list(project)
  File "/usr/local/lib/python2.7/dist-packages/scrapyd/utils.py", line 65, in get_spider_list
    raise RuntimeError(msg.splitlines()[-1])
RuntimeError: OSError: [Errno 20] Not a directory: '/tmp/eScraper-1371463750-Lm8HLh.egg/images'

Earlier I was able to deploy the project without problems, but now I can't. However, if I run the spider directly with scrapy crawl <spider-name>, it works fine. Can anyone help me?

Try the following two things:

1. You may have deployed too many versions; try deleting some of the old ones.
2. Before deploying, delete the build folder and the setup files.
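A minimal sketch of both steps, assuming the default scrapyd endpoint at http://localhost:6800 and the project name eScraper from the log above; the version number shown is the one from the deploy output, so adjust it to one of your own old versions:

```shell
# Delete an old deployed version from scrapyd (requires a running server,
# so it is shown commented out here):
# curl http://localhost:6800/delversion.json -d project=eScraper -d version=1371463750

# Remove local build artifacts so the next deploy regenerates a clean egg:
rm -rf build/ *.egg-info
rm -f setup.py   # scrapy deploy will regenerate this file
```

You can list the versions currently held by scrapyd via its listversions.json endpoint before deciding which ones to delete.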

As for running spiders: if you schedule a spider under any arbitrary name that you never deployed, scrapyd will still return an 'OK' response along with a job id.
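Because of that, it can help to check that a spider name is actually deployed before scheduling it. A hedged sketch, using scrapyd's listspiders.json and schedule.json endpoints (the spider name eSpider is hypothetical); the curl calls need a live server and are shown commented, with the JSON check simulated on a canned response:

```shell
# With a running scrapyd you would fetch the real list:
# curl -s "http://localhost:6800/listspiders.json?project=eScraper"

# Simulated response for illustration only (assumed shape of listspiders.json):
resp='{"status": "ok", "spiders": ["eSpider"]}'

# Only schedule the spider if its name appears in the deployed list:
if echo "$resp" | grep -q '"eSpider"'; then
    echo "eSpider is deployed"
    # curl http://localhost:6800/schedule.json -d project=eScraper -d spider=eSpider
fi
```

This guards against the misleading 'OK' that schedule.json returns even for unknown spider names.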
