Multiple spiders in Scrapy

I am using Django, Celery and Scrapy.

My Celery settings:

from kombu import Exchange, Queue  # Queue/Exchange used in the queue setup below

CELERY_BROKER_URL = 'amqp://****/myvhost'
CELERY_TIMEZONE = TIME_ZONE
CELERYD_CONCURRENCY = 1000
CELERYD_MAX_TASKS_PER_CHILD = 4
CELERY_IGNORE_RESULT = True
# django celery
CELERY_RESULT_BACKEND = 'django-db'
# celery queues setup
CELERY_DEFAULT_QUEUE = 'default'
CELERY_DEFAULT_ROUTING_KEY = 'default'
CELERY_QUEUES = (
    Queue('get_context', Exchange('get_context'), routing_key='get_context'),
    Queue('get_article', Exchange('get_article'), routing_key='get_article'),
)
CELERY_ROUTES = {
    'parse.tasks.get_context': {
        'queue': 'get_context',
        'routing_key': 'get_context',
    },
    'parse.tasks.get_article': {
        'queue': 'get_article',
        'routing_key': 'get_article',
    },
}
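
With these routes in place, a plain .delay() call is already enqueued on the matching queue. A small sketch for reference (the argument values here are made up; the second argument is the rules JSON string the tasks expect):

from parse.tasks import get_context

# routed to the 'get_context' queue via CELERY_ROUTES
get_context.delay(1, '{"url": "http://example.com"}')

# the same call with the routing spelled out explicitly
get_context.apply_async(
    args=(1, '{"url": "http://example.com"}'),
    queue='get_context',
    routing_key='get_context',
)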

I have two Celery tasks:

from api_parser import celery_app
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from scrapy_parser.scrapy_parser.spiders.map_links import MapLinksSpider
from scrapy_parser.scrapy_parser.spiders.articles import ArticlesSpider
from threading import Thread


@celery_app.task
def get_context(rules_id, rules):
    process = CrawlerProcess(get_project_settings())
    process.crawl(MapLinksSpider, rules_id=rules_id, rules=rules)
    Thread(target=process.start).start()


@celery_app.task
def get_article(rules_id, link_id, rules, link):
    process = CrawlerProcess(get_project_settings())
    process.crawl(ArticlesSpider, rules_id=rules_id, link_id=link_id, rules=rules, link=link)
    Thread(target=process.start).start()

The first task is triggered by a signal and maps the links.

The second task is started when a new link is added to the database.

My signals in Django:

from django.db.models.signals import post_save
from django.dispatch import receiver
from parse.models.rules import Scheduler, Rules, ParseLinks
from parse.tasks import get_context, get_article


@receiver(post_save, sender=Scheduler)
def create_task_get_context(sender, instance, created, **kwargs):
    if created:
        rules = Rules.objects.get(id=int(instance.rules.id))
        get_context.delay(int(rules.id), str(rules.rules))


@receiver(post_save, sender=ParseLinks)
def create_task_get_article(sender, instance, created, **kwargs):
    if created:
        parse_link = ParseLinks.objects.get(id=int(instance.id))
        get_article.delay(int(parse_link.rules.id), int(parse_link.id), str(parse_link.rules.rules), str(parse_link.link))

My spiders:

map_links.py

from parse.models.rules import ParseLinks
import scrapy
import json


class MapLinksSpider(scrapy.Spider):
    name = "map_links"
    start_urls = []

    def __init__(self, **kw):
        super(MapLinksSpider, self).__init__(**kw)
        self.rules_id = kw.get('rules_id')
        self.rules = json.loads(kw.get('rules'))
        self.start_urls = [self.rules['url']]
        self.templates = self.rules['item']['templates']
        self.pagination = self.rules['pagination']

    def parse(self, response):
        for item in self.templates:
            context = response.css(str(item['context']))
            for row in context:
                link = row.css('%s::attr(%s)' % (item['link']['cssSelector'], item['link']['attr'])).extract_first()
                title = row.css('%s::text' % item['options']['title']['cssSelector']).extract_first()
                date = row.css('%s::text' % item['options']['date']['cssSelector']).extract_first()
                # get_or_create fires the post_save signal only for new links,
                # which in turn queues the get_article task
                ParseLinks.objects.get_or_create(rules_id=self.rules_id, link=self.rules['url'] + link, title=title, date=date)
        next_page = response.css('%s::attr(%s)' % (self.pagination['link']['cssSelector'], self.pagination['link']['attr'])).extract_first()
        if next_page is not None:
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, callback=self.parse)

articles.py

from parse.models.rules import ParseData
import scrapy
import json


class ArticlesSpider(scrapy.Spider):
    name = "articles"
    start_urls = []

    def __init__(self, **kw):
        super(ArticlesSpider, self).__init__(**kw)
        self.rules_id = kw.get('rules_id')
        self.link_id = kw.get('link_id')
        self.rules = json.loads(kw.get('rules'))
        self.link = kw.get('link')

    def parse(self, response):
        self.start_urls = [self.link]
        title = response.css('%s::text' % self.rules['article']['title']['cssSelector']).extract_first()
        text = response.css('%s::text' % self.rules['article']['text']['cssSelector']).extract_first()
        ParseData.objects.create(rules_id=self.rules_id, link_id=self.link_id, title=title, text=text)
        yield {
            "title": title,
            'text': text
        }
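
Both spiders drive their CSS selectors from the rules JSON that the signals pass in as a string. A rough sketch of the structure the code above appears to expect, reverse-engineered from the attribute lookups (every selector value here is an invented placeholder):

# guessed example of the 'rules' payload; all selectors are placeholders
rules = {
    'url': 'http://example.com/news',
    'item': {
        'templates': [
            {
                'context': 'div.news-list li',
                'link': {'cssSelector': 'a.headline', 'attr': 'href'},
                'options': {
                    'title': {'cssSelector': 'a.headline'},
                    'date': {'cssSelector': 'span.date'},
                },
            },
        ],
    },
    'pagination': {
        'link': {'cssSelector': 'a.next', 'attr': 'href'},
    },
    'article': {
        'title': {'cssSelector': 'h1.title'},
        'text': {'cssSelector': 'div.body p'},
    },
}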

But I get the error: twisted.internet.error.ReactorNotRestartable

I understand that the error is caused by starting a new process for the spider. But I am using threads. I don't understand why that doesn't solve my problem.
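
A minimal reproduction of the underlying behaviour (independent of the project code above): Twisted keeps a single global reactor per process, and once it has run and been stopped it can never be started again, no matter which thread calls reactor.run(). CrawlerProcess.start() runs that reactor, so the second task handled by the same worker process hits the error:

from twisted.internet import reactor
from twisted.internet.error import ReactorNotRestartable

reactor.callLater(0, reactor.stop)  # schedule an immediate stop
reactor.run()                       # first run: works, returns once stopped

try:
    reactor.run()                   # second run in the same process
except ReactorNotRestartable:
    print("the reactor cannot be restarted - not even from another thread")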

I think every beginner runs into this problem :)
Try this:

0) pip install crochet

  1. add the import: from crochet import setup

  2. call setup() - at the top of the file

  3. remove 2 lines:
     a) d.addBoth(lambda _: reactor.stop())
     b) reactor.run()

The only lines from the [Scrapy docs][2] that really matter are the last 2 lines of my code:

#some more imports
from crochet import setup
setup()

def run_spider(spiderName):
    module_name = "first_scrapy.spiders.{}".format(spiderName)
    scrapy_var = import_module(module_name)    #do some dynamic import of selected spider
    spiderObj = scrapy_var.mySpider()          #get mySpider object
    crawler = CrawlerRunner(get_project_settings())   #from Scrapy docs
    crawler.crawl(spiderObj)                           #from Scrapy docs

This code lets you choose which spider to run simply by passing its name to the run_spider function, and once the crawl has finished you can pick another spider and run it again.
Then you only need to call run_spider from your Celery task.

[1]: ReactorNotRestartable - Twisted and scrapy
[2]: https://doc.scrapy.org/en/latest/topics/practices.html
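
Applied to the tasks from the question, the idea could look roughly like the sketch below. This is only an illustration: run_map_links is a hypothetical helper name, the timeout value is arbitrary, and CrawlerRunner replaces CrawlerProcess so no second reactor is ever started.

from crochet import setup, wait_for
setup()  # must run once, at import time, before the reactor is used

from scrapy.crawler import CrawlerRunner
from scrapy.utils.project import get_project_settings

from api_parser import celery_app
from scrapy_parser.scrapy_parser.spiders.map_links import MapLinksSpider

@wait_for(timeout=3600)  # blocks until the crawl's Deferred fires (or times out)
def run_map_links(rules_id, rules):
    runner = CrawlerRunner(get_project_settings())
    return runner.crawl(MapLinksSpider, rules_id=rules_id, rules=rules)

@celery_app.task
def get_context(rules_id, rules):
    # the crawl runs in crochet's reactor thread; the task simply waits for it
    run_map_links(rules_id, rules)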
