Scrapy: 0 pages crawled (works in the scrapy shell, but not with the scrapy crawl command)



I'm having some problems with Scrapy: it doesn't return any results. When I copy and paste the spider below into the scrapy shell, it works. I'm really not sure what the problem is, but when I run it with "scrapy crawl rxomega" it doesn't work.

from scrapy.selector import Selector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from iherb.items import IherbItem
class RxomegaSpider(CrawlSpider):
    name = 'rxomega'
    allowed_domains = ['http://www.iherb.com/']
    start_urls = ['http://www.iherb.com/product-reviews/Natural-Factors-RxOmega-3-Factors-EPA-400-mg-DHA-200-mg-240-Softgels/4251/',
            'http://www.iherb.com/product-reviews/Now-Foods-Omega-3-Cardiovascular-Support-200-Softgels/323/']
    #rules = (
    #    Rule(SgmlLinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
    #)
    def parse_item(self, response):
        print('hello')
        sel = Selector(response)
        sites = sel.xpath('//*[@id="mainContent"]/div[3]/div[2]/div')
        items = []
        for site in sites:
            i = IherbItem()
            i['review'] = site.xpath('div[5]/p/text()').extract()
            items.append(i)
        return items

The output I see from scrapy crawl rxomega is:

2014-02-16 17:00:55-0800 [scrapy] INFO: Scrapy 0.22.0 started (bot: iherb)
2014-02-16 17:00:55-0800 [scrapy] INFO: Optional features available: ssl, http11, django
2014-02-16 17:00:55-0800 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'iherb.spiders', 'SPIDER_MODULES': ['iherb.spiders'], 'BOT_NAME': 'iherb'}
2014-02-16 17:00:55-0800 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-02-16 17:00:55-0800 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-02-16 17:00:55-0800 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-02-16 17:00:55-0800 [scrapy] INFO: Enabled item pipelines:
2014-02-16 17:00:55-0800 [rxomega] INFO: Spider opened
2014-02-16 17:00:55-0800 [rxomega] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-02-16 17:00:55-0800 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6026
2014-02-16 17:00:55-0800 [scrapy] DEBUG: Web service listening on 0.0.0.0:6083
2014-02-16 17:00:55-0800 [rxomega] DEBUG: Crawled (200) <GET http://www.iherb.com/product-reviews/Natural-Factors-RxOmega-3-Factors-EPA-400-mg-DHA-200-mg-240-Softgels/4251/> (referer: None)
2014-02-16 17:00:56-0800 [rxomega] DEBUG: Crawled (200) <GET http://www.iherb.com/product-reviews/Now-Foods-Omega-3-Cardiovascular-Support-200-Softgels/323/> (referer: None)
2014-02-16 17:00:56-0800 [rxomega] INFO: Closing spider (finished)
2014-02-16 17:00:56-0800 [rxomega] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 588,
     'downloader/request_count': 2,
     'downloader/request_method_count/GET': 2,
     'downloader/response_bytes': 37790,
     'downloader/response_count': 2,
     'downloader/response_status_count/200': 2,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2014, 2, 17, 1, 0, 56, 22065),
     'log_count/DEBUG': 4,
     'log_count/INFO': 7,
     'response_received_count': 2,
     'scheduler/dequeued': 2,
     'scheduler/dequeued/memory': 2,
     'scheduler/enqueued': 2,
     'scheduler/enqueued/memory': 2,
     'start_time': datetime.datetime(2014, 2, 17, 1, 0, 55, 256404)}
2014-02-16 17:00:56-0800 [rxomega] INFO: Spider closed (finished)

The genspider command creates a CrawlSpider with a parse_item callback, but the tutorial uses Spider with parse; both ship with 0.22. A CrawlSpider only routes responses to parse_item through its rules, and those are commented out here, so the spider downloads the start pages and finishes without scraping anything. Changing the code above to Spider and parse makes it work.
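
For reference, a minimal sketch of the spider rewritten that way (my rendering of the fix against the original code; in 0.22 the base class can be imported from scrapy.spider):

from scrapy.spider import Spider
from scrapy.selector import Selector
from iherb.items import IherbItem

class RxomegaSpider(Spider):
    name = 'rxomega'
    allowed_domains = ['www.iherb.com']  # bare domain, no http:// scheme
    start_urls = ['http://www.iherb.com/product-reviews/Natural-Factors-RxOmega-3-Factors-EPA-400-mg-DHA-200-mg-240-Softgels/4251/',
            'http://www.iherb.com/product-reviews/Now-Foods-Omega-3-Cardiovascular-Support-200-Softgels/323/']

    # On a plain Spider, parse() is the default callback for every URL in
    # start_urls, so it actually runs; parse_item on a rule-less CrawlSpider
    # is never called.
    def parse(self, response):
        sel = Selector(response)
        items = []
        for site in sel.xpath('//*[@id="mainContent"]/div[3]/div[2]/div'):
            i = IherbItem()
            i['review'] = site.xpath('div[5]/p/text()').extract()
            items.append(i)
        return items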

If you want to use CrawlSpider to scrape item pages from this site, you have to change two things:

  1. allowed_domains = ['www.iherb.com'] (leave out the http:// prefix)
  2. rules = ( Rule(SgmlLinkExtractor(allow=r'Items'), callback='parse_item', follow=True), )

That is, uncomment the rules and drop the trailing '/' from the allow pattern, as in the sketch below.
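
Put together, a sketch of the spider with both changes applied (whether allow=r'Items' actually matches this site's review links is taken on faith from the point above; tune the pattern to the links you want followed):

from scrapy.selector import Selector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from iherb.items import IherbItem

class RxomegaSpider(CrawlSpider):
    name = 'rxomega'
    allowed_domains = ['www.iherb.com']  # domain only, so OffsiteMiddleware keeps the extracted links
    start_urls = ['http://www.iherb.com/product-reviews/Natural-Factors-RxOmega-3-Factors-EPA-400-mg-DHA-200-mg-240-Softgels/4251/',
            'http://www.iherb.com/product-reviews/Now-Foods-Omega-3-Cardiovascular-Support-200-Softgels/323/']

    # A CrawlSpider only reaches parse_item through a Rule, so the rules
    # must not stay commented out.
    rules = (
        Rule(SgmlLinkExtractor(allow=r'Items'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        sel = Selector(response)
        items = []
        for site in sel.xpath('//*[@id="mainContent"]/div[3]/div[2]/div'):
            i = IherbItem()
            i['review'] = site.xpath('div[5]/p/text()').extract()
            items.append(i)
        return items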

Change the base class of RxomegaSpider from CrawlSpider to scrapy.Spider, and rename the function from parse_item to parse. Hope it helps.

I think you should use Spider and parse, like this:

def parse(self, response):
    sel = Selector(response)
    sites = sel.xpath('//*[@id="mainContent"]/div[3]/div[2]/div')
    items = []
    for site in sites:
        i = IherbItem()
        i['review'] = site.xpath('div[5]/p/text()').extract()
        items.append(i)
    return items

Try using the start_requests method. In cases like this it has always solved the problem for me :)

def start_requests(self):  # note: requires 'import scrapy' at module level for scrapy.Request
    urls = [
        'http://www.iherb.com/product-reviews/Natural-Factors-RxOmega-3-Factors-EPA-400-mg-DHA-200-mg-240-Softgels/4251/',
        'http://www.iherb.com/product-reviews/Now-Foods-Omega-3-Cardiovascular-Support-200-Softgels/323/',
    ]
    for url in urls:
        yield scrapy.Request(url=url, callback=self.parse)

def parse(self, response):  # must match the callback=self.parse passed above
    print('hello')
    sel = Selector(response)
    sites = sel.xpath('//*[@id="mainContent"]/div[3]/div[2]/div')
    items = []
    for site in sites:
        i = IherbItem()
        i['review'] = site.xpath('div[5]/p/text()').extract()
        items.append(i)
    return items
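
For context, start_urls is only a convenience: in these Scrapy versions the default start_requests() does roughly what the sketch below does (a paraphrase, not the exact library source), so writing it yourself simply makes the callback explicit instead of leaving it to CrawlSpider's rule machinery:

def start_requests(self):
    # roughly what Spider does by default with start_urls
    for url in self.start_urls:
        yield scrapy.Request(url, callback=self.parse, dont_filter=True)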

Remove allowed_domains and use def start_requests(self): instead.
