I want to crawl a website, but crawling the next page doesn't work. Is there something wrong with my spider code? Please tell me, thanks very much.
import scrapy
from crawlAll.items import CrawlallItem


class ToutiaoEssayJokeSpider(scrapy.Spider):
    name = "duanzi"
    allowed_domains = ["http://duanziwang.com"]
    start_urls = ['http://duanziwang.com/category/duanzi/page/1']

    def parse(self, response):
        for sel in response.xpath("//article[@class='excerpt excerpt-nothumbnail']"):
            item = CrawlallItem()
            item['Title'] = sel.xpath("//header/h2/a/text()").extract_first()
            item['Text'] = sel.xpath("//p[@class='note']/text()").extract_first()
            item['Views'] = sel.xpath("//p[1]/span[@class='muted'][2]/text()").extract_first()
            item['Time'] = sel.xpath("//p[1]/span[@class='muted'][1]/text()").extract_first()
            yield item

        next_page = response.xpath("//ul/li[@class='next-page']/a/@href").extract_first()
        if next_page is not None:
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, callback=self.parse)
I have used print(next_page) to test whether next_page has a value, and it does; it gives me a link: http://duanziwang.com/category/category/duanzi/duanzi/page/2. So what is wrong with my code?
Your allowed_domains value is wrong. It should not contain the scheme (http://); as a rule, it is best to use just the domain plus the top-level domain, i.e. domain.com.
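A quick way to see why the scheme-prefixed entry can never match: scrapy.utils.url.url_is_from_any_domain applies the same hostname-matching idea the offsite check is built on (the middleware itself compiles a regex from allowed_domains, but the outcome is the same):

from scrapy.utils.url import url_is_from_any_domain

url = "http://duanziwang.com/category/duanzi/page/2"

# A bare domain matches the request's hostname:
print(url_is_from_any_domain(url, ["duanziwang.com"]))         # True

# A scheme-prefixed "domain" never equals (or is a suffix of) a hostname,
# so every follow-up request gets filtered as offsite:
print(url_is_from_any_domain(url, ["http://duanziwang.com"]))  # False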
If you run the spider and watch the logs, you will see the following:
[scrapy] DEBUG: Filtered offsite request to 'duanziwang.com': <GET http://duanziwang.com/category/duanzi/page/2>
So try:
allowed_domains = ["duanziwang.com"]
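Applied to your spider, only that one line changes; everything else below is your code as posted (a sketch, not re-run against the site):

import scrapy
from crawlAll.items import CrawlallItem


class ToutiaoEssayJokeSpider(scrapy.Spider):
    name = "duanzi"
    # Domain only, no scheme, so OffsiteMiddleware lets the next-page requests through
    allowed_domains = ["duanziwang.com"]
    start_urls = ['http://duanziwang.com/category/duanzi/page/1']

    def parse(self, response):
        for sel in response.xpath("//article[@class='excerpt excerpt-nothumbnail']"):
            item = CrawlallItem()
            item['Title'] = sel.xpath("//header/h2/a/text()").extract_first()
            item['Text'] = sel.xpath("//p[@class='note']/text()").extract_first()
            item['Views'] = sel.xpath("//p[1]/span[@class='muted'][2]/text()").extract_first()
            item['Time'] = sel.xpath("//p[1]/span[@class='muted'][1]/text()").extract_first()
            yield item

        next_page = response.xpath("//ul/li[@class='next-page']/a/@href").extract_first()
        if next_page is not None:
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, callback=self.parse)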