Python scrapy crawlspider x-forwarded-for header



My simple CrawlSpider is shown below. How can I add an X-Forwarded-For header to this crawler? The X-Forwarded-For header should be applied to every page that gets crawled.

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.http.request import Request

class MySpider(CrawlSpider):
    name = 'spidy'
    allowed_domains = ['website.com', 'www.website.com']
    start_urls = ['http://www.website.com/']

    rules = (
        Rule(LinkExtractor(allow=('/uk/',)), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        print(response.url)

Note: I found a way to do this through settings.py, but is there a way to do it from within the spider? Thank you!
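For reference, the settings.py route mentioned above is usually done with Scrapy's DEFAULT_REQUEST_HEADERS setting, which the built-in DefaultHeadersMiddleware applies to every request the project makes. A minimal sketch, where the IP value is just a placeholder:

# settings.py
DEFAULT_REQUEST_HEADERS = {
    'X-Forwarded-For': '203.0.113.10',  # placeholder value
}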

You can do this by using the process_request argument of the Rule object, as shown below:

rules = (
    Rule(LinkExtractor(allow=('/uk/',)), callback='parse_item', follow=True, process_request='add_header'),
)

def add_header(self, request, response):
    request.headers['X-Forwarded-For'] = 'the_header_value'
    return request

Note that the two-argument signature (request, response) applies to Scrapy 2.0 and later; older versions pass only the request. See the documentation on crawling rules for more information.
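Putting the fragments together with the spider from the question, a complete version might look like the sketch below; the header value is still a placeholder:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class MySpider(CrawlSpider):
    name = 'spidy'
    allowed_domains = ['website.com', 'www.website.com']
    start_urls = ['http://www.website.com/']

    rules = (
        # process_request names a spider method that is called for every
        # request extracted by this rule before it is scheduled
        Rule(LinkExtractor(allow=('/uk/',)), callback='parse_item', follow=True,
             process_request='add_header'),
    )

    def add_header(self, request, response):
        # set the header on the outgoing request and return it;
        # returning None would filter the request out instead
        request.headers['X-Forwarded-For'] = 'the_header_value'
        return request

    def parse_item(self, response):
        print(response.url)

One caveat: process_request only runs for requests extracted by the rule, so the initial requests for start_urls are not affected; that is one reason the settings.py approach above covers all pages.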
