Scrapy ignoring response 303 - HTTP status code is not handled or not allowed



I want to scrape comments from https://m.youtube.com.

When I try to crawl https://m.youtube.com, it first redirects me to https://www.youtube.com. I have set my spider to ignore robots.txt, disabled cookies, and tried meta=dont_redirect. It no longer redirects me to https://www.youtube.com, but now I get "Ignoring response <303 https://m.youtube.com/view_comment?v=xHkL9PU7o9k&gl=US&hl=en&client=mv-google>: HTTP status code is not handled or not allowed". How can I solve this?
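In settings.py, the changes described above correspond roughly to the following (a minimal sketch using the standard Scrapy setting names, not my exact file):

    # settings.py -- sketch of the settings changes described above
    ROBOTSTXT_OBEY = False    # do not obey robots.txt
    COOKIES_ENABLED = False   # disable cookies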

My spider code is as follows:

    import scrapy

    class CommentsSpider(scrapy.Spider):
        name = 'comments'
        allowed_domains = ['m.youtube.com']
        start_urls = [
            'https://m.youtube.com/view_comment?v=xHkL9PU7o9k&gl=US&hl=en&client=mv-google'
        ]

        def start_requests(self):
            for url in self.start_urls:
                yield scrapy.Request(url, meta={'dont_redirect': True})

        def parse(self, response):
            x = response.xpath('/html/body/div[4]/div[2]/text()').extract()
            y = response.xpath('/html/body/div[4]/div[3]/div[2]/text()').extract()
            yield {'Comments': (x, y)}

Output:

    2019-07-18 16:07:23 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
    2019-07-18 16:07:24 [scrapy.core.engine] DEBUG: Crawled (303) <GET https://m.youtube.com/view_comment?v=xHkL9PU7o9k&gl=US&hl=en&client=mv-google> (referer: None)
    2019-07-18 16:07:24 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <303 https://m.youtube.com/view_comment?v=xHkL9PU7o9k&gl=US&hl=en&client=mv-google>: HTTP status code is not handled or not allowed
    2019-07-18 16:07:24 [scrapy.core.engine] INFO: Closing spider (finished)

According to the Scrapy documentation, you can use the handle_httpstatus_list spider attribute.

In your case:

    class CommentsSpider(scrapy.Spider):
        name = 'comments'
        allowed_domains = ['m.youtube.com']
        start_urls = [
            'https://m.youtube.com/view_comment?v=xHkL9PU7o9k&gl=US&hl=en&client=mv-google'
        ]
        handle_httpstatus_list = [303]
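
With the 303 allowed through, your parse() callback receives the redirect response itself; its body is typically empty and the target URL sits in the Location header. A minimal sketch of methods you could add to the spider above to follow it manually (parse_comments is a hypothetical name; the XPaths are taken from the question):

    def parse(self, response):
        if response.status == 303:
            # the redirect target is in the Location header (Scrapy returns bytes)
            location = response.headers.get('Location')
            if location:
                yield scrapy.Request(response.urljoin(location.decode()),
                                     callback=self.parse_comments)

    def parse_comments(self, response):
        # extract the comment text from the final page (XPaths from the question)
        x = response.xpath('/html/body/div[4]/div[2]/text()').extract()
        y = response.xpath('/html/body/div[4]/div[3]/div[2]/text()').extract()
        yield {'Comments': (x, y)}

Note that if Location points to www.youtube.com, the allowed_domains = ['m.youtube.com'] setting will cause the offsite middleware to filter the follow-up request. The same effect as the class attribute is also available per request via scrapy.Request(url, meta={'handle_httpstatus_list': [303]}).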

I would try using a mobile browser user-agent string to avoid being redirected:

    USER_AGENT = 'Mozilla/5.0 (iPhone; CPU iPhone OS 10_3_1 like Mac OS X) AppleWebKit/603.1.30 (KHTML, like Gecko) Version/10.0 Mobile/14E304 Safari/602.1'
    headers = {'User-Agent': USER_AGENT}

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url, headers=self.headers)
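
Alternatively, the user agent can be set once for every request the spider makes; a sketch using the spider's custom_settings attribute (same user-agent string as above, assuming Scrapy's default UserAgentMiddleware is enabled):

    class CommentsSpider(scrapy.Spider):
        name = 'comments'
        # applied to every request this spider makes
        custom_settings = {
            'USER_AGENT': ('Mozilla/5.0 (iPhone; CPU iPhone OS 10_3_1 like Mac OS X) '
                           'AppleWebKit/603.1.30 (KHTML, like Gecko) '
                           'Version/10.0 Mobile/14E304 Safari/602.1'),
        }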
