I'm following an online tutorial about running a simple crawler from the command prompt. When I start the crawler I get (what I believe are) a bunch of path-related errors. My PATH is set up correctly; when I open a command prompt and type "python", everything works fine.
Here is the Python code for my Scrapy spider:
import scrapy

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['quotes.toscrape.com']

    def parse(self, response):
        h1_tag = response.xpath('//h1/a/text()').extract_first()
        tags = response.xpath('//*[@class="tag-item"]/a/text()').extract()
        yield {'H1 tag': h1_tag, 'Tags': tags}
When I run it in PyCharm, it runs without any errors and exits with code 0.
This is the line I use at the command prompt to run the crawler: C:\Users\Kev\Desktop\quotes_spider>scrapy crawl quotes
Here are the errors I get from the command prompt:
2019-06-21 08:34:10 [scrapy.core.engine] INFO: Spider opened
2019-06-21 08:34:10 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at
0 pages/min), scraped 0 items (at 0 items/min)
2019-06-21 08:34:10 [scrapy.extensions.telnet] INFO: Telnet console
listening on 127.0.0.1:6023
2019-06-21 08:34:10 [scrapy.core.engine] ERROR: Error while obtaining
start requests
Traceback (most recent call last):
  File "c:\users\kev\appdata\local\programs\python\python37-32\lib\site-packages\scrapy\core\engine.py", line 127, in _next_request
    request = next(slot.start_requests)
  File "c:\users\kev\appdata\local\programs\python\python37-32\lib\site-packages\scrapy\spiders\__init__.py", line 83, in start_requests
    yield Request(url, dont_filter=True)
  File "c:\users\kev\appdata\local\programs\python\python37-32\lib\site-packages\scrapy\http\request\__init__.py", line 25, in __init__
    self._set_url(url)
  File "c:\users\kev\appdata\local\programs\python\python37-32\lib\site-packages\scrapy\http\request\__init__.py", line 62, in _set_url
    raise ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url: quotes.toscrape.com
2019-06-21 08:34:10 [scrapy.core.engine] INFO: Closing spider (finished)
2019-06-21 08:34:10 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'finish_reason': 'finished',
'finish_time': datetime.datetime(2019, 6, 21, 12, 34, 10, 194671),
'log_count/ERROR': 1,
'log_count/INFO': 9,
'start_time': datetime.datetime(2019, 6, 21, 12, 34, 10, 185685)}
2019-06-21 08:34:10 [scrapy.core.engine] INFO: Spider closed (finished)
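The traceback shows the actual cause: Scrapy's Request constructor rejects any URL that has no scheme. A minimal standard-library sketch of that check (has_scheme is an illustrative helper for this answer, not Scrapy's actual code):

```python
from urllib.parse import urlparse

def has_scheme(url):
    # urlparse('quotes.toscrape.com').scheme is '' because nothing
    # precedes '://'; Scrapy raises ValueError in exactly this case.
    return bool(urlparse(url).scheme)

print(has_scheme('quotes.toscrape.com'))         # False: triggers the ValueError
print(has_scheme('http://quotes.toscrape.com'))  # True: accepted as a request URL
```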
I was basically expecting to get a "Crawled (200)" line instead.
Because of the error messages, I assumed this had something to do with my path, but I don't see how that could be, since my PATH is set up fine.
Any help would be greatly appreciated, thanks!
Edit your start_urls from:

start_urls = ['quotes.toscrape.com']

to:

start_urls = ['http://www.quotes.toscrape.com']

It should start with http(s)://. Only allowed_domains should not have http(s)://.