Please tell me what's wrong with my Scrapy code



I'm trying to scrape the `#recent_list_box > li` data from the Samsung Newsroom Mexico site, but it doesn't work. Can you tell me why?

https://news.samsung.com/mx

I thought I had fetched the JavaScript-rendered content, but I can't read it.

Versions: Scrapy 2.1.0, Splash 3.4.1

Spider code:

import scrapy
from scrapy_splash import SplashRequest
from scrapy import Request

class CrawlspiderSpider(scrapy.Spider):
    name = 'crawlspider'
    allowed_domains = ['news.samsung.com/mx']
    page = 1
    start_urls = ['https://news.samsung.com/mx']

    def start_request(self):
        for url in self.start_urls:
            yield SplashRequest(
                url,
                self.main_parse,
                endpoint='render.html',
                args={'wait': 10}
            )

    def parse(self, response):
        lists = response.css('#recent_list_box > li').getAll()
        for list in lists:
            yield {"list": lists.get()}

We have already included the required middleware. Settings code:

BOT_NAME = 'spider'
SPIDER_MODULES = ['spider.spiders']
NEWSPIDER_MODULE = 'spider.spiders'
LOG_FILE = 'log.txt'
AJAXCRAWL_ENABLED = True
ROBOTSTXT_OBEY = False
SPLASH_URL = 'http://127.0.0.1'
DOWNLOADER_MIDDLEWARES = {
'scrapy_splash.SplashCookiesMiddleware': 723,
'scrapy_splash.SplashMiddleware': 725,
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
SPIDER_MIDDLEWARES = {
'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'
SPLASH_LOG_400 = True

Below is the rest of the output from the log file. I would appreciate it if you could tell me why this log appears and why I can't read the data I want.

2020-07-02 15:27:09 [scrapy.core.engine] INFO: Spider opened
2020-07-02 15:27:09 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-07-02 15:27:09 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6024
2020-07-02 15:27:09 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://news.samsung.com/mx/> from <GET https://news.samsung.com/mx>
2020-07-02 15:27:09 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://news.samsung.com/mx/> (referer: None)
2020-07-02 15:27:09 [scrapy.core.scraper] ERROR: Spider error processing <GET https://news.samsung.com/mx/> (referer: None)
Traceback (most recent call last):
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\utils\defer.py", line 117, in iter_errback
    yield next(it)
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\utils\python.py", line 345, in __next__
    return next(self.data)
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\utils\python.py", line 345, in __next__
    return next(self.data)
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy_splash\middleware.py", line 156, in process_spider_output
    for el in result:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output
    for x in result:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 338, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "c:\users\doje1\appdata\local\programs\python\python38\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "C:\scrapy_tutorial\spider\spider\spiders\crawlspider.py", line 22, in parse
    lists = response.css('#recent_list_box > li').getAll()
AttributeError: 'SelectorList' object has no attribute 'getAll'
2020-07-02 15:27:09 [scrapy.core.engine] INFO: Closing spider (finished)
2020-07-02 15:27:09 [scrapy.statscollectors] INFO: Dumping Scrapy stats:

You have to change

lists = response.css('#recent_list_box > li').getAll()

to

lists = response.css('#recent_list_box > li').getall()

Note the lowercase "a".
