I'm learning Scrapy. The spider runs without errors, but it doesn't scrape anything.



My spider code:

import scrapy
import re

class StocksSpider(scrapy.Spider):
    name = 'stocks'
    start_urls = ['http://quote.cfi.cn/stockList.aspx']

    def parse(self, response):
        for href in response.css('a::attr(href)').extract():
            try:
                stock = re.findall(r'\d{6}', href)[0]
                url = 'http://quote.cfi.cn/' + stock + '.html'
                yield scrapy.Request(url, callback=self.parse_stock)
            except:
                continue

    def parse_back(self, response):
        infoDict = {}
        stockInfo = response.css(".Lfont::text").extract()
        stock_name = stockInfo.split('(')[0]
        stock_num = stockInfo.split('(')[1].strip(')')
        infoDict[stock_name] = stock_num
        yield infoDict

My pipelines code:


class StocksReplitePipeline(object):
    def process_item(self, item, spider):
        return item

class StocksInfoPipeline(object):
    def open_spider(self, spider):
        self.f = open('/Users/cuizhipeng/Desktop/scrapy/stocksinfo.txt', 'w')

    def close_spider(self, spider):
        self.f.close()

    def process_item(self, item, spider):
        try:
            line = str(dict(item)) + '\n'
            self.f.write(line)
        except:
            pass

The log output from the run:

2020-02-26 14:56:01 [scrapy.utils.log] INFO: Scrapy 1.6.0 started (bot: stocks_replite)
2020-02-26 14:56:01 [scrapy.utils.log] INFO: Versions: lxml 4.4.1.0, libxml2 2.9.9, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.10.0, Python 3.7.4 (default, Aug 13 2019, 15:17:50) - [Clang 4.0.1 (tags/RELEASE_401/final)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1d  10 Sep 2019), cryptography 2.7, Platform Darwin-18.7.0-x86_64-i386-64bit
2020-02-26 14:56:01 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'stocks_replite', 'NEWSPIDER_MODULE': 'stocks_replite.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['stocks_replite.spiders']}
2020-02-26 14:56:01 [scrapy.extensions.telnet] INFO: Telnet Password: ed7b516539cfc1c1
2020-02-26 14:56:01 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.logstats.LogStats']
2020-02-26 14:56:02 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-02-26 14:56:02 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-02-26 14:56:02 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-02-26 14:56:02 [scrapy.core.engine] INFO: Spider opened
2020-02-26 14:56:02 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-02-26 14:56:02 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-02-26 14:56:02 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://quote.cfi.cn/robots.txt> (referer: None)
2020-02-26 14:56:02 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://quote.cfi.cn/stockList.aspx> (referer: None)
2020-02-26 14:56:02 [scrapy.core.engine] INFO: Closing spider (finished)
2020-02-26 14:56:02 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 446,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 95628,
'downloader/response_count': 2,
'downloader/response_status_count/200': 1,
'downloader/response_status_count/404': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2020, 2, 26, 6, 56, 2, 882823),
'log_count/DEBUG': 2,
'log_count/INFO': 9,
'memusage/max': 53239808,
'memusage/startup': 53239808,
'response_received_count': 2,
'robotstxt/request_count': 1,
'robotstxt/response_count': 1,
'robotstxt/response_status_count/404': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2020, 2, 26, 6, 56, 2, 284441)}
2020-02-26 14:56:02 [scrapy.core.engine] INFO: Spider closed (finished)

Why is nothing being scraped?

I think you should replace callback=self.parse_stock with callback=self.parse_back, since the spider defines parse_back but no parse_stock. Because the yield is wrapped in a bare try/except, the resulting AttributeError is silently swallowed, which is why the run finishes without errors and no request beyond the start page is ever scheduled.
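
For reference, a minimal sketch of the corrected parse method; only the callback name changes, and the bare except is narrowed to IndexError (an assumption on my part) so that real errors are no longer hidden:

import scrapy
import re

def parse(self, response):
    for href in response.css('a::attr(href)').extract():
        try:
            stock = re.findall(r'\d{6}', href)[0]
            url = 'http://quote.cfi.cn/' + stock + '.html'
            # point the callback at the method that actually exists
            yield scrapy.Request(url, callback=self.parse_back)
        except IndexError:
            # hrefs without a 6-digit stock code are skipped;
            # anything else now surfaces instead of being swallowed
            continue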

Scrapy is a bit difficult to get started with. Let me recommend a simpler framework. Before running the example below, you need to install the package: pip install -U simplified-scrapy

from simplified_scrapy import Spider, SimplifiedDoc, SimplifiedMain

class StocksSpider(Spider):
    name = 'cfi.cn'
    allowed_domains = ['quote.cfi.cn/']
    start_urls = ['http://quote.cfi.cn/stockList.aspx']
    # refresh_urls = True # For debugging. If refresh_urls = True, start_urls will be crawled again.

    def extract(self, url, html, models, modelNames):
        doc = SimplifiedDoc(html)
        lstA = None
        data = None
        if url.url == self.start_urls[0]:
            lstA = doc.select('div#divcontent').listA(url=url.url)
        else:
            text = doc.select('div.Lfont>text()')
            kv = text.split('(')
            data = [{kv[0]: kv[1].strip(')')}]
        return {"Urls": lstA, "Data": data} # Return the data to the framework, and the framework will automatically save it.

SimplifiedMain.startThread(StocksSpider()) # Start

After it runs, you can find the crawled data in the data/ folder. Here are more examples: https://github.com/yiyedata/simplified-scrapy-demo
