Scrapy does not download files with the Files Pipeline



I don't know what is happening with my code. I wrote my spider and load the items as described at https://docs.scrapy.org/en/latest/topics/media-pipeline.html, but Scrapy does not download any files:

2022-07-19 01:35:09 [scrapy.core.engine] INFO: Spider opened
2022-07-19 01:35:09 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2022-07-19 01:35:09 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2022-07-19 01:35:09 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://www.tp-link.com/br/support/download/> from <GET https://www.tp-link.com/br/support/download>
2022-07-19 01:35:09 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.tp-link.com/br/support/download/> (referer: None)
2022-07-19 01:35:10 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.tp-link.com/br/support/download/archer-c7/> (referer: https://www.tp-link.com/br/support/download/)
2022-07-19 01:35:10 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.tp-link.com/br/support/download/archer-c7/v4/> (referer: https://www.tp-link.com/br/support/download/archer-c7/)
2022-07-19 01:35:10 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.tp-link.com/br/support/download/archer-c7/v4/>
{'file_urls': ['https://static.tp-link.com/2019/201912/20191206/Archer%20C7(US)_V4_190411.zip',
'https://static.tp-link.com/2018/201804/20180428/Archer%20C7(US)_V4_180425.zip',
'https://static.tp-link.com/2017/201712/20171221/Archer%20C7(US)_V4_171101.zip'],
'model': 'Archer C7 ',
'vendor': 'TP-Link',
'version': 'V4'}
2022-07-19 01:35:10 [scrapy.core.engine] INFO: Closing spider (finished)
2022-07-19 01:35:10 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 1082,
'downloader/request_count': 4,
'downloader/request_method_count/GET': 4,
'downloader/response_bytes': 77004,
'downloader/response_count': 4,
'downloader/response_status_count/200': 3,
'downloader/response_status_count/301': 1,
'elapsed_time_seconds': 1.45197,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2022, 7, 19, 1, 35, 10, 957079),
'httpcompression/response_bytes': 667235,
'httpcompression/response_count': 3,
'item_scraped_count': 1,
'log_count/DEBUG': 6,
'log_count/INFO': 10,
'memusage/max': 93999104,
'memusage/startup': 93999104,
'request_depth_max': 2,
'response_received_count': 3,
'scheduler/dequeued': 4,
'scheduler/dequeued/memory': 4,
'scheduler/enqueued': 4,
'scheduler/enqueued/memory': 4,
'start_time': datetime.datetime(2022, 7, 19, 1, 35, 9, 505109)}
2022-07-19 01:35:10 [scrapy.core.engine] INFO: Spider closed (finished)

My Item class: I defined some custom fields, but the file_urls and files fields are defined as in the Scrapy docs:

class FirmwareItem(scrapy.Item):
    # define the fields for your item here like:
    vendor = scrapy.Field()
    model = scrapy.Field()
    version = scrapy.Field()
    file_urls = scrapy.Field()
    files = scrapy.Field()

Parse method: here I extract the zip file links and load them into 'file_urls' as a list:

def parseVersion(self, response):
    firmware = FirmwareItem()
    firmware['vendor'] = "TP-Link"
    firmware['model'] = response.xpath('//em[@id="model-version-name"]/text()').get()
    firmware['version'] = response.xpath('//em[@id="model-version-name"]/span/text()').get()
    urls = response.xpath('//div[@data-id="Firmware"]//a[@class="download-resource-btn ga-click"]/@href').getall()

    firmware['file_urls'] = [url.replace(' ', "%20") for url in urls]
    yield firmware
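As an aside, the manual `replace(' ', '%20')` only handles spaces; `urllib.parse.quote` from the standard library covers other unsafe characters as well. A small sketch (the helper name is mine):

```python
from urllib.parse import quote

def encode_download_url(url: str) -> str:
    # quote() percent-encodes spaces and other unsafe characters;
    # safe= keeps the scheme separator, slashes, and the parentheses
    # that appear in the TP-Link filenames unescaped.
    return quote(url, safe=":/()")

encode_download_url(
    "https://static.tp-link.com/2019/201912/20191206/Archer C7(US)_V4_190411.zip"
)
# -> 'https://static.tp-link.com/2019/201912/20191206/Archer%20C7(US)_V4_190411.zip'
```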
Settings: I enabled the item pipeline and set the download directory:

ITEM_PIPELINES = {
    'router.pipelines.RouterPipeline': 300,
}
FILES_STORE = "downloaded"
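For reference, the integer in ITEM_PIPELINES is a priority: Scrapy runs the enabled pipelines in ascending order of that value, so once the FilesPipeline from the docs is added alongside your own, it runs first. A quick stdlib illustration of the resulting order:

```python
# ITEM_PIPELINES maps a pipeline's import path to a priority;
# Scrapy runs pipelines from the lowest priority to the highest.
ITEM_PIPELINES = {
    "scrapy.pipelines.files.FilesPipeline": 1,
    "router.pipelines.RouterPipeline": 300,
}

execution_order = sorted(ITEM_PIPELINES, key=ITEM_PIPELINES.get)
# -> FilesPipeline (priority 1) runs before RouterPipeline (priority 300)
```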

The pipeline keeps the defaults:

from itemadapter import ItemAdapter

class RouterPipeline:
    def process_item(self, item, spider):
        return item

You're almost there… make sure all of these settings are in settings.py:

ITEM_PIPELINES = {
    'scrapy.pipelines.files.FilesPipeline': 1,
    'router.pipelines.RouterPipeline': 300,
}
FILES_STORE = '/path/to/save/directory'  # set your preferred download directory
FILES_URLS_FIELD = 'file_urls'  # must match the item field name
FILES_RESULT_FIELD = 'files'    # must match the item field name
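Once downloads work, note that FilesPipeline stores each file under a SHA-1 hash of its URL (full/&lt;hash&gt;.zip). If you prefer the original filenames, you can subclass FilesPipeline and override its file_path() method; a helper like this (hypothetical, stdlib only) would recover the name from the URL:

```python
import os
from urllib.parse import unquote, urlparse

def filename_from_url(url: str) -> str:
    # Keep only the path component of the URL, take its last
    # segment, and decode percent-escapes such as %20 back to spaces.
    path = urlparse(url).path
    return unquote(os.path.basename(path))

filename_from_url(
    "https://static.tp-link.com/2019/201912/20191206/Archer%20C7(US)_V4_190411.zip"
)
# -> 'Archer C7(US)_V4_190411.zip'
```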
