I'm using Scrapy as my web-scraping framework and crawling many different domains for a set of companies. I wrote a URL generator class that reads a company file and produces start URLs for the companies on different web pages (only one example company is shown below). The scraper works fine for the first record but fails for the remaining URLs. I have tested the URL generator and it returns all the URLs, but for some reason `start_urls = [start_url.company_site()]` does not work. Any ideas?
The URL generator file:
    # -*- coding: utf-8 -*-
    import os
    import os.path

    class URL(object):
        P = []

        def read(self, filename):
            with open(filename) as f:
                for line in f:
                    field = line.split(',')
                    company = field[1].replace(" ", '+')
                    adress = "{0}+{1}".format(field[5], field[11])
                    self.P.append("http://www.companywebpage.com/market-search?q={0}".format(company))

        def company_site(self):
            for i in self.P:
                return i
The spider file:
    root = os.getcwd()
    start_url = URL()
    p = os.path.join(root, 'Company_Lists', 'Test_of_company.csv')
    start_url.read(p)

    class CompanySpider(BaseSpider):
        name = "Company-page"
        allowed_domains = ["CompanyDomain.se"]
        start_urls = [start_url.company_site()]
Replace

    def company_site(self):
        for i in self.P:
            return i
with

    def urls(self):
        for url in self.P:
            yield url
Then replace

    start_urls = [start_url.company_site()]
with

    start_urls = start_url.urls()
or

    start_urls = start_url.P
because `Spider.start_requests` looks like this:

    def start_requests(self):
        for url in self.start_urls:
            yield self.make_requests_from_url(url)

Either way, the spider iterates over all of the URLs instead of receiving just the one value that `company_site` returns.
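To see why the original method only ever produces one URL, here is a minimal sketch (the placeholder URLs are made up) contrasting `return` inside a loop, which exits on the first iteration, with `yield`, which produces every item:

    class URL(object):
        """Minimal stand-in for the question's URL holder (placeholder data)."""
        P = ["http://example.com/a", "http://example.com/b", "http://example.com/c"]

        def company_site(self):
            # 'return' leaves the method on the first pass through
            # the loop, so only the first URL is ever produced.
            for i in self.P:
                return i

        def urls(self):
            # 'yield' turns the method into a generator that
            # hands back every URL in turn.
            for url in self.P:
                yield url

    u = URL()
    print(u.company_site())   # only the first URL
    print(list(u.urls()))     # all of the URLs

Since `start_requests` simply does `for url in self.start_urls`, passing the generator (or the list `P` itself) gives the spider every URL, while `[start_url.company_site()]` wraps a single URL in a list.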