How to add a random user agent to a Scrapy spider when calling the spider from a script



I want to add a random user agent to every request made by a spider that is called from another script. My implementation is as follows:

CoreSpider.py

import glob
import os
import re

import scrapy
from scrapy.spiders import Rule
from scrapy.linkextractors import LinkExtractor
# Assumption: Extractor comes from the boilerpipe package, whose API matches
# the extractor=/html= keyword arguments and getText() call used below.
from boilerpipe.extract import Extractor

import ContentHandler_copy


class CoreSpider(scrapy.Spider):
    name = "final"

    def __init__(self):
        super().__init__()
        self.start_urls = self.read_url()
        self.rules = (
            Rule(
                LinkExtractor(
                    unique=True,
                ),
                callback='parse',
                follow=True
            ),
        )

    def read_url(self):
        # Collect seed URLs from every *.list file in the seed directory,
        # prefixing bare hostnames with http://
        urlList = []
        for filename in glob.glob(os.path.join("/root/Public/company_profiler/seed_list", '*.list')):
            with open(filename, "r") as f:
                for line in f.readlines():
                    url = re.sub('\n', '', line)
                    if "http" not in url:
                        url = "http://" + url
                    urlList.append(url)
        return urlList

    def parse(self, response):
        print("URL is: ", response.url)
        print("User agent is : ", response.request.headers['User-Agent'])
        filename = '/root/Public/company_profiler/crawled_page/%s.html' % response.url
        article = Extractor(extractor='LargestContentExtractor', html=response.body).getText()
        print("Article is :", article)
        # Skip pages whose extracted text is shorter than five lines
        if len(article.split("\n")) < 5:
            print("Skipping to next url : ", article.split("\n"))
        else:
            print("Continue parsing: ", article.split("\n"))
            ContentHandler_copy.ContentHandler_copy.start(article, response.url)

I am running this spider from a script, RunSpider.py, as follows:

from scrapy.crawler import CrawlerProcess

from CoreSpider import CoreSpider

process = CrawlerProcess()
# Pass the spider class, not an instance; CrawlerProcess instantiates it.
process.crawl(CoreSpider)
process.start()

It works fine. Now I want each request to use a different, randomly chosen user agent. I have successfully set up random user agents for a regular Scrapy project, but I cannot get that setup to work with this spider when it is called from another script.

My working settings.py:

BOT_NAME = 'tutorial'
SPIDER_MODULES = ['tutorial.spiders']
NEWSPIDER_MODULE = 'tutorial.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'tutorial (+http://www.yourdomain.com)'
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    'random_useragent.RandomUserAgentMiddleware': 320
}
USER_AGENT_LIST = "tutorial/user-agent.txt"
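
For reference, here is a minimal sketch of what a random user-agent downloader middleware of this kind does; it illustrates the technique and is not the actual source of the random_useragent package. It assumes USER_AGENT_LIST points to a plain-text file with one user-agent string per line:

import random

class RandomUserAgentMiddleware:
    def __init__(self, user_agents):
        self.user_agents = user_agents

    @classmethod
    def from_crawler(cls, crawler):
        # Read the user-agent list named in settings (assumed format:
        # one user-agent string per line)
        path = crawler.settings.get('USER_AGENT_LIST')
        with open(path) as f:
            user_agents = [line.strip() for line in f if line.strip()]
        return cls(user_agents)

    def process_request(self, request, spider):
        # Overwrite the User-Agent header with a random pick per request
        request.headers['User-Agent'] = random.choice(self.user_agents)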

How can I tell my CoreSpider.py to use this settings.py configuration programmatically?

Check out the documentation, in particular the Common Practices section. You can supply settings as an argument to the CrawlerProcess constructor. Alternatively, if you are using a Scrapy project and want to pick up the settings from settings.py, you can do it like this:

...
from scrapy.utils.project import get_project_settings    
process = CrawlerProcess(get_project_settings())
...
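
Note that get_project_settings() only finds settings.py when the script is run from inside the Scrapy project directory (or when the SCRAPY_SETTINGS_MODULE environment variable points at the settings module). If RunSpider.py lives outside the project, you can instead pass the same settings inline; the sketch below mirrors the values from the settings.py shown above:

from scrapy.crawler import CrawlerProcess

from CoreSpider import CoreSpider

process = CrawlerProcess(settings={
    'DOWNLOADER_MIDDLEWARES': {
        # Disable the built-in user-agent middleware and enable the
        # random one, exactly as in settings.py
        'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
        'random_useragent.RandomUserAgentMiddleware': 320,
    },
    'USER_AGENT_LIST': 'tutorial/user-agent.txt',
})
process.crawl(CoreSpider)
process.start()

With this, every request goes through the random user-agent middleware regardless of where the script is launched from.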
