Why does calling a Scrapy spider from pywikibot give a ReactorNotRestartable error?



I can call a Scrapy spider from another Python script using either CrawlerRunner or CrawlerProcess. But when I call the same spider-calling class from a pywikibot bot, I get a ReactorNotRestartable error. Why does this happen, and how can I fix it?

Here is the error:

File ".scriptsuserscriptsReplicationWikiRWLoad.py", line 161, in format_new_page
aea = AEAMetadata(url=DOI_url)
File ".scriptsuserscriptsReplicationWikiGetAEAMetadata.py", line 39, in __init__
reactor.run() # the script will block here until all crawling jobs are finished
File "C:Userslextr.condaenvspy37libsite-packagestwistedinternetbase.py", line 1282, in run
self.startRunning(installSignalHandlers=installSignalHandlers)
File "C:Userslextr.condaenvspy37libsite-packagestwistedinternetbase.py", line 1262, in startRunning
ReactorBase.startRunning(self)
File "C:Userslextr.condaenvspy37libsite-packagestwistedinternetbase.py", line 765, in startRunning
raise error.ReactorNotRestartable()
twisted.internet.error.ReactorNotRestartable
CRITICAL: Exiting due to uncaught exception <class 'twisted.internet.error.ReactorNotRestartable'>

Here is my spider-calling script. It runs fine if I just call the class from its own main().

from twisted.internet import reactor, defer
from scrapy import signals
from scrapy.crawler import Crawler, CrawlerProcess, CrawlerRunner
from scrapy.settings import Settings
from scrapy.utils.project import get_project_settings
from Scrapers.spiders.ScrapeAEA import ScrapeaeaSpider

class AEAMetadata:
    """
    Helper to run ScrapeAEA spider and return JEL codes and data links
    for a given AEA article link.
    """

    def __init__(self, *args, **kwargs):
        """Initializer"""
        url = kwargs.get('url')
        if not url:
            raise ValueError('No article url given')

        self.items = []

        def collect_items(item, response, spider):
            self.items.append(item)

        settings = get_project_settings()
        crawler = Crawler(ScrapeaeaSpider, settings)
        crawler.signals.connect(collect_items, signals.item_scraped)

        runner = CrawlerRunner(settings)
        d = runner.crawl(crawler, url=url)
        d.addBoth(lambda _: reactor.stop())
        reactor.run()  # the script will block here until all crawling jobs are finished

        #process = CrawlerProcess(settings)
        #process.crawl(crawler, url=url)
        #process.start()  # the script will block here until the crawling is finished

    def get_jelcodes(self):
        jelcodes = self.items[0]['jelcodes']
        return jelcodes

def main():
    aea = AEAMetadata(url='https://doi.org/10.1257/app.20180286')
    jelcodes = aea.get_jelcodes()
    print(jelcodes)

if __name__ == '__main__':
    main()

Update: a simple test that instantiates the AEAMetadata class twice. Here is the calling code that fails in my pywikibot bot:

from GetAEAMetadata import AEAMetadata

def main(*args):
    for _ in [1, 2]:
        print('Top')
        url = 'https://doi.org/10.1257/app.20170442'
        aea = AEAMetadata(url=url)
        print('After AEAMetadata')
        jelcodes = aea.get_jelcodes()
        print(jelcodes)

if __name__ == '__main__':
    main()

My call to AEAMetadata was embedded in a much larger script, which fooled me into thinking the AEAMetadata class was instantiated only once before the failure. In fact, AEAMetadata was called twice.

I also assumed the script would block after reactor.run(), because that is what the comments in all the Scrapy examples say. However, the second deferred callback is reactor.stop(), which ends reactor.run().
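
That behavior is easy to reproduce without Scrapy at all. Here is a minimal sketch using nothing but Twisted, showing that reactor.run() returns once reactor.stop() fires, and that a second run() then raises the same error:

from twisted.internet import error, reactor

reactor.callWhenRunning(reactor.stop)  # schedule stop as soon as the loop starts
reactor.run()   # returns once stop() has fired

try:
    reactor.run()  # the global reactor cannot be started a second time
except error.ReactorNotRestartable:
    print('second reactor.run() raises ReactorNotRestartable')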

An even more fundamental wrong assumption was that the reactor is deleted and recreated on each iteration. In fact, the reactor is instantiated and initialized when it is first imported. It is a global object that lives as long as the underlying process, and it was never designed to be restarted. The extremes actually needed to delete and restart a reactor are described here: http://www.blog.pythonlibrary.org/2016/09/14/restarting-a-twisted-reactor/
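
For reference, the workaround usually suggested there is to push each crawl into a child process, so that every call gets a fresh reactor and the parent process never starts one. A rough sketch of that idea, assuming ScrapeaeaSpider from the script above and picklable items; the _run_spider and scrape helpers are hypothetical names, not code from my bot:

import multiprocessing

from scrapy import signals
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from Scrapers.spiders.ScrapeAEA import ScrapeaeaSpider

def _run_spider(url, queue):
    # Runs in a child process, which gets its own fresh reactor,
    # so CrawlerProcess.start() is safe on every call.
    items = []
    process = CrawlerProcess(get_project_settings())
    crawler = process.create_crawler(ScrapeaeaSpider)
    crawler.signals.connect(
        lambda item, response, spider: items.append(item),
        signal=signals.item_scraped)
    process.crawl(crawler, url=url)
    process.start()      # blocks until this crawl finishes
    queue.put(items)     # items must be picklable to cross the process boundary

def scrape(url):
    queue = multiprocessing.Queue()
    child = multiprocessing.Process(target=_run_spider, args=(url, queue))
    child.start()
    items = queue.get()  # read before join() to avoid blocking on a full pipe
    child.join()
    return items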

So I think I have answered my own question, and I am rewriting my script so that it no longer tries to use the reactor in a way it was never intended to be used.
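
For anyone hitting the same problem: the direction of that rewrite is the pattern from the Scrapy documentation for running multiple spiders in the same process, i.e. collect all the URLs first and run every crawl inside a single reactor.run(). A sketch under that assumption (scrape_all is a hypothetical helper; it returns the items of all crawls mixed into one list):

from twisted.internet import defer, reactor
from scrapy import signals
from scrapy.crawler import CrawlerRunner
from scrapy.utils.project import get_project_settings
from Scrapers.spiders.ScrapeAEA import ScrapeaeaSpider

def scrape_all(urls):
    """Crawl every URL with a single reactor.run() instead of one run per call."""
    items = []
    runner = CrawlerRunner(get_project_settings())

    @defer.inlineCallbacks
    def crawl():
        for url in urls:
            crawler = runner.create_crawler(ScrapeaeaSpider)
            crawler.signals.connect(
                lambda item, response, spider: items.append(item),
                signal=signals.item_scraped)
            yield runner.crawl(crawler, url=url)  # wait for each crawl to finish
        reactor.stop()  # stop only after the last crawl is done

    crawl()
    reactor.run()  # the reactor is started exactly once for the whole batch
    return items

urls = ['https://doi.org/10.1257/app.20170442',
        'https://doi.org/10.1257/app.20180286']
print(scrape_all(urls))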

Thanks to Gallaecio for getting me thinking in the right direction.
