I want to collect articles from several websites. I have tried the code below, but I am not sure what to do next.
import newspaper
from newspaper import news_pool

lm_paper = newspaper.build('https://www.lemonde.fr/')
parisien_paper = newspaper.build('https://www.leparisien.fr/')
papers = [lm_paper, parisien_paper]
news_pool.set(papers, threads_per_source=2)  # 2 sources * 2 = 4 threads total
news_pool.join()
Here is how to use newspaper's news_pool. I did notice that the news_pool step is quite time-consuming: it takes a few minutes before the titles start printing. I believe this lag comes from the articles being downloaded in the background, and I am not sure how to speed that part up with Newspaper.
import newspaper
from newspaper import Config
from newspaper import news_pool
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Firefox/78.0'
config = Config()
config.browser_user_agent = USER_AGENT
config.request_timeout = 10
lm_paper = newspaper.build('https://www.lemonde.fr/', config=config, memoize_articles=False)
parisien_paper = newspaper.build('https://www.leparisien.fr/', config=config, memoize_articles=False)
french_papers = [lm_paper, parisien_paper]
# number of download threads used by the pool; adjust as needed
news_pool.config.number_threads = 2
# per-thread timeout in seconds; adjust as needed
news_pool.config.thread_timeout_seconds = 1
news_pool.set(french_papers)
news_pool.join()
for source in french_papers:
    for article_extract in source.articles:
        if article_extract:
            # the article was already downloaded by news_pool.join(); parse it
            # so fields like .title become available
            article_extract.parse()
            print(article_extract.title)
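One way to shorten the wait, as a rough sketch rather than a tested fix: since the delay comes from downloading every article the build step discovered, you can trim each source's article list before handing it to news_pool, so far fewer downloads happen. The cap of 20 articles per source below is an arbitrary number chosen for illustration.

import newspaper
from newspaper import Config, news_pool

config = Config()
config.browser_user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Firefox/78.0'
config.request_timeout = 10

lm_paper = newspaper.build('https://www.lemonde.fr/', config=config, memoize_articles=False)
parisien_paper = newspaper.build('https://www.leparisien.fr/', config=config, memoize_articles=False)
french_papers = [lm_paper, parisien_paper]

# keep only the first 20 articles per source so the pool has far less to download
# (20 is an arbitrary cap for illustration; adjust to your needs)
for source in french_papers:
    source.articles = source.articles[:20]

news_pool.set(french_papers, threads_per_source=2)  # 2 sources * 2 = 4 threads total
news_pool.join()

for source in french_papers:
    for article_extract in source.articles:
        article_extract.parse()
        print(article_extract.title)

Raising the thread counts can also help a little, but the dominant cost is simply the number of articles being fetched, so limiting the list usually makes the biggest difference.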