Newspaper (Python): get all CNN news URLs



For example, in this URL (https://edition.cnn.com/search/?q=%20news&size=10&from=5540&page=555)

in the HTML of the page I can find this link (HTML tag):

<div class="cnn-search__result-thumbnail">
<a href="https://www.cnn.com/2018/03/27/asia/north-korea-kim-jong-un-china-visit/index.html">
<img src="./Search CNN - Videos, Pictures, and News - CNN.com_files/180328104116china-xi-kim-story-body.jpg">
</a>
</div>

But with this code:

import newspaper

cnn_paper = newspaper.build(url, memoize_articles=False)
for article in cnn_paper.articles:
    print(article.url)

I can't find the news links. And these two search URLs

https://edition.cnn.com/search/?q=%20news&size=10&from=5540&page=555
https://edition.cnn.com/search/?q=%20news&size=10&from=5550&page=556

return the same links.

The search results are rendered dynamically from a JSON response fetched by a separate request: https://search.api.cnn.io/content?q=news&size=50&from=0

The maximum size is 50.

import requests

res = requests.get("https://search.api.cnn.io/content?q=news&size=50&from=0")
links = [x['url'] for x in res.json()['result']]
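To collect more than 50 links, you can page through the API with the `from` parameter. A minimal sketch using only the standard library (the endpoint and the `url`/`result` field names come from the answer above; the `page_url`/`fetch_all` helper names are mine):

```python
import json
import urllib.parse
import urllib.request

API = "https://search.api.cnn.io/content"

def page_url(query, size=50, start=0):
    """Build the URL for one page of search results."""
    params = urllib.parse.urlencode({"q": query, "size": size, "from": start})
    return f"{API}?{params}"

def fetch_all(query, pages=3, size=50):
    """Collect article URLs from several consecutive pages of the API."""
    links = []
    for page in range(pages):
        with urllib.request.urlopen(page_url(query, size, page * size)) as resp:
            data = json.load(resp)
        links.extend(item["url"] for item in data["result"])
    return links

# fetch_all("news", pages=2) would collect up to 100 URLs (needs network access)
```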

Does this do what you want?

from bs4 import BeautifulSoup
import urllib.request

resp = urllib.request.urlopen("https://edition.cnn.com/search/?q=%20news&size=10&from=5540&page=555")
soup = BeautifulSoup(resp, 'html.parser', from_encoding=resp.info().get_param('charset'))
for link in soup.find_all('a', href=True):
    print(link['href'])

Or maybe this?

from bs4 import BeautifulSoup
from bs4.dammit import EncodingDetector
import requests

resp = requests.get("https://edition.cnn.com/search/?q=%20news&size=10&from=5540&page=555")
http_encoding = resp.encoding if 'charset' in resp.headers.get('content-type', '').lower() else None
html_encoding = EncodingDetector.find_declared_encoding(resp.content, is_html=True)
encoding = html_encoding or http_encoding
soup = BeautifulSoup(resp.content, 'html.parser', from_encoding=encoding)
for link in soup.find_all('a', href=True):
    print(link['href'])
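Note that `find_all('a', href=True)` also returns navigation and footer links. One way to keep only article links is to match the dated URL pattern visible in the question's example (`/2018/03/27/...`); the regex below is an assumption based on that single example:

```python
import re

# Dated CNN article URLs look like https://www.cnn.com/YYYY/MM/DD/...
# (pattern assumed from the example link in the question)
ARTICLE_RE = re.compile(r"^https?://(?:www|edition)\.cnn\.com/\d{4}/\d{2}/\d{2}/")

def is_article(href):
    """True if the href matches the dated CNN article pattern."""
    return bool(ARTICLE_RE.match(href))
```

You would then filter with `if is_article(link['href'])` inside the loop above.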
