Why can't Beautiful Soup find the HTML class?



I am trying to scrape this website using requests and BeautifulSoup in Python:

I want to get all the information inside the article tag with class="ficha-jogo". When I run the code below, x is an empty list.

import requests
from bs4 import BeautifulSoup

url = "https://globoesporte.globo.com/rs/futebol/brasileirao-serie-a/jogo/25-05-2019/gremio-atletico-mg.ghtml"
r = requests.get(url)
soup = BeautifulSoup(r.content, "lxml")
x = soup.select(".ficha-jogo")
print(x)

I expect it to return all the tags contained in the article tag with class="ficha-jogo".

This page renders the article data dynamically with JavaScript, so it is not present in the initial HTML that requests downloads. You should try the selenium browser-automation library, which lets you scrape dynamically rendered (JS or AJAX) page data.
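To see why `select` returns an empty list, compare the server's raw HTML (where the JavaScript-injected article does not yet exist) with the HTML after a browser has run the scripts. A minimal sketch using made-up HTML snippets, not the real page markup:

```python
from bs4 import BeautifulSoup

# Hypothetical raw HTML as the server returns it: the article element
# is injected later by JavaScript, so the class is not in the markup yet.
raw_html = '<html><body><div id="app"></div></body></html>'

# Hypothetical HTML after a browser has executed the page's scripts.
rendered_html = '<html><body><article class="ficha-jogo">match data</article></body></html>'

print(BeautifulSoup(raw_html, "html.parser").select(".ficha-jogo"))       # empty list
print(BeautifulSoup(rendered_html, "html.parser").select(".ficha-jogo"))  # one match
```

This is exactly what happens with the question's code: the selector is fine, but requests only ever sees the first kind of HTML.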

from bs4 import BeautifulSoup
from selenium import webdriver

browser = webdriver.Chrome()
url = "https://globoesporte.globo.com/rs/futebol/brasileirao-serie-a/jogo/25-05-2019/gremio-atletico-mg.ghtml"
browser.get(url)
# page_source contains the HTML after the browser has run the page's JavaScript
soup = BeautifulSoup(browser.page_source, 'html.parser')
article = soup.find("article", {"class": "ficha-jogo"})
print(article.text)

Output:

GREPaulo Victor 1GOLLeonardo 6LADPedro Geromel 3ZADRodrigues 38ZAEJuninho Capixaba 29LAEMichel  5VOLMaicon 8VOLJean Pyerre 21MECThaciano 16MECEverton 11ATAAlisson 23ATADiego Tardelli 9ATAAndré 90ATAFelipe Vizeu 10ATACAMVictor 1GOLPatric 2LADLeonardo Silva 3ZADIgor Rabello 16ZAEFábio Santos 6LAEJosé Welison 14VOLNathan 23MECJair 88VOLCazares 10MECGeuvânio 49ATALuan 27MECBruninho 43MECRicardo Oliveira 9ATAChará 8ATARenato GaúchoTécnico4 - 3 - 3Esquema TáticoRodrigo SantanaTécnico4 - 4 - 2Esquema TáticoMostrar ficha completaReservasJúlio César 22GOLLéo Moura 2LADRafael Galhardo 42LADRomulo 13VOLDarlan 37VOLMontoya 20MECVico 15ATAPepê 25ATACleiton 40GOLIago Maidana 19ZADHulk 22LAEAdilson 21VOLVinícius 29MECTerans 20MECAlerrandro 44ATAMaicon 11ATAInformações sobre o jogoArena do GrêmioArena Desportiva

Download the selenium web driver for the Chrome browser:

http://chromedriver.chromium.org/downloads

Install the web driver for the Chrome browser:

https://christopher.su/2015/selenium-chromedriver-ubuntu/

Selenium tutorial:

https://selenium-python.readthedocs.io/

You can also do this with requests_html:

from requests_html import HTMLSession

session = HTMLSession()
url = "https://globoesporte.globo.com/rs/futebol/brasileirao-serie-a/jogo/25-05-2019/gremio-atletico-mg.ghtml"
r = session.get(url)
# render() executes the page's JavaScript in headless Chromium
# (Chromium is downloaded automatically on first use)
r.html.render()
article = r.html.find('.ficha-jogo', first=True).text
print(article)
