Scraping data with BS4 in Python, nested tables



I'm trying to scrape some data from baseball-reference.com. I've written code to pull data from other parts of the site, where the tables are encoded a bit more simply, but this particular set of pages is apparently more complicated. Here is the code I have so far.

from urllib.request import urlopen
from bs4 import BeautifulSoup

# Declare URL
test_url = 'https://www.baseball-reference.com/boxes/SLN/SLN201704020.shtml'
# Query the website and return the HTML
page = urlopen(test_url)
# Parse the HTML and store
soup = BeautifulSoup(page, 'lxml')
table = soup.find("div", {"class": "table_outer_container"})

This doesn't find the tables I want (on this particular page there are two tables, containing At-Bats, RBI, HR, Runs, etc.). I've tried a few other things, e.g.

table = soup.find_all("table", {"class": "sortable stats_table"})

but that didn't work either. I also tried reading the site with pandas, but had no luck, so if there is an easier pandas approach I'm open to that as well.

I know this code is convoluted and a bit ugly, but it gets the job done; you can refine it over time. The tables you want are embedded inside HTML comments on that page, which is why a plain soup.find() can't see them. The code below extracts each comment and parses it as its own document.

from bs4 import BeautifulSoup, Comment
import requests
r = requests.get('https://www.baseball-reference.com/boxes/SLN/SLN201704020.shtml')
soup = BeautifulSoup(r.text, 'lxml')
comments = soup.find_all(string=lambda text: isinstance(text, Comment))
for comment in comments:
    comment.extract()
    # After getting rid of comments we need Soup again
    another_soup = BeautifulSoup(str(comment), 'lxml')
    tables = another_soup.find_all('table', {"class": "sortable stats_table"})
    for table in tables:
        #Since we can't get id from table we are going to use table header as stat type.
        stat_type = ''
        for data in table.find('thead').find_all('tr'):
            stat_type = data.th.text.strip()
        #You only need batting.
        if stat_type != 'Batting': break
        for data in table.find('tbody').find_all('tr'):
            player = data.th.text.strip()
            stats = data.find_all('td')
            stat_ab = stats[0].text
            stat_r = stats[1].text
            stat_h = stats[2].text
            stat_rbi = stats[3].text
            # Table goes on
            print(player, stat_ab, stat_r, stat_h, stat_rbi)
        print('-------------------------------------')

The output is:

Kyle Schwarber LF 3 0 2 0
Kris Bryant 3B 4 0 0 0
Anthony Rizzo 1B 4 0 1 0
Ben Zobrist RF 3 1 0 0
Addison Russell SS 4 0 1 0
Jason Heyward CF 4 1 1 0
Willson Contreras C 4 1 2 3
Jon Lester P 2 0 0 0
Carl Edwards P 0 0 0 0
Koji Uehara P 0 0 0 0
Tommy La Stella PH 1 0 0 0
Pedro Strop P 0 0 0 0
Jon Jay PH 1 0 0 0
Mike Montgomery P 0 0 0 0
Javier Baez 2B 4 0 1 0
-------------------------------------
Dexter Fowler CF 4 1 1 0
Aledmys Diaz SS 5 0 2 0
Matt Carpenter 1B 4 0 1 1
Jhonny Peralta 3B 4 0 1 0
Seung-hwan Oh P 0 0 0 0
Jose Martinez PH 1 1 1 0
Yadier Molina C 3 0 2 0
Stephen Piscotty RF 3 1 1 0
Jedd Gyorko 2B 2 0 0 0
Kolten Wong PH-2B 2 0 0 0
Randal Grichuk LF 4 1 2 3
Carlos Martinez P 3 0 0 0
Greg Garcia 3B 0 0 0 0
-------------------------------------
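
If you would rather end up with pandas DataFrames (as mentioned in the question), pandas.read_html() can parse the tables straight out of the comment text. Here is a minimal sketch of that approach; it assumes pandas and lxml are installed, and it keeps only the tables whose header column is 'Batting', which matches the header row shown in the output further below:

import requests
import pandas as pd
from io import StringIO
from bs4 import BeautifulSoup, Comment

url = 'https://www.baseball-reference.com/boxes/SLN/SLN201704020.shtml'
res = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})
soup = BeautifulSoup(res.text, 'lxml')

frames = []
for comment in soup.find_all(string=lambda text: isinstance(text, Comment)):
    if 'stats_table' in comment:
        # read_html returns one DataFrame per <table> found in the HTML string
        frames.extend(pd.read_html(StringIO(str(comment))))

# Keep only the batting tables, identified by their 'Batting' header column
batting = [df for df in frames if 'Batting' in df.columns]
for df in batting:
    print(df.head())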

Here is another way to get the data from both tables:

import requests
from bs4 import BeautifulSoup, Comment
res = requests.get("https://www.baseball-reference.com/boxes/SLN/SLN201704020.shtml",
                   headers={"User-Agent": "Mozilla/5.0"})
soup = BeautifulSoup(res.text, 'lxml')
# The box-score tables are inside HTML comments, so parse each comment separately
for comment in soup.find_all(string=lambda text: isinstance(text, Comment)):
    data = BeautifulSoup(comment, "lxml")
    # Grab every row from the two teams' batting tables by their ids
    for items in data.select("#ChicagoCubsbatting tr, #StLouisCardinalsbatting tr"):
        tds = ' '.join([' '.join(item.text.split()) for item in items.select("th,td")])
        print(tds)

Partial output:

Batting AB R H RBI BB SO PA BA OBP SLG OPS Pit Str WPA aLI WPA+ WPA- RE24 PO A Details
Kyle Schwarber LF 3 0 2 0 0 1 4 .667 .750 1.000 1.750 20 10 0.170 1.75 0.196 -0.026 1.1 2 0 2B,HBP
Kris Bryant 3B 4 0 0 0 0 3 4 .000 .000 .000 .000 19 13 -0.260 2.31 0.000 -0.260 -1.6 0 0 
Anthony Rizzo 1B 4 0 1 0 0 1 4 .250 .250 .250 .500 14 8 -0.214 2.74 0.035 -0.249 -0.9 5 0
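
Note that the CSS selector above hard-codes the two table ids for this particular game (#ChicagoCubsbatting and #StLouisCardinalsbatting). If you want the same script to work for any box score, one option is to match any table whose id ends in "batting". This is a sketch that assumes the "<TeamName>batting" id pattern holds on other box-score pages as well:

import re
import requests
from bs4 import BeautifulSoup, Comment

res = requests.get("https://www.baseball-reference.com/boxes/SLN/SLN201704020.shtml",
                   headers={"User-Agent": "Mozilla/5.0"})
soup = BeautifulSoup(res.text, 'lxml')

for comment in soup.find_all(string=lambda text: isinstance(text, Comment)):
    data = BeautifulSoup(comment, "lxml")
    # Match any batting table, regardless of the team name in the id
    for table in data.find_all('table', id=re.compile(r'batting$')):
        for row in table.find_all('tr'):
            cells = ' '.join(' '.join(cell.text.split()) for cell in row.find_all(['th', 'td']))
            print(cells)
        print('-------------------------------------')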
