Can't get my web scraper to scrape all the data I need



I'm working on a project where I'm trying to scrape data from this Wikipedia page. I want the column with the years (which happens to be a <th>) and the fourth column, "Walt Disney Parks and Resorts".

Code:

from urllib.request import urlopen
from bs4 import BeautifulSoup
html = urlopen("https://en.wikipedia.org/wiki/The_Walt_Disney_Company#Revenues")
bsObj = BeautifulSoup(html, "html.parser")
t = open("scrape_project.txt", "w")
year = bsObj.find("table", {"class":"wikitable"}).tr.next_sibling.next_sibling.th
money = bsObj.find("table", {"class":"wikitable"}).td.next_sibling.next_sibling.next_sibling.next_sibling
for year_data in year:
    year.sup.clear()
    print(year.get_text())
for revenue in money:
    print(money.get_text())

t.close()

Right now, when I run it through the terminal, all that prints is 1991 (twice) and 2,794. I need it to print all the years and the associated revenue for Walt Disney Parks and Resorts. I'm also trying to get it to write to the file "scrape_project.txt".

Any help would be appreciated!

from urllib.request import urlopen
from bs4 import BeautifulSoup
from pprint import pprint as pp

html = urlopen("https://en.wikipedia.org/wiki/The_Walt_Disney_Company#Revenues")
soup = BeautifulSoup(html, "html.parser")  # specify the parser explicitly
table = soup.find("table", {"class": "wikitable"})
# get all rows, skipping the first (empty) one
data = table.select("tr")[1:]
# the year is in the th with a scope attribute; text[:4] drops footnote markers
years = [row.select("th[scope]")[0].text[:4] for row in data]
# Walt Disney Parks and Resorts is the third td in each row
rec = [row.select("td")[2].text for row in data]
pp(years)
pp(rec)

That will give you the data:

['1991',
 '1992',
 '1993',
 '1994',
 '1995',
 '1996',
 '1997',
 '1998',
 '1999',
 '2000',
 '2001',
 '2002',
 '2003',
 '2004',
 '2005',
 '2006',
 '2007',
 '2008',
 '2009',
 '2010',
 '2011',
 '2012',
 '2013',
 '2014']
['2,794.0',
 '3,306',
 '3,440.7',
 '3,463.6',
 '3,959.8',
 '4,142[Rev 3]',
 '5,014',
 '5,532',
 '6,106',
 '6,803',
 '6,009',
 '6,691',
 '6,412',
 '7,750',
 '9,023',
 '9,925',
 '10,626',
 '11,504',
 '10,667',
 '10,761',
 '11,797',
 '12,920',
 '14,087',
 '15,099']

I sliced the revision markers off the years with text[:4]; if you want to keep that information, don't slice. If you also want to strip them from the money values, i.e. drop the "Rev 3" from '4,142[Rev 3]', you can use a regular expression:

import re
m = re.compile(r"\d+,\d+")
rec = [m.search(row.select("td")[2].text).group() for row in data]

That will give you:

['2,794',
 '3,306',
 '3,440',
 '3,463',
 '3,959',
 '4,142',
 '5,014',
 '5,532',
 '6,106',
 '6,803',
 '6,009',
 '6,691',
 '6,412',
 '7,750',
 '9,023',
 '9,925',
 '10,626',
 '11,504',
 '10,667',
 '10,761',
 '11,797',
 '12,920',
 '14,087',
 '15,099']
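As a quick check, the pattern strips the footnote marker from the one value that carries it:

```python
import re

# The raw cell text as scraped, footnote marker included.
raw = "4,142[Rev 3]"
m = re.compile(r"\d+,\d+")    # digits, comma, digits
print(m.search(raw).group())  # 4,142
```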

There must be a cleaner way to get in there, but this will do.

from urllib.request import urlopen
from bs4 import BeautifulSoup

html = urlopen("https://en.wikipedia.org/wiki/The_Walt_Disney_Company#Revenues")
soup = BeautifulSoup(html, "html.parser")
table = soup.find("table", {"class": "wikitable"})
rows = table.findAll("th", {"scope": "row"})
for each in rows:
    # year (first four characters of the th), then the sixth sibling cell's text
    string = (each.text[:4] + ", $" +
              each.next_sibling.next_sibling.next_sibling.next_sibling.next_sibling.next_sibling.text)
    print(string)
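The question also asked for the results to end up in "scrape_project.txt". Assuming the years and rec lists line up one-to-one as in the first answer (the sample values below stand in for the full scraped lists), one way to pair them and write one row per line:

```python
# Sample values standing in for the full scraped lists.
years = ["1991", "1992", "1993"]
rec = ["2,794", "3,306", "3,440"]

# zip pairs each year with its revenue; write one "year, $revenue" line per row.
with open("scrape_project.txt", "w") as t:
    for year, revenue in zip(years, rec):
        t.write(year + ", $" + revenue + "\n")
```

Using `with` also closes the file automatically, so the separate t.close() is no longer needed.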
