I have just started learning web scraping in Python, and I want to learn how to scrape data from the NFL site to list all the players and their stats, but I am running into this error with BeautifulSoup.
import requests
from bs4 import BeautifulSoup

url = "https://www.pro-football-reference.com/years/2021/passing.htm"
r = requests.get(url)
soup = BeautifulSoup(r.text, 'html.parser')
league_table = soup.find('table', class_ = 'per_match_toggle sortable stats_table')
for name in league_table.find_all('tbody'):
    rows = name.find_all('tr')
    for row in rows:
        name = row.find('td', class_ = 'left').text.strip()
        yards = row.find_all('td', class_ = 'right')[7].text
        touchdowns = row.find_all('td', class_ = 'right')[8].text
        print("Name " + name + " Yards " + yards + " Touchdowns " + touchdowns)
Error:

name = row.find('td', class_ = 'left').text.strip()

This happens because find() can return None, which obviously has no text attribute. That occurs when the element you are searching for does not exist, or when the wrong arguments are passed to the search function. You should wrap the offending part in a try-except clause or an if-else so that such cases are handled.
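A minimal sketch of the if-else guard, using a hypothetical two-row snippet (not the real NFL page) whose second row is a header made of <th> tags, so find('td') returns None there:

```python
from bs4 import BeautifulSoup

html = """
<table><tbody>
  <tr><td class="left">Tom Brady</td></tr>
  <tr><th class="left">Player</th></tr>
</tbody></table>
"""
soup = BeautifulSoup(html, "html.parser")
for row in soup.find_all("tr"):
    cell = row.find("td", class_="left")
    if cell is not None:          # guard before touching .text
        print(cell.text.strip())  # only real data rows are printed
```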
This happens because, as you will notice, there is a header row after Jameis Winston. That <tr> is made up of <th> tags, not <td> tags. So the loop reaches that row, you call .find('td'), the row contains no such tag, and it returns None. Then you try to get text from it, i.e. .text on None.

So you need either try/except, as the previous answer suggests, or logic that only accepts rows containing <td> tags.
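The try/except variant can be sketched the same way, again on a hypothetical snippet where the second row is an embedded header, so .find('td') returns None and .text raises AttributeError:

```python
from bs4 import BeautifulSoup

html = """
<table><tbody>
  <tr><td class="left">Justin Herbert</td></tr>
  <tr><th class="left">Player</th></tr>
</tbody></table>
"""
soup = BeautifulSoup(html, "html.parser")
names = []
for row in soup.find_all("tr"):
    try:
        names.append(row.find("td", class_="left").text.strip())
    except AttributeError:   # raised because find() returned None
        continue             # skip the embedded header row
print(names)
```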
Personally, I would just use pandas to scrape the table, drop the header rows, and then iterate over the rows.
import pandas as pd

url = "https://www.pro-football-reference.com/years/2021/passing.htm"
df = pd.read_html(url)[0]
df = df[df['Player'].ne('Player')]
for idx, row in df.iterrows():
    name = row['Player']
    yards = row['Yds']
    touchdowns = row['TD']
    print("Name " + name + " Yards " + yards + " Touchdowns " + touchdowns)