How to scrape a paginated table with BeautifulSoup and store the results in a CSV



I want to scrape https://www.airport-data.com/manuf/Reims.html, iterate over all of the results, and extract them to AircraftListing.csv.

The code runs without errors, but the results are populated incorrectly, and not all records are extracted from the web page into the .csv file.

How can I export all of the Reims aircraft records to AircraftListing.csv?

import requests
from bs4 import BeautifulSoup
import csv
root_url = "https://www.airport-data.com/manuf/Reims.html"
html = requests.get(root_url)
soup = BeautifulSoup(html.text, 'html.parser')
paging = soup.find("table",{"class":"table table-bordered table-condensed"}).find_all("td")
start_page = paging[1].text
last_page = paging[len(paging)-2].text

outfile = open('AircraftListing.csv','w', newline='')
writer = csv.writer(outfile)
writer.writerow(["Tail_Number", "Year_Maker_Model", "C_N","Engines", "Seats", "Location"])

pages = list(range(1,int(last_page)+1))
for page in pages:
    url = 'https://www.airport-data.com/manuf/Reims:%s.html' % (page)
    html = requests.get(url)
    soup = BeautifulSoup(html.text, 'html.parser')
    print('https://www.airport-data.com/manuf/Reims:%s' % (page))
    product_name_list = soup.find("table", {"class": "table table-bordered table-condensed"}).find_all("td")
    # Each row has 6 elements in it.
    # Loop through every sixth element. (The first element of each row)
    # Get all the other elements in the row by adding to the index of the first.
    for i in range(int(len(product_name_list)/6)):
        Tail_Number = product_name_list[(i*6)].get_text('td')
        Year_Maker_Model = product_name_list[(i*6)+1].get_text('td')
        C_N = product_name_list[(i*6)+2].get_text('td')
        Engines = product_name_list[(i*6)+3].get_text('td')
        Seats = product_name_list[(i*6)+4].get_text('td')
        Location = product_name_list[(i*6)+5].get_text('td')
        writer.writerow([Tail_Number, Year_Maker_Model, C_N, Engines, Seats, Location])
outfile.close()
print('Done')

To improve the code, especially the part with the for loop, try to be more specific in your selection. Select <tr> instead of <td>; this minimizes the work done in each iteration and is also more generic.

for row in soup.select('table tbody tr'):
    writer.writerow([c.text if c.text else '' for c in row.select('td')])

Example

import requests, csv
from bs4 import BeautifulSoup

url = 'https://www.airport-data.com/manuf/Reims.html'

with open('AircraftListing.csv', "w", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["Tail_Number", "Year_Maker_Model", "C_N", "Engines", "Seats", "Location"])
    while True:
        html = requests.get(url)
        soup = BeautifulSoup(html.text, 'html.parser')
        for row in soup.select('table tbody tr'):
            writer.writerow([c.text if c.text else '' for c in row.select('td')])

        if soup.select_one('li.active + li a'):
            url = soup.select_one('li.active + li a')['href']
        else:
            break

Output

Tail Number,Year Maker Model,C/N,Engines,Seats,Location
0008,1987 Reims F406 Caravan II,F406-0008,2,14.0,France
0010,1987 Reims F406 Caravan II,F406-0010,2,12.0,France
13701,0000 Reims FTB337G,0002,2,4.0,Portugal
13705,0000 Reims FTB337G,0016,2,4.0,Portugal
13710,0000 Reims FTB337G,0011,2,4.0,Portugal
...,...,...,...,...,...
ZS-OHP,0000 Reims FR172J Reims Rocket,0496,1,4.0,South Africa
ZS-OTT,1989 Reims F406 Caravan II,F406-0040,2,12.0,South Africa
ZS-OXS,0000 Reims FR172J Reims Rocket,0418,1,4.0,South Africa
ZS-SSC,1988 Reims BPSW,F406-0032,2,12.0,South Africa
ZS-SSE,1990 Reims F406 Caravan II,F406-0043,2,12.0,South Africa

Pandas alternative

Another way to iterate over all 51 pages is to use pandas.read_html to grab the tables, append them to a list, concat() the DataFrames from all of the pages, and save them to a CSV file containing all 5085 records.

Example

import requests
import pandas as pd
from bs4 import BeautifulSoup

url = 'https://www.airport-data.com/manuf/Reims.html'
data = []

while True:
    #print(url)
    html = requests.get(url)
    soup = BeautifulSoup(html.text, 'html.parser')
    data.append(pd.read_html(soup.select_one('table').prettify())[0])
    if soup.select_one('li.active + li a[href]'):
        url = soup.select_one('li.active + li a')['href']
    else:
        break

df = pd.concat(data)
df.to_csv('AircraftListing.csv', index=False)

There are better ways to do this, but here is what is being used in lines 32-40:

# Each row has 6 elements in it.
# Loop through every sixth element. (The first element of each row)
# Get all the other elements in the row by adding to index of the first.
for i in range(int(len(product_name_list)/6)):
    Tail_Number = product_name_list[(i*6)].get_text('td')
    Year_Maker_Model = product_name_list[(i*6)+1].get_text('td')
    C_N = product_name_list[(i*6)+2].get_text('td')
    Engines = product_name_list[(i*6)+3].get_text('td')
    Seats = product_name_list[(i*6)+4].get_text('td')
    Location = product_name_list[(i*6)+5].get_text('td')
    writer.writerow([Tail_Number, Year_Maker_Model, C_N, Engines, Seats, Location])

The comments explain what is going on.
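If you would rather avoid the manual index arithmetic, the same row-chunking can be sketched with Python's zip-over-one-iterator idiom. This is only an illustration: the cell values below are hypothetical stand-ins for the scraped <td> texts, not live data from the site.

```python
# Hypothetical flat list of cell texts, six per table row,
# standing in for the scraped <td> values:
cells = [
    "0008", "1987 Reims F406 Caravan II", "F406-0008", "2", "14.0", "France",
    "0010", "1987 Reims F406 Caravan II", "F406-0010", "2", "12.0", "France",
]

# zip over six references to the SAME iterator consumes the list
# in non-overlapping chunks of six -- one tuple per table row.
rows = list(zip(*[iter(cells)] * 6))

for tail, model, cn, engines, seats, location in rows:
    print(tail, location)
```

Each tuple can then be written with writer.writerow(row) directly, with no index math to get wrong.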
