Python web scraping Beautiful Soup list error



Hi, I'm trying to download images from BGS borehole scans that span multiple pages, e.g. http://scans.bgs.ac.uk/sobi_scans/boreholes/795279/images/10306199.html and http://scans.bgs.ac.uk/sobi_scans/boreholes/18913699/images/18910430.html

I managed to download the first two pages of the first example, but when I reach the last page I get the error below. On that page the NextPage variable should be None, because the tag is not present in the web page. At that point I want to move on to the next location, which I haven't added yet, but I have an excel list of URLs. The code is based on https://automatetheboringstuff.com/2e/chapter12/

Traceback (most recent call last):
  File "C:/Users/brentond/Documents/Python/Pdf-BGS Scans.py", line 73, in <module>
    NextPage = soup.select('a[title="Next page"]')[0]
IndexError: list index out of range

Download BGS borehole scans from an excel list of URLs:

import pyautogui
import pyperclip
import webbrowser
import PyPDF2
import os
import openpyxl
import pdfkit
import requests
import bs4

# Define path of excel file
from requests import Response
path = r'C:\Users\brentond\Documents\TA2'
# Change directory to target location
os.chdir(path)
# Create workbook object
wb = openpyxl.load_workbook('BGS Boreholes.xlsm')
# Create worksheet object
ws = wb.get_sheet_by_name('Open')
# Assign URL to variable
StartURL = ws['A2'].value
URL = StartURL
NextURL = "NextURL"
# Assign BH ID to variable
Location = ws['B2'].value
PageNo = 0
while NextURL is not None:
    # Download URL
    res = requests.get(URL)  # type: Response
    res.raise_for_status()
    # Create beautiful soup object
    soup = bs4.BeautifulSoup(res.text, 'html.parser')
    # Find the URL of the borehole scan image.
    Scan = soup.select('#image_content img')
    # Check on HTML elements
    Address = soup.select('#image')
    AddressText = Address[0].get('src')
    print(AddressText)
    print()
    if Scan == []:
        print('Could not find scan image.')
    else:
        ScanUrl = Scan[0].get('src')
        # Download the image.
        print('Downloading image %s...' % (ScanUrl))
        res = requests.get(ScanUrl)
        res.raise_for_status()
        # Save the image to path
        imageFile = open(os.path.join(path, Location) + "-Page" + str(PageNo) + ".png", 'wb')
        for chunk in res.iter_content(100000):
            imageFile.write(chunk)
        imageFile.close()
    # Find URL for next page
    PageNo = PageNo + 1
    NextPage = soup.select('a[title="Next page"]')[0]
    if NextPage == []:
        continue
    else:
        print(NextPage)
        NextURL = NextPage.get('href')
    URL = NextURL
    print(NextURL)
print('Done.')
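The outer loop over the excel list of locations, which the question mentions but has not written yet, might be sketched like this (a stdlib-only sketch; `crawl_location` and the `fetch` callback are hypothetical stand-ins for the request-and-parse step above):

```python
def crawl_location(start_url, fetch):
    """Follow 'Next page' links from start_url until fetch returns None."""
    visited = []
    url = start_url
    while url is not None:
        visited.append(url)
        url = fetch(url)  # fetch(url) -> the next page's URL, or None on the last page
    return visited

# Hypothetical stand-in for the real request + BeautifulSoup step:
pages = {'page1': 'page2', 'page2': 'page3', 'page3': None}
print(crawl_location('page1', pages.get))  # ['page1', 'page2', 'page3']
```

Running this once per row of the spreadsheet would visit every page of every location.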

You cannot select the first element of something that does not exist. You can either verify that the element exists first with find/find_all, or use try/except to catch the IndexError and change the script's behaviour when the error occurs.

So when the anchor is absent, soup.select('a[title="Next page"]') returns an empty list. Index zero therefore does not exist, and an IndexError is raised.
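The try/except route can be shown without any network access (a sketch; `next_href` and the dict stand-ins for tags are hypothetical — a dict's .get mimics Tag.get):

```python
def next_href(matches):
    """Hypothetical helper: href of the first 'Next page' match, or None."""
    try:
        return matches[0].get('href')
    except IndexError:  # select() returned an empty list: no anchor on this page
        return None

# dicts stand in for bs4 Tag objects, since both expose .get()
print(next_href([{'href': '/page2'}]))  # /page2
print(next_href([]))                    # None
```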

The simplest fix is to change

NextPage = soup.select('a[title="Next page"]')[0]
if NextPage == []:
    continue
else:
    print(NextPage)
    NextURL = NextPage.get('href')

to:

NextPage = soup.select('a[title="Next page"]')
if not NextPage:
    continue
else:
    NextPage = NextPage[0]
    print(NextPage)
    NextURL = NextPage.get('href')

or:

NextPage = soup.select('a[title="Next page"]')
if not NextPage:
    continue
else:
    print(NextPage[0])
    NextURL = NextPage[0].get('href')

depending on your personal preference.
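An alternative worth noting (not in the snippets above, but standard BeautifulSoup): `select_one` returns the first match or None, so the empty-list check disappears entirely:

```python
import bs4

html = '<div><a title="Next page" href="/page2">Next</a></div>'
soup = bs4.BeautifulSoup(html, 'html.parser')

# select_one returns the first matching Tag, or None when nothing matches
next_page = soup.select_one('a[title="Next page"]')
next_url = next_page.get('href') if next_page is not None else None
print(next_url)  # /page2

# On the last page the anchor is missing and select_one simply yields None
last = bs4.BeautifulSoup('<div>no link here</div>', 'html.parser')
print(last.select_one('a[title="Next page"]'))  # None
```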
