How to get the text element of a hyperlink from a web page using Python



I'm scraping web data and only need to return the text element associated with a hyperlink. The hyperlink and the text are unknown in advance; only the class is known. Here is sample HTML:

<div class="a-column SsCol" role = "gridcell">
<h3 class="a-spacing-none SsName">
<span class="a-size-medium a-text-bold">
<a href="/gp/aag/main/ref=sm_name_2?ie=UTF8&ids=15112acd">Direct Name</a>
</span>
</h3>
</div>

Alternatively, the desired text may be associated with an image rather than a hyperlink:

<div class="a-column SsCol" role = "gridcell">
<h3 class="a-spacing-none SsName">
<img alt="Direct Name" src="https://images-hosted.com//01x-j.gi">
</h3>
</div>

I have tried the following:

from lxml import html
import requests
response = requests.get('https://www.exampleurl.com/')
doc = html.fromstring(response.content)
text1 = doc.xpath("//*[contains(@class, 'SsName')]/text()")

I'm using lxml rather than BeautifulSoup, but I'm open to switching if that is what's recommended. The desired result is:

print(text1)
['Direct Name']

//*[contains(@alt, '')]/@alt finds every tag that has an alt attribute. This XPath is actually adapted from the query used to get the href attribute from a tag. You can also restrict it to a specific tag, as text2 shows below:

from lxml import html
text = """
<div class="a-column SsCol" role = "gridcell">
<h3 class="a-spacing-none SsName">
<span class="a-size-medium a-text-bold">
<a href="/gp/aag/main/ref=sm_name_2?ie=UTF8&ids=15112acd">Direct Name</a>
</span>
</h3>
</div>
<div class="a-column SsCol2" role = "gridcell">
<h3 class="a-spacing-none SsName">
<img alt="Direct Name" src="https://images-hosted.com//01x-j.gi">
</h3>
</div>
"""
doc = html.fromstring(text)
# Every alt value in the document (any tag that carries an alt attribute)
text1 = doc.xpath("//*[contains(@alt, '')]/@alt")
print(text1)
# Limit the search to the div whose class contains 'a-column SsCol2'
text2 = doc.xpath("//div[contains(@class, 'a-column SsCol2')]//*[contains(@alt, '')]/@alt")
print(text2)
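
The same idea can be extended to cover the first variant in the question, where the wanted text sits inside an <a> rather than an img alt. This is only a sketch against the question's sample markup (the SsName class is taken from there; adjust it for the real page); the XPath union (|) returns the link text where a link exists and the alt text otherwise:

from lxml import html

# Sketch: same sample markup as in the question, one block with an <a> and one with an <img alt=...>
sample = """
<div class="a-column SsCol" role="gridcell">
<h3 class="a-spacing-none SsName">
<span class="a-size-medium a-text-bold">
<a href="/gp/aag/main/ref=sm_name_2?ie=UTF8&ids=15112acd">Direct Name</a>
</span>
</h3>
</div>
<div class="a-column SsCol2" role="gridcell">
<h3 class="a-spacing-none SsName">
<img alt="Direct Name" src="https://images-hosted.com//01x-j.gi">
</h3>
</div>
"""
doc = html.fromstring(sample)
# The union picks up the <a> text where present and falls back to the img alt
names = doc.xpath("//h3[contains(@class, 'SsName')]//a/text()"
                  " | //h3[contains(@class, 'SsName')]//img/@alt")
print(names)
# ['Direct Name', 'Direct Name']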

I would definitely give Beautiful Soup a try:

from bs4 import BeautifulSoup
# html_doc holds your page markup; the examples below use the "Dormouse's story" sample from the Beautiful Soup docs
soup = BeautifulSoup(html_doc, 'html.parser')

Here are some common ways to navigate that data structure:

soup.title
# <title>The Dormouse's story</title>
soup.title.name
# u'title'
soup.title.string
# u'The Dormouse's story'
soup.title.parent.name
# u'head'
soup.p
# <p class="title"><b>The Dormouse's story</b></p>
soup.p['class']
# u'title'
soup.a
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
soup.find_all('a')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
soup.find(id="link3")
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>

One common task is extracting all of the URLs found within a page's <a> tags:

for link in soup.find_all('a'):
    print(link.get('href'))
# http://example.com/elsie
# http://example.com/lacie
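
Since the question is after the text of the link rather than its URL, the same loop can collect the anchor text instead. A small sketch (the output assumes the docs' "Dormouse's story" sample used above):

for link in soup.find_all('a'):
    print(link.get_text())
# Elsie
# Lacie
# Tillie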

Another common task is extracting all of the text from a page:

print(soup.get_text())
# The Dormouse's story
#
# The Dormouse's story
#
# Once upon a time there were three little sisters; and their names were
# Elsie,
# Lacie and...

If you need anything else, you may want to look at the Beautiful Soup documentation: https://www.crummy.com/software/BeautifulSoup/bs4/doc/
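
Applied to the specific markup in the question, a rough Beautiful Soup sketch could look like the following. It assumes the class names from the question's samples, takes the <a> text when a link is present, and falls back to the img alt otherwise:

from bs4 import BeautifulSoup

# Sketch only: same sample markup and class names as in the question
html_doc = """
<div class="a-column SsCol" role="gridcell">
<h3 class="a-spacing-none SsName">
<span class="a-size-medium a-text-bold">
<a href="/gp/aag/main/ref=sm_name_2?ie=UTF8&ids=15112acd">Direct Name</a>
</span>
</h3>
</div>
<div class="a-column SsCol2" role="gridcell">
<h3 class="a-spacing-none SsName">
<img alt="Direct Name" src="https://images-hosted.com//01x-j.gi">
</h3>
</div>
"""
soup = BeautifulSoup(html_doc, 'html.parser')

names = []
for h3 in soup.find_all('h3', class_='SsName'):
    link = h3.find('a')
    img = h3.find('img')
    if link is not None:
        names.append(link.get_text(strip=True))
    elif img is not None and img.get('alt'):
        names.append(img['alt'])

print(names)
# ['Direct Name', 'Direct Name']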
