BeautifulSoup: is there a way to set the starting point of the find_all() method?



Given a soup, I need to get n elements with class="foo".

This can be done like this:

soup.find_all(class_='foo', limit=n)

However, this is slow, because the elements I am trying to find sit near the very bottom of the document.
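To make the idea of a "starting point" concrete, what I am picturing is something along the lines of bs4's find_all_next(): first locate an element that sits just before the block of interest, then only search what follows it. A rough sketch of the idea (the 'pagination' class below is made up, not something taken from the real page):

# Hypothetical anchor: some element that appears right before the target block
anchor = soup.find(class_='pagination')
if anchor is not None:
    # find_all_next() only searches tags that come after the anchor in the document
    results = anchor.find_all_next(class_='foo', limit=n)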

Here is my code:
main_num = 1
main_page = 'https://rawdevart.com/search/?page={p_num}&ctype_inc=0'
# get_soup returns bs4 soup of a link
main_soup = get_soup(main_page.format(p_num=main_num))

# get_last_page returns the number of pages which is 64
last_page_num = get_last_page(main_soup)
for sub_num in range(1, last_page_num + 1):
    sub_soup = get_soup(main_page.format(p_num=sub_num))
    arr_links = sub_soup.find_all(class_='head')
    # process arr_links
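get_soup and get_last_page are small helpers that are not shown above; roughly, get_soup just fetches the URL and parses the HTML, something like:

import requests
from bs4 import BeautifulSoup

def get_soup(url):
    # Fetch the page and parse the HTML into a bs4 soup
    return BeautifulSoup(requests.get(url).text, 'html.parser')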

head is the class of the a tags on this page, so I assume you want to grab all of the follow links and keep moving through all of the search pages.

You could do it like this:

import requests
from bs4 import BeautifulSoup

base_url = "https://rawdevart.com"

# Read the total number of pages from the summary <small> element on page 1
total_pages = BeautifulSoup(
    requests.get(f"{base_url}/search/?page=1&ctype_inc=0").text,
    "html.parser",
).find(
    "small",
    class_="d-block text-muted",
).getText().split()[2]

# Build the full list of search page URLs
pages = [
    f"{base_url}/search/?page={n}&ctype_inc=0"
    for n in range(1, int(total_pages) + 1)
]

all_follow_links = []
for page in pages[:2]:
    r = requests.get(page).text
    # Collect every <a class="head"> link on the current search page
    all_follow_links.extend(
        [
            f'{base_url}{a["href"]}' for a in
            BeautifulSoup(r, "html.parser").find_all("a", class_="head")
        ]
    )

print(all_follow_links)

Output:

https://rawdevart.com/comic/my-death-flags-show-no-sign-ending/
https://rawdevart.com/comic/tsuki-ga-michibiku-isekai-douchuu/
https://rawdevart.com/comic/im-not-a-villainess-just-because-i-can-control-darkness-doesnt-mean-im-a-bad-person/
https://rawdevart.com/comic/tensei-kusushi-wa-isekai-wo-meguru/
https://rawdevart.com/comic/iceblade-magician-rules-over-world/
https://rawdevart.com/comic/isekai-demo-bunan-ni-ikitai-shoukougun/
https://rawdevart.com/comic/every-class-has-been-mass-summoned-i-strongest-under-disguise-weakest-merchant/
https://rawdevart.com/comic/isekai-onsen-ni-tensei-shita-ore-no-kounou-ga-tondemosugiru/
https://rawdevart.com/comic/kubo-san-wa-boku-mobu-wo-yurusanai/
https://rawdevart.com/comic/gabriel-dropout/
and more ...

Note: to get all of the pages, just remove the slicing from this line:

for page in pages[:2]:
# the rest of the loop body

so that it looks like this:

for page in pages:
# the rest of the loop body
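On the original question about speeding up find_all(): if the slow part is really the parsing, a SoupStrainer passed via parse_only tells BeautifulSoup to only build the tags you care about in the first place. A small sketch along those lines:

import requests
from bs4 import BeautifulSoup, SoupStrainer

base_url = "https://rawdevart.com"

# Only parse <a class="head"> tags instead of building the whole tree
only_head_links = SoupStrainer("a", class_="head")
r = requests.get(f"{base_url}/search/?page=1&ctype_inc=0").text
soup = BeautifulSoup(r, "html.parser", parse_only=only_head_links)
links = [f'{base_url}{a["href"]}' for a in soup.find_all("a", class_="head")]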
