Problem scraping every next page with BeautifulSoup and webdriver



I am trying to scrape all of the job links from https://www.vietnamworks.com/tim-viec-lam/tat-ca-viec-lam using BeautifulSoup and Selenium.

The problem is that I can only scrape the links on the first page and don't know how to get the links from each of the following pages.
Here is the code I have tried:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support import expected_conditions as EC
import time
import requests
from bs4 import BeautifulSoup
import array as arr
import pandas as pd
# The first line imports the web driver, the second imports Chrome Options
#-----------------------------------#
# Chrome options
all_link = []
chrome_options = Options()
chrome_options.add_argument('--ignore-certificate-errors')
chrome_options.add_argument('--incognito')
chrome_options.add_argument('--window-size=1920x1080')
chrome_options.add_argument('--headless')
#-----------------------------------#
driver = webdriver.Chrome(options=chrome_options, executable_path="C:/webdriver/chromedriver.exe")

# Open the url
url = "https://www.vietnamworks.com/tim-viec-lam/tat-ca-viec-lam"
driver.get(url)
time.sleep(2)
#-----------------------------------#
page_source = driver.page_source
soup = BeautifulSoup(page_source, "html.parser")

# Collect the job links on the current page
block_job_list = soup.find_all("div", {"class": "d-flex justify-content-center align-items-center logo-area-wrapper logo-border"})
for i in block_job_list:
    link = i.find("a")
    all_link.append("https://www.vietnamworks.com/" + link.get("href"))

Since your problem is paging through the results, this code will help you do that. As mentioned, insert your scraping code inside the while loop.

from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
import time
from webdriver_manager.chrome import ChromeDriverManager   # use pip install webdriver_manager if not installed

option = webdriver.ChromeOptions()
CDM = ChromeDriverManager()
driver = webdriver.Chrome(CDM.install(), options=option)
url = 'https://www.vietnamworks.com/tim-viec-lam/tat-ca-viec-lam'
driver.get(url)
time.sleep(3)
page_num = 1
links = []
driver.execute_script("window.scrollTo(0, document.body.scrollHeight/2);")

while True:
    # create the soup element here so that it can get the page source of every page
    # sample scraping of the URLs of the jobs posted
    for i in driver.find_elements_by_class_name('job-title '):
        links.append(i.get_attribute('href'))
    # moves to the next page
    try:
        print(f'On page {str(page_num)}')
        print()
        page_num += 1
        driver.find_element_by_link_text(str(page_num)).click()
        time.sleep(3)
    # checks only at the end of the pages
    except NoSuchElementException:
        print('End of pages')
        break

driver.quit()
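One caveat if you are on a current Selenium release: the find_element_by_* / find_elements_by_* helpers used above were removed in Selenium 4.3, and from 4.10 the driver path is passed through a Service object rather than positionally. Below is a minimal sketch of the same two lookups written with the By locator API; the job-title class name is simply carried over from the answer and may no longer match the live site.

from selenium.webdriver.common.by import By

# same lookups as above, written with the Selenium 4 locator API
# 'job-title' is taken from the answer; verify it against the current page
for i in driver.find_elements(By.CLASS_NAME, 'job-title'):
    links.append(i.get_attribute('href'))

driver.find_element(By.LINK_TEXT, str(page_num)).click()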

Edit:

  1. Simplified and modified the pagination approach.
  2. If you use BeautifulSoup, you must put the page_source and soup variables inside the while loop, because the page source changes after every iteration. In your code you only extracted the source of the first page, so you got the same output duplicated as many times as there are pages. See the sketch after this list.
  3. By using ChromeDriverManager from the webdriver-manager package, there is no need to specify the location/executable path; you can copy-paste this code and run it on any machine that has Chrome installed. If you still need to install the package, run pip install webdriver_manager in cmd before running the code.
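To make point 2 concrete, here is a minimal sketch that moves the question's BeautifulSoup parsing inside the pagination loop. It keeps the answer's Selenium 3 style calls and webdriver-manager setup, and the container class is copied from the question's code, so treat both as assumptions that may need adjusting to the current site.

from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from bs4 import BeautifulSoup
from webdriver_manager.chrome import ChromeDriverManager
import time

driver = webdriver.Chrome(ChromeDriverManager().install())
driver.get("https://www.vietnamworks.com/tim-viec-lam/tat-ca-viec-lam")
time.sleep(3)

all_link = []
page_num = 1
while True:
    # re-parse the page source on every iteration; parsing it once outside
    # the loop would keep returning the first page's links
    soup = BeautifulSoup(driver.page_source, "html.parser")
    # container class copied from the question's code; it may have changed on the site
    for block in soup.find_all("div", {"class": "d-flex justify-content-center align-items-center logo-area-wrapper logo-border"}):
        link = block.find("a")
        if link and link.get("href"):
            all_link.append("https://www.vietnamworks.com/" + link.get("href"))
    try:
        page_num += 1
        driver.find_element_by_link_text(str(page_num)).click()
        time.sleep(3)
    except NoSuchElementException:
        # no link with the next page number means we are on the last page
        break

driver.quit()
print(f'Collected {len(all_link)} links')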

Warning: avoid exposing the actual username and password of any account, as you did in your code on GitHub.
