How to scrape product detail pages with Selenium



I'm learning Selenium. Right now my code can scrape all the product titles from the front page of this URL https://www.daraz.com.bd/consumer-electronics/?spm=a2a0e.pdp.breadcrumb.1.4d20110bzkC0bn, but I want to click each product link on that page, which takes me to the product detail page, so that I can scrape information from the product detail pages. Here is my code:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
#argument for incognito Chrome
option = webdriver.ChromeOptions()
option.add_argument("--incognito")
browser = webdriver.Chrome()
browser.get("https://www.daraz.com.bd/consumer-electronics/?spm=a2a0e.pdp.breadcrumb.1.4d20110bzkC0bn")
# Wait 20 seconds for page to load
timeout = 20
try:
    WebDriverWait(browser, timeout).until(EC.visibility_of_element_located((By.XPATH, "//div[@class='c16H9d']")))
except TimeoutException:
    print("Timed out waiting for page to load")
    browser.quit()

# find_elements_by_xpath returns an array of selenium objects.
titles_element = browser.find_elements_by_xpath("//div[@class='c16H9d']")

# use list comprehension to get the actual product titles and not the selenium objects.
titles = [x.text for x in titles_element]
# print out all the titles.
print('titles:')
print(titles, '\n')
browser.quit()

I'd suggest getting the hrefs and opening them one by one.

You need this locator: By.XPATH, "//div[@class='c16H9d']//a", and use .visibility_of_all_elements_located to wait for all of the elements instead of .visibility_of_element_located.

After that, use .get_attribute('href') to get each href.

Then open a new window with the specific href you just obtained:

browser.get("https://www.daraz.com.bd/consumer-electronics/?spm=a2a0e.pdp.breadcrumb.1.4d20110bzkC0bn")
# Wait 20 seconds for page to load
timeout = 20
elements = WebDriverWait(browser, timeout).until(EC.visibility_of_all_elements_located((By.XPATH, "//div[@class='c16H9d']//a")))
for element in elements:
    # get href
    href = element.get_attribute('href')
    print(href)
    # open a new window with the specific href
    browser.execute_script("window.open('" + href + "');")
    # switch to the new window
    browser.switch_to.window(browser.window_handles[1])

    # ......now you are on the new window, scrape here
    # example to scrape 'title' in the new window
    xx = WebDriverWait(browser, timeout).until(EC.visibility_of_element_located((By.CLASS_NAME, "pdp-mod-product-badge-title")))
    print(xx.text)

    # close the new window
    browser.close()
    # back to the main window
    browser.switch_to.window(browser.window_handles[0])
browser.quit()

You can use BeautifulSoup to make life easier.

I've modified your code slightly to illustrate how to navigate to all of the individual product links on the page.

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from bs4 import BeautifulSoup
#argument for incognito Chrome
option = Options()
option.add_argument("--incognito")

browser = webdriver.Chrome(options=option)
browser.get("https://www.daraz.com.bd/consumer-electronics/?spm=a2a0e.pdp.breadcrumb.1.4d20110bzkC0bn")
# Wait 20 seconds for page to load
timeout = 20
try:
    WebDriverWait(browser, timeout).until(EC.visibility_of_element_located((By.XPATH, "//div[@class='c16H9d']")))
except TimeoutException:
    print("Timed out waiting for page to load")
    browser.quit()
soup = BeautifulSoup(browser.page_source, "html.parser")
product_items = soup.find_all("div", attrs={"data-qa-locator": "product-item"})
for item in product_items:
    item_url = f"https:{item.find('a')['href']}"
    print(item_url)
    browser.get(item_url)
    item_soup = BeautifulSoup(browser.page_source, "html.parser")
    # Use the item_soup to find details about the item from its url.
browser.quit()
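
If it helps, here is a minimal sketch of what that last placeholder comment could become. It assumes the same pdp-mod-product-badge-title class used in the Selenium example above also marks the title on the detail page (an assumption about Daraz's markup, which may change):

from bs4 import BeautifulSoup

def scrape_detail_page(item_soup: BeautifulSoup) -> None:
    # The class name is reused from the Selenium answer above; treat it as an
    # assumption about the current Daraz markup rather than a stable selector.
    title_el = item_soup.find(class_="pdp-mod-product-badge-title")
    if title_el is not None:
        print("title:", title_el.get_text(strip=True))
    else:
        print("title element not found on this page")

You would call scrape_detail_page(item_soup) in place of the placeholder comment inside the loop.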

In short, this is exactly what Arundeep Chohan mentioned in the comments. You could also choose to create a new instance of browser, say browser1 = webdriver.Chrome(), and let it navigate all of the product URLs.
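
A rough sketch of that two-driver variant, assuming the imports, browser and timeout from the code above, and reusing the locator and title class from the first answer (just an illustration, not a drop-in replacement):

# collect the product links with the main browser first
elements = WebDriverWait(browser, timeout).until(
    EC.visibility_of_all_elements_located((By.XPATH, "//div[@class='c16H9d']//a")))
hrefs = [el.get_attribute('href') for el in elements]

# use a second, independent driver just for the detail pages
browser1 = webdriver.Chrome()
for href in hrefs:
    browser1.get(href)
    title = WebDriverWait(browser1, timeout).until(
        EC.visibility_of_element_located((By.CLASS_NAME, "pdp-mod-product-badge-title")))
    print(title.text)
browser1.quit()

This keeps the listing page open in the first driver while the second one walks through the detail pages.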

Also, I realized that incognito mode was not working in your script. You need to define the Chrome options and pass them as an argument to the webdriver.Chrome method.
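
For reference, the minimal change relative to your original script (already reflected in the code above) is:

from selenium.webdriver.chrome.options import Options

option = Options()
option.add_argument("--incognito")
browser = webdriver.Chrome(options=option)  # the options must be passed in, otherwise they are ignored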
