I have working code that fetches details from URLs using Selenium and Python. But after searching 50+ URLs I run into a problem: Google Chrome shows the "I'm not a robot" prompt and asks me to tick the checkbox.
After that I can't get results; from then on either no results or inconsistent/wrong results are shown.
So is there a way to avoid this "I'm not a robot" captcha and get consistent results? Or is there anything I should change in this code to make it more optimized?
Also, is it possible to open 50 or 100 tabs in chromedriver at the same time and search for results in the loaded tabs?
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
import psycopg2
import os
import datetime
final_results=[]
positions=[]
option = webdriver.ChromeOptions()
option.add_argument("--incognito")
browser = webdriver.Chrome(executable_path='/users/user_123/downloads/chrome_driver/chromedriver', chrome_options=option)
#def db_connect():
try:
    # Database connection string
    DSN = "dbname='postgres' user='postgres' host='localhost' password='postgres' port='5432'"
    # DWH table to which data is ported
    TABLE_NAME = 'staging.search_url'
    # Connecting DB..
    conn = psycopg2.connect(DSN)
    print("Database connected...")
    #conn.set_client_encoding('utf-8')
    cur = conn.cursor()
    cur.execute("SET datestyle='German'")
except (Exception, psycopg2.Error) as error:
    print('database connection failed')
    quit()
def get_products(url):
    browser.get(url)
    names = browser.find_elements_by_xpath("//span[@class='pymv4e']")
    upd_product_name_list = list(filter(None, names))
    product_name = [x.text for x in upd_product_name_list]
    product = [x for x in product_name if len(x.strip()) > 2]
    return product
##################################
search_url_fetch="""select url_to_be_searched from staging.search_url where id in(65,66,67,68)"""
psql_cursor = conn.cursor()
psql_cursor.execute(search_url_fetch)
serach_url_list = psql_cursor.fetchall()
print('Fetched DB values')
##################################
for row in serach_url_list:
    # fetchall() returns one tuple per row; take the URL column directly
    # instead of string-mangling the tuple's repr
    new_url = row[0]
    print('Passed URL :' + new_url)
    print("\n")
    filtered = get_products(new_url)
    if not filtered:
        new_url = new_url + '+kaufen'
        filtered = get_products(new_url)
        print('Modified URL :' + new_url)
    if filtered:
        print(filtered)
        positions.clear()
        for x in range(1, len(filtered) + 1):
            positions.append(str(x))
        global_position = len(positions)
        print('global position first: ' + str(global_position))
        print("\n")
        company_name_list = browser.find_elements_by_xpath("//div[@class='LbUacb']")
        # use a list comprehension to get the actual titles and not the selenium objects.
        company = [x.text for x in company_name_list]
        print('Company Name:')
        print(company, "\n")
        price_list = browser.find_elements_by_xpath("//div[@class='e10twf T4OwTb']")
        price = [x.text for x in price_list]
        print('Price:')
        print(price)
        print("\n")
        find_href = browser.find_elements_by_xpath("//a[@class='plantl pla-unit-single-clickable-target clickable-card']")
        urls = [my_href.get_attribute("href") for my_href in find_href]
        print('URLS:')
        print(urls)
        print("\n")
        print('Final Result: ')
        result = zip(positions, filtered, urls, company, price)
        final_results.clear()
        final_results.append(tuple(result))
        print(final_results)
        print("\n")
        print('global position end :' + str(global_position))
        for d in final_results:
            # iterate the zipped records directly instead of indexing with a
            # counter, which avoided an off-by-one against global_position
            for record in d:
                print(record)
                cur.execute("""INSERT into staging.pla_crawler_results(position, product_name, url, company, price) VALUES (%s, %s, %s, %s, %s)""", record)
                print('Inserted successfully')
            conn.commit()
You have two options:
1) Pay for access to Google's search API. This is the professional way to avoid getting banned.
2) Randomize your script so it looks more human. This approach is an arms race with Google; you can probably get the script working, but it will break periodically.
More optimization (in the performance sense) is likely to make this problem worse, not better.
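A minimal sketch of the second option: spacing requests with randomized delays so the timing does not look machine-regular. The helper name and the base/jitter values are illustrative, not from the original code:

```python
import random
import time

def human_pause(base=2.0, jitter=3.0):
    """Sleep for a randomized interval and return the delay that was used."""
    # uniformly distributed in [base, base + jitter] seconds
    delay = base + random.uniform(0.0, jitter)
    time.sleep(delay)
    return delay
```

Calling something like `human_pause()` before each `browser.get(url)` keeps the request spacing irregular; it won't defeat a captcha on its own, it only softens the rate signature.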
Things to try in the code:
- Launch Chrome with a head. It doesn't look like you're using headless mode, so this is just a reminder.
- Randomize the time between interactions with the web page. The quickest way to earn a bot check is to try to go faster.
- Exclude the enable-automation switch; see https://stackoverflow.com/a/56635123/9642
Alternatives:
- https://serpapi.com/ is a paid service for scraping Google
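As a sketch of what switching to such a service looks like: SerpApi returns results as JSON over HTTPS, so the Selenium/XPath scraping above collapses into a single HTTP request. The endpoint and parameter names below follow their public documentation; the API key is a placeholder:

```python
from urllib.parse import urlencode

def build_serpapi_url(query, api_key, engine="google"):
    """Build a SerpApi request URL for a Google search."""
    params = {"engine": engine, "q": query, "api_key": api_key}
    return "https://serpapi.com/search.json?" + urlencode(params)
```

Fetching the resulting URL (with urllib.request, requests, etc.) returns the ranked results as JSON, so there is no captcha handling on your side.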
afarley suggests paying for Google's Search API, but I'm not sure that's even an option anymore. Google used to have a basic search API that was free for the first 10 results (with strict quota limits), but I no longer see it offered.