An error when creating parameters for requests.get



I am iterating over this website and trying to scrape the links to its news articles: https://www.usnews.com/search?q=China%20COVID-19#gsc.tab=0&gsc.q=China%20COVID-19&gsc.page=1

First I need to get the links to the result pages, so I used the following code:

import requests

def scrape(url):
    user_agent = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; Touch; rv:11.0) like Gecko'}
    request = 0
    params = {
        'q': 'China%20COVID-19',
        'gsc.tab': '0',
        'gsc.q': 'China%20COVID-19',
    }
    pagelinks = []

    myarticle = []
    for page_no in range(1, 3):
        try:  # to avoid "No connection adapters were found for" error
            params['gsc.page'] = page_no
            response = requests.get(url=url,
                                    headers=user_agent,
                                    params=params)
            print(response.request.url)

        except Exception as e:
            print(e)

scrape('https://www.usnews.com/search/')

However, the results are incorrect:

https://www.usnews.com/search?q=China%252520COVID-19&gsc.tab=0&gsc.q=China%252520COVID-19&gsc.page=1
https://www.usnews.com/search?q=China%252520COVID-19&gsc.tab=0&gsc.q=China%252520COVID-19&gsc.page=2

The expected results should look like this:

https://www.usnews.com/search?q=China%20COVID-19#gsc.tab=0&gsc.q=China%20COVID-19&gsc.page=1
https://www.usnews.com/search?q=China%20COVID-19#gsc.tab=0&gsc.q=China%20COVID-19&gsc.page=2

Can someone help me fix this error? I would greatly appreciate it!

If you open the URL you got in a browser, you will see that the search string is China%2520COVID-19, not China COVID-19 as you expected.

The %20 you see in the query string is the urlencoded space character (see Percent-encoding on MDN). If you urldecode China%2520COVID-19 you get China%20COVID-19, from which we can tell that %25 is the encoded percent character, i.e. the value has been encoded twice.
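To make the double encoding visible, here is a minimal sketch (my addition, not part of the original answer) using Python's standard urllib.parse:

from urllib.parse import quote, unquote

print(quote('China COVID-19'))        # China%20COVID-19   (space -> %20)
print(quote('China%20COVID-19'))      # China%2520COVID-19 (the % itself -> %25)
print(unquote('China%2520COVID-19'))  # China%20COVID-19   (one level of decoding)
print(unquote('China%20COVID-19'))    # China COVID-19     (fully decoded)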

requests already urlencodes the query string values, so there is no need to do it yourself. What you can change to make it work is to use the decoded values, e.g. a plain space instead of %20:

params = {
    'q': 'China COVID-19',
    'gsc.tab': '0',
    'gsc.q': 'China COVID-19',
}
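You can check what requests will actually send without hitting the network. The sketch below (my illustration, using requests.Request and .prepare() to build the URL) shows the single versus double encoding side by side:

import requests

base = 'https://www.usnews.com/search'

# Plain value: requests encodes the space once (as '+').
ok = requests.Request('GET', base, params={'q': 'China COVID-19'}).prepare()
print(ok.url)   # https://www.usnews.com/search?q=China+COVID-19

# Pre-encoded value: the '%' gets encoded again, producing %2520.
bad = requests.Request('GET', base, params={'q': 'China%20COVID-19'}).prepare()
print(bad.url)  # https://www.usnews.com/search?q=China%2520COVID-19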

Searching in the browser produces a "hashed" URL (https://www.usnews.com/search#...), which you need to build yourself.
Passing the params to requests via requests.get(..., params=params) creates a regular query string (https://www.usnews.com/search?...), which leads to the wrong page (always the first one).
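The distinction matters because everything after # is a fragment, not part of the query string. A small illustration (my addition) with urllib.parse:

from urllib.parse import urlsplit

hashed = 'https://www.usnews.com/search#q=China+COVID-19&gsc.page=2'
regular = 'https://www.usnews.com/search?q=China+COVID-19&gsc.page=2'

print(urlsplit(hashed).query)     # ''  (empty: the parameters live in the fragment)
print(urlsplit(hashed).fragment)  # q=China+COVID-19&gsc.page=2
print(urlsplit(regular).query)    # q=China+COVID-19&gsc.page=2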

import requests
from urllib.parse import urlencode, unquote

def scrape(url):
    user_agent = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; Touch; rv:11.0) like Gecko'}
    request = 0
    url = 'https://www.usnews.com/search'
    params = {
        'q': 'China COVID-19',
        'gsc.tab': '0',
        'gsc.q': 'China COVID-19'
    }
    pagelinks = []
    myarticle = []
    for page_no in range(1, 4):
        params['gsc.page'] = page_no
        _url = '%s#%s' % (url, urlencode(params))
        try:  # to avoid "No connection adapters were found for" error
            response = requests.get(url=_url,
                                    headers=user_agent)
            print(_url, '>>', _url == unquote(response.request.url))
        except Exception as e:
            print(e)

scrape('https://www.usnews.com/search/')

Output:

https://www.usnews.com/search#q=China+COVID-19&gsc.tab=0&gsc.q=China+COVID-19&gsc.page=1 >> True
https://www.usnews.com/search#q=China+COVID-19&gsc.tab=0&gsc.q=China+COVID-19&gsc.page=2 >> True
https://www.usnews.com/search#q=China+COVID-19&gsc.tab=0&gsc.q=China+COVID-19&gsc.page=3 >> True
