urllib.request.urlopen(url) with Authentication



I've been playing with Beautiful Soup and parsing web pages for the last few days. One line of code has been my savior in every script I've written:

r = requests.get('some_url', auth=('my_username', 'my_password'))

But...

I would like to do the same thing (open a URL with authentication) using:

(1) sauce = urllib.request.urlopen(url).read()
(2) soup = bs.BeautifulSoup(sauce, "html.parser")

I am not able to open and read web pages that require authentication. How can I achieve something like:

(3) sauce = urllib.request.urlopen(url, auth=(username, password)).read()

instead of (1)?

You are using HTTP Basic Authentication:

# Python 2 example using urllib2: build the Basic auth header by hand
import urllib2, base64

request = urllib2.Request(url)
base64string = base64.b64encode('%s:%s' % (username, password))
request.add_header("Authorization", "Basic %s" % base64string)
result = urllib2.urlopen(request)

So you base64-encode the username and password and send them in the Authorization header.
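
The same manual-header approach works in Python 3 with urllib.request. Here is a minimal sketch, assuming the hypothetical URL and credentials below; note that b64encode works on bytes, not str:

import base64
import urllib.request

# hypothetical values; substitute your own URL and credentials
url = 'http://example.com/protected/'
username = 'my_username'
password = 'my_password'

# b64encode expects bytes, so encode "user:password" first
credentials = base64.b64encode(f'{username}:{password}'.encode('utf-8')).decode('ascii')

request = urllib.request.Request(url)
request.add_header('Authorization', 'Basic %s' % credentials)

with urllib.request.urlopen(request) as response:
    sauce = response.read()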

Take a look at "How to fetch internet resources using the urllib package" in the official documentation:

import urllib.request

# create a password manager
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()

# Add the username and password.
# If we knew the realm, we could use it instead of None.
top_level_url = "http://example.com/foo/"
password_mgr.add_password(None, top_level_url, username, password)

handler = urllib.request.HTTPBasicAuthHandler(password_mgr)

# create "opener" (OpenerDirector instance)
opener = urllib.request.build_opener(handler)

# use the opener to fetch a URL
opener.open(a_url)

# Install the opener.
# Now all calls to urllib.request.urlopen use our opener.
urllib.request.install_opener(opener)
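
Here a_url and the credentials stand in for your own values. You can also skip install_opener and use the opener directly; a minimal sketch continuing from the snippet above, with a hypothetical page under top_level_url:

# fetch a page that the handler's credentials protect and print its body
page = opener.open("http://example.com/foo/index.html")
print(page.read().decode('utf-8'))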

Using urllib3:

import urllib3

http = urllib3.PoolManager()
# build the Basic auth header from "user:password"
myHeaders = urllib3.util.make_headers(basic_auth='my_username:my_password')
r = http.request('GET', 'http://example.org', headers=myHeaders)
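
The request returns an HTTPResponse object; a minimal sketch of reading it, using its standard status and data attributes:

print(r.status)                 # HTTP status code, e.g. 200 on success
sauce = r.data                  # response body as bytes
print(sauce.decode('utf-8'))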

Use this. It's the standard urllib that ships with a Python 3 installation. Works fine, guaranteed. See also the gist.

import urllib.request

url = 'http://192.168.0.1/'
auth_user = "username"
auth_passwd = "^&%$$%^"

# register the credentials for this URL with a password manager
passman = urllib.request.HTTPPasswordMgrWithDefaultRealm()
passman.add_password(None, url, auth_user, auth_passwd)

# build and install an opener that answers Basic auth challenges
authhandler = urllib.request.HTTPBasicAuthHandler(passman)
opener = urllib.request.build_opener(authhandler)
urllib.request.install_opener(opener)

# from here on, urlopen uses the installed opener
res = urllib.request.urlopen(url)
res_body = res.read()
print(res_body.decode('utf-8'))
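
To get back to the original goal, the fetched body can be handed straight to Beautiful Soup; a minimal sketch, assuming bs4 is installed and res_body comes from the snippet above:

import bs4 as bs

# BeautifulSoup accepts the raw bytes returned by read()
soup = bs.BeautifulSoup(res_body, "html.parser")
print(soup.title)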
