Python urllib unicode exception on a 302 redirect



Here's the situation: I'm scraping a website whose page urls follow this pattern:

http://www.pageadress/somestuff/ID-HERE/

Nothing unusual. I have a lot of ids to scrape, and most of them work fine. However, the page behaves like a portal: in a browser, when you enter an address like the one above, you get redirected to:

http://www.pageadress/somestuff/ID-HERE-title_of_subpage

What may be causing the problem is that sometimes the title contains non-ascii characters (in roughly 0.01% of cases), and then (I believe this is the cause) I get the exception:

  File "/usr/lib/python3.4/urllib/request.py", line 161, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib/python3.4/urllib/request.py", line 469, in open
    response = meth(req, response)
  File "/usr/lib/python3.4/urllib/request.py", line 579, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python3.4/urllib/request.py", line 501, in error
    result = self._call_chain(*args)
  File "/usr/lib/python3.4/urllib/request.py", line 441, in _call_chain
    result = func(*args)
  File "/usr/lib/python3.4/urllib/request.py", line 684, in http_error_302
    return self.parent.open(new, timeout=req.timeout)
  File "/usr/lib/python3.4/urllib/request.py", line 463, in open
    response = self._open(req, data)
  File "/usr/lib/python3.4/urllib/request.py", line 481, in _open
    '_open', req)
  File "/usr/lib/python3.4/urllib/request.py", line 441, in _call_chain
    result = func(*args)
  File "/usr/lib/python3.4/urllib/request.py", line 1210, in http_open
    return self.do_open(http.client.HTTPConnection, req)
  File "/usr/lib/python3.4/urllib/request.py", line 1182, in do_open
    h.request(req.get_method(), req.selector, req.data, headers)
  File "/usr/lib/python3.4/http/client.py", line 1088, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib/python3.4/http/client.py", line 1116, in _send_request
    self.putrequest(method, url, **skips)
  File "/usr/lib/python3.4/http/client.py", line 973, in putrequest
    self._output(request.encode('ascii'))
UnicodeEncodeError: 'ascii' codec can't encode characters in position 38-39: ordinal not in range(128)

The strange thing is that in none of the urls I get redirected to do the unicode characters actually sit at positions 38-39; they appear at other positions. (As far as I can tell, the positions in the error refer to the request line that http.client builds, e.g. "GET /some/path HTTP/1.1", not to the full url, which shifts the offsets.)

The code used:

import socket
import urllib.parse
import urllib.request
socket.setdefaulttimeout(30)
url = "https://www.bettingexpert.com/archive/tip/3207221"
headers = {'User-Agent': 'Mozilla/5.0'}
content = urllib.request.urlopen(urllib.request.Request(url, None, headers)).read().decode('utf-8')

Is there a way to work around this, preferably without using another library?
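One urllib-only approach, as a sketch (the handler class name is mine, and I haven't tested it against this particular site), is to subclass HTTPRedirectHandler and percent-encode the redirect target before urllib tries to send it over the wire:

```python
import urllib.parse
import urllib.request

class QuotingRedirectHandler(urllib.request.HTTPRedirectHandler):
    """Percent-encode any non-ASCII characters in the redirect target
    before urllib follows the 302."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        # quote() UTF-8-encodes non-ASCII bytes as %XX escapes; the safe
        # set keeps the url's structural characters (and any existing
        # %-escapes) intact so the rest of the url is left alone.
        newurl = urllib.parse.quote(newurl, safe=":/?#[]@!$&'()*+,;=%~-._")
        return super().redirect_request(req, fp, code, msg, headers, newurl)

opener = urllib.request.build_opener(QuotingRedirectHandler())
# content = opener.open(urllib.request.Request(url, None, headers)).read().decode('utf-8')
```

Since only the Location header is rewritten, requests whose redirect targets are already pure ascii pass through unchanged.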

// Oh, the glorious world of Python, creating thousands of problems that I don't think would even exist if I were writing this in Ruby.

So, I found a solution to my specific problem. I simply collected the remaining part of the "url" from their api, and after a small transformation I can access the page without any redirect. Of course that doesn't mean I've solved the general problem, which may come back later, so I've also put together a "solution".
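The idea above can be sketched like this. The slug format (id, hyphen, quoted title) and the assumption that the api returns the subpage title are mine; the post doesn't show the actual api call:

```python
import urllib.parse

def build_direct_url(tip_id, title):
    # title is assumed to come from the site's api; quote() makes the
    # path ascii-safe, so the request hits the final page directly and
    # no 302 (and no re-encoding of its Location header) is involved.
    slug = urllib.parse.quote(title)
    return "http://www.bettingexpert.com/archive/tip/{}-{}".format(tip_id, slug)
```

With the redirect skipped entirely, the UnicodeEncodeError in http.client never gets a chance to trigger.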

By posting this code here I'm basically guaranteeing that I'll never be hired as a programmer, so don't look at it while eating.

"水豚"宝石和恶作剧鬼需要,为什么不呢?

#test.py
import socket
import urllib.parse
import urllib.request
import os
tip_id = 3207221
socket.setdefaulttimeout(30)
url = "http://www.bettingexpert.com/archive/tip/" + str(tip_id)
headers = {'User-Agent': 'Mozilla/5.0'}
try:
    content = urllib.request.urlopen(urllib.request.Request(url, None, headers)).read().decode('utf-8')
except UnicodeEncodeError:
    print("Overkill activated")
    os.system('ruby test.rb ' + str(tip_id))
    with open(str(tip_id), 'r') as file:
        content = file.read()
    os.remove(str(tip_id))
print(content)


#test.rb
require 'capybara'
require 'capybara/dsl'
require 'capybara/poltergeist'
Capybara.register_driver :poltergeist_no_timeout do |app|
  driver = Capybara::Poltergeist::Driver.new(app, timeout: 30)
  driver.browser.url_blacklist = %w(
    http://fonts.googleapis.com
    http://html5shiv.googlecode.com
  )
  driver
end
Capybara.default_driver = :poltergeist_no_timeout
Capybara.run_server = false
include Capybara::DSL
begin
  page.reset_session!
  page.visit("http://www.bettingexpert.com/archive/tip/#{ARGV[0]}")
rescue
  retry
end
File.open(ARGV[0], 'w') do |file|
  file.print(page.html)
end
