Ruby on Rails open-uri problem, resulting file is corrupted



I've run into a hellish problem. I'm using Ruby on Rails: ruby 1.8.7 (2011-12-10 patchlevel 356), Rails 2.3.14

I'm trying a simple open with open-uri on the following address:

http://jollymag.net/n/10390-лерни-секс-пози

However, reading the resulting file produces a strange (broken) string. This was also tested on ruby 1.9.3 and Rails 3.2.x.

require 'open-uri'
require 'nokogiri'

url = 'http://jollymag.net/n/10390-летни-секс-пози-във-водата.html'
url = URI.encode(url)
file = open(url)
doc = file.collect.to_s # <- the document is broken
document = Nokogiri::HTML.parse(doc,nil,"utf8")
puts document # <- the document after nokogiri has one line of content

I've tried Iconv and other things, but nothing works. The code above is more or less a minimal, isolated case of this exact problem.

I'd appreciate any help, since I've been trying to hunt down this bug for several days.

Best regards, Yavor

So this one turned out to be tricky for me. It seems some servers return only gzip'ed responses (note that the Accept-Encoding header below explicitly asks for 'gzip,deflate'), so to read the body you must, of course, decompress it accordingly. I decided to post my entire crawling code so people can see a more complete solution to problems like this. It is part of a bigger class, so it refers to self a lot.

Hope it helps!
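Before the full class, here is the core of the fix in isolation. This is a minimal sketch (the `decode_body` helper name is mine, and the server is simulated with an in-memory gzip stream instead of a live request); `Zlib::GzipReader` is what turns the "broken" bytes back into HTML:

```ruby
require 'zlib'
require 'stringio'

# Decompress a response body according to its Content-Encoding.
# `encoding` mirrors the array returned by open-uri's content_encoding.
def decode_body(io, encoding)
  if encoding.include?('gzip')
    Zlib::GzipReader.new(io).read
  elsif encoding.include?('deflate')
    Zlib::Inflate.inflate(io.read)
  else
    io.read
  end
end

# Simulate a gzip-only server with an in-memory stream.
html    = '<html><body>hello</body></html>'
gzipped = StringIO.new
Zlib::GzipWriter.wrap(gzipped) { |gz| gz.write(html) }

puts decode_body(StringIO.new(gzipped.string), ['gzip']) == html # => true
```

The same helper handles a zlib-wrapped deflate body via `Zlib::Inflate.inflate`, which is the decompressing counterpart (note: `Zlib::Deflate` compresses, which is the opposite of what a crawler needs here).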

  # Requires open-uri, zlib, timeout, iconv, nokogiri and the charguess gem.
  SHINSO_HEADERS = {
    'Accept'          => '*/*',
    'Accept-Charset'  => 'utf-8, windows-1251;q=0.7, *;q=0.6',
    'Accept-Encoding' => 'gzip,deflate',
    'Accept-Language' => 'bg-BG, bg;q=0.8, en;q=0.7, *;q=0.6',
    'Connection'      => 'keep-alive',
    'From'            => 'support@xenium.bg',
    'Referer'         => 'http://svejo.net/',
    'User-Agent'      => 'Mozilla/5.0 (compatible; Shinso/1.0)'
  }
  def crawl(url_address)
    self.errors = Array.new
    begin
      begin
        url_address = URI.parse(url_address)
      rescue URI::InvalidURIError
        url_address = URI.decode(url_address)
        url_address = URI.encode(url_address)
        url_address = URI.parse(url_address)
      end
      url_address.normalize!
      stream = ""
      timeout(10) { stream = url_address.open(SHINSO_HEADERS) }
      if stream.size > 0
        url_crawled = URI.parse(stream.base_uri.to_s)
      else
        self.errors << "Server said status 200 OK but document file is zero bytes."
        return
      end
    rescue Exception => exception
      self.errors << exception
      return
    end
    # extract information before html parsing
    self.url_posted       = url_address.to_s
    self.url_parsed       = url_crawled.to_s
    self.url_host         = url_crawled.host
    self.status           = stream.status
    self.content_type     = stream.content_type
    self.content_encoding = stream.content_encoding
    self.charset          = stream.charset
    if    stream.content_encoding.include?('gzip')
      document = Zlib::GzipReader.new(stream).read
    elsif stream.content_encoding.include?('deflate')
      # Zlib::Inflate decompresses a deflate body (Zlib::Deflate would
      # compress it); fall back to raw deflate for non-conforming servers
      # that omit the zlib wrapper.
      body = stream.read
      document = begin
        Zlib::Inflate.inflate(body)
      rescue Zlib::DataError
        Zlib::Inflate.new(-Zlib::MAX_WBITS).inflate(body)
      end
    #elsif stream.content_encoding.include?('x-gzip') or
    #elsif stream.content_encoding.include?('compress')
    else
      document = stream.read
    end
    self.charset_guess    = CharGuess.guess(document)
    # Convert only when a charset was guessed and it is not already UTF-8.
    # (With `or` the condition was always true, so blank guesses were fed
    # straight into Iconv.)
    if not self.charset_guess.blank? and
       not self.charset_guess == 'utf-8' and
       not self.charset_guess == 'utf8'
      document = Iconv.iconv("UTF-8", self.charset_guess, document).join
    end
    document = Nokogiri::HTML.parse(document,nil,"utf8")
    document.xpath('//script').remove
    document.xpath('//SCRIPT').remove
    for item in document.xpath('//*[translate(@src, "ABCDEFGHIJKLMNOPQRSTUVWXYZ", "abcdefghijklmnopqrstuvwxyz")]')
      item.set_attribute('src',make_absolute_address(item['src']))
    end
    document = document.to_s.gsub(/<!--(.|\s)*?-->/,'') # strip HTML comments
    #document = document.to_s.gsub(/<![ \r\n\t]*(--([^-]|[\r\n]|-[^-])*--[ \r\n\t]*)>/,'')
    self.content = Nokogiri::HTML.parse(document,nil,"utf8")
  end
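One caveat on the Iconv step above: Iconv is deprecated in Ruby 1.9 and removed in 2.0, so on newer Rubies the same windows-1251 → UTF-8 conversion can be done with `String#encode`. A sketch (the `to_utf8` helper name is mine), assuming the guessed charset is a name Ruby's `Encoding` recognizes:

```ruby
# Convert a guessed single-byte charset (e.g. windows-1251) to UTF-8,
# replacing invalid/undefined bytes instead of raising.
def to_utf8(text, charset)
  return text if charset.nil? || charset.empty? ||
                 charset.downcase.delete('-') == 'utf8'
  text.encode('UTF-8', charset, invalid: :replace, undef: :replace)
end

# 0xE4 0xE0 are "д" and "а" in windows-1251.
cp1251 = "\xE4\xE0".force_encoding('Windows-1251')
puts to_utf8(cp1251, 'windows-1251') # => "да"
```

Unlike `Iconv.iconv`, which returns an array, `String#encode` returns the converted string directly.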
