Error when web scraping with mechanize in Python


import mechanize

br = mechanize.Browser()
r = br.open("http://www.drugs.com/search-wildcard-phonetic.html")
br.select_form(nr=0)  # select the first form on the page
br.form['searchterm'] = 'panadol'
br.submit()
print br.response().read()

Error produced by the above code:
Traceback (most recent call last):
  File "mech2.py", line 6, in <module>
    br.submit()
  File "/usr/lib/python2.7/dist-packages/mechanize/_mechanize.py", line 541, in submit
    return self.open(self.click(*args, **kwds))
  File "/usr/lib/python2.7/dist-packages/mechanize/_mechanize.py", line 203, in open
    return self._mech_open(url, data, timeout=timeout)
  File "/usr/lib/python2.7/dist-packages/mechanize/_mechanize.py", line 255, in _mech_open
    raise response
mechanize._response.httperror_seek_wrapper: HTTP Error 403: request disallowed by robots.txt

Please help me correct the code above.

There doesn't appear to be anything wrong with your code. Your error message

mechanize._response.httperror_seek_wrapper: HTTP Error 403: request disallowed by robots.txt

indicates that your request violates the site's robots.txt file. If you don't want to see this error, stop scraping the site in a way it disallows, and consider contacting the site about an acceptable way to obtain the data.
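Before scraping, you can check whether a URL is permitted by the site's robots.txt rules using the standard library's robotparser. A minimal sketch (Python 3 syntax; the robots.txt content below is hypothetical, for illustration only; a real check would fetch http://www.drugs.com/robots.txt and parse that instead):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules for illustration; the real file
# lives at http://www.drugs.com/robots.txt.
robots_txt = """\
User-agent: *
Disallow: /search-wildcard-phonetic.html
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# The disallowed page is rejected; other paths are allowed.
print(rp.can_fetch("*", "http://www.drugs.com/search-wildcard-phonetic.html"))
print(rp.can_fetch("*", "http://www.drugs.com/"))
```

If `can_fetch` returns False for the URL you want, mechanize's default behavior of honoring robots.txt is doing exactly what the site asked for, and the 403 above is the expected result.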
