Can you help me decipher this error message? - Python



I wrote the following code in Python. It goes through an array of URLs and pulls a specific piece of information from each page, i.e. a web scraper. This one takes an array of Reddit threads and outputs the score of each thread. The program almost never runs to completion; usually I get through about 5 iterations before hitting the error message below. Can anyone help me figure out what's going on?

import urllib2
from bs4 import BeautifulSoup
urls = ['http://www.reddit.com/r/videos/comments/1i12o2/soap_precursor_to_a_lot_of_other_hilarious_shows/', 'http://www.reddit.com/r/videos/comments/1i12nx/kid_reporter_interviews_ryan_reynolds/', 'http://www.reddit.com/r/videos/comments/1i12ml/just_my_two_boys_going_full_derp_shocking_plot/']
for x in urls:
    f = urllib2.urlopen(x)
    data = f.read()
    soup = BeautifulSoup(data)
    span = soup.find('span', attrs={'class':'number'})
    print '{}:{}'.format(x, span.text)

The error message I get is:

Traceback (most recent call last):
  File "C:/Users/jlazarus/Documents/YouTubeparse2.py", line 7, in <module>
    f = urllib2.urlopen(x)
  File "C:Python27liburllib2.py", line 127, in urlopen
    return _opener.open(url, data, timeout)
  File "C:Python27liburllib2.py", line 410, in open
    response = meth(req, response)
  File "C:Python27liburllib2.py", line 523, in http_response
    'http', request, response, code, msg, hdrs)
  File "C:Python27liburllib2.py", line 448, in error
    return self._call_chain(*args)
  File "C:Python27liburllib2.py", line 382, in _call_chain
    result = func(*args)
  File "C:Python27liburllib2.py", line 531, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 429: Unknown

Wrap the request in a try/except block to catch the error and move on. If all you want is to skip the URLs that fail, this is what you want:

import urllib2
from bs4 import BeautifulSoup
urls = ['http://www.reddit.com/r/videos/comments/1i12o2/soap_precursor_to_a_lot_of_other_hilarious_shows/', 'http://www.reddit.com/r/videos/comments/1i12nx/kid_reporter_interviews_ryan_reynolds/', 'http://www.reddit.com/r/videos/comments/1i12ml/just_my_two_boys_going_full_derp_shocking_plot/']
for x in urls:
    try:
        f = urllib2.urlopen(x)
        data = f.read()
        soup = BeautifulSoup(data)
        span = soup.find('span', attrs={'class':'number'})
        print '{}:{}'.format(x, span.text)
    except urllib2.HTTPError:  # HTTPError lives in urllib2, so qualify it (or import it explicitly)
        print 'HTTP Error, continuing'
