Filtering data from list elements on an HTML page with BeautifulSoup



I'm trying to collect data from several HTML pages, specifically from the list elements. I want to add this data to a dictionary for later use. The extraction itself works as expected, but inserting the data into the dictionary does not: each entry overwrites the previous one instead of adding a new one. Can anyone point out where I'm going wrong?

Current code

from BeautifulSoup import BeautifulSoup
import requests
import re
person_dict = {}
.....
<snip>
<snip>
.....
soup = BeautifulSoup(response.text)
div = soup.find('div', {'id': 'object-a'})
ul = div.find('ul', {'id': 'object-a-1'})
li_a = ul.findAll('a', {'class': 'title'})
li_p = ul.findAll('p', {'class': 'url word'})
li_po = ul.findAll('p')
for a in li_a:
    nametemp = a.text
    name = nametemp.split(' - ')[0]
    person_dict.update({'Name': name})     # I attempted updating
for lip in li_p:
    person_dict['url'] = lip.text          # I attempted adding directly
for email in li_po:
    reg_emails = re.compile('[a-zA-Z0-9.]*' + '@')
    person_dict['email'] = reg_emails.findall(email.text)
print person_dict # results in 1 entry being returned
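The core of the problem can be reproduced in a few lines: every loop iteration writes to the same dictionary key, so only the last value written survives. A minimal demonstration:

```python
# Repeatedly updating the same key keeps only the last value written.
person_dict = {}
for name in ['Person1', 'Person2', 'Person3']:
    person_dict.update({'Name': name})  # same key 'Name' every iteration

print(person_dict)  # only the last name, 'Person3', remains
```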

Test data

<div id="object-a">
    <ul id="object-a-1">
            <li>
              <a href="www.url.com/person" class="title">Person1</a>
              <p class="url word">www.url.com/Person1</p>
              <p>Person 1, some foobar possibly an email@address.com &nbsp;...</p>
            </li>

            <li>
              <a href="www.url.com/person" class="title">Person2</a>
              <p class="url word">www.url.com/Person1</p>
              <p>Person 2, some foobar possibly an email@address.com &nbsp;...</p>
            </li>

            <li>
              <a href="www.url.com/person" class="title">Person3</a>
              <p class="url word">www.url.com/Person1</p>
              <p>Person 3, some foobar, possibly an email@address.com &nbsp;...</p>
            </li>
    </ul>
</div>

Whether to use a dictionary is up to you, but if you do choose one, it would be better to have a separate dictionary for each list item rather than a single dictionary for all of the entries.

I suggest you store all of the entries in a list. The code below shows both suggestions: using a tuple to hold the various bits of information for each item, or using a dictionary.

If you just want to display the information or write it to a file, the tuple solution is quicker.

# Two possible ways of storing your data: a list of tuples, or a list of dictionaries
entries_tuples = []             
entries_dictionary = []
soup = BeautifulSoup(text)
div = soup.find('div', {'id': 'object-a'})
ul = div.find('ul', {'id': 'object-a-1'})
for li in ul.findAll('li'):
    title = li.find('a', {'class': 'title'})
    url_href = title.get('href')
    person = title.text
    url_word = li.find('p', {'class': 'url word'}).text
    emails = re.findall(r'\s+(\S+@\S+)(?:\s+|\Z)', li.findAll('p')[1].text, re.M)       # allow for multiple emails
    entries_tuples.append((url_href, person, url_word, emails))
    entries_dictionary.append({'url_href' : url_href, 'person' : person, 'url_word' : url_word, 'emails' : emails})
for url_href, person, url_word, emails in entries_tuples:
    print '{:25} {:10} {:25} {}'.format(url_href, person, url_word, emails)
print
for entry in entries_dictionary:
    print '{:25} {:10} {:25} {}'.format(entry['url_href'], entry['person'], entry['url_word'], entry['emails'])

For your example HTML, this would display the following:

www.url.com/person        Person1    www.url.com/Person1       [u'email@address.com']
www.url.com/person        Person2    www.url.com/Person1       [u'email@address.com']
www.url.com/person        Person3    www.url.com/Person1       [u'email@address.com', u'email@address.com']

www.url.com/person        Person1    www.url.com/Person1       [u'email@address.com']
www.url.com/person        Person2    www.url.com/Person1       [u'email@address.com']
www.url.com/person        Person3    www.url.com/Person1       [u'email@address.com', u'email@address.com']

Note that extracting email addresses from text is a whole problem in itself. The solution above will happily match entries that are not actually well-formed email addresses, but it should be good enough here.
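If you want to be somewhat stricter, a common compromise is to require a dotted domain with a multi-letter TLD. This is an illustrative sketch, not a full RFC 5322 validator, and the sample text is made up:

```python
import re

# Stricter than bare \S+@\S+: requires a dotted domain and a 2+ letter TLD.
# Still nowhere near full RFC 5322 validation.
EMAIL_RE = re.compile(r'[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}')

text = 'Person 3, some foobar, possibly an email@address.com ...'
print(EMAIL_RE.findall(text))  # ['email@address.com']
```

Because the pattern contains no capture groups, `findall` returns the complete matched addresses.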

You may be on the wrong track. Try something like this:

from BeautifulSoup import BeautifulSoup
import re
text = open('soup.html') # You are opening the file differently
soup = BeautifulSoup(text)
list_items = soup.findAll('li')
people = []
for item in list_items:
    name = item.find('a', {'class': 'title'}).text
    url = item.find('p', {'class': 'url word'}).text
    email_text = item.findAll('p')[1].text
    match = re.search(r'[\w.-]+@[\w.-]+', email_text)
    email = match.group(0)
    person = {'name': name, 'url': url, 'email': email}
    people.append(person)
print people
