I'm having trouble extracting data from a web page. One problem I ran into is that it pulls in a lot of whitespace, so on other people's advice I chose to use .strip(). Then I hit a problem:
if a.strip():
    print a
if b.strip():
    print b
returns:
a1
b1
.
.
.
But this:
if a.strip():
    aList.append(a)
if b.strip():
    bList.append(b)
print aList, bList
returns this:
a1
b1
I tried to simulate here the whitespace I removed with .strip(), but you get the idea. For whatever reason it adds the whitespace to the list even though I told it not to. I can even print the list inside the if statement and it displays correctly, but for whatever reason, when I decide to print outside the if statement, it doesn't work the way I expect.
Here is my entire code:
# coding: utf-8
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.exporter import CsvItemExporter
import re
import csv
import urlparse
from stockscrape.items import EPSItem
from itertools import izip
class epsScrape(BaseSpider):
    name = "eps"
    allowed_domains = ["investors.com"]
    ifile = open('test.txt', "r")
    reader = csv.reader(ifile)
    start_urls = []
    for row in ifile:
        url = row.replace("\n", "")
        if url == "symbol":
            continue
        else:
            start_urls.append("http://research.investors.com/quotes/nyse-" + url + ".htm")
    ifile.close()
    def parse(self, response):
        f = open("eps.txt", "a+")
        sel = HtmlXPathSelector(response)
        sites = sel.select("//div")
        # items = []
        for site in sites:
            symbolList = []
            epsList = []
            item = EPSItem()
            item['symbol'] = site.select("h2/span[contains(@id, 'qteSymb')]/text()").extract()
            item['eps'] = site.select("table/tbody/tr/td[contains(@class, 'rating')]/span/text()").extract()
            strSymb = str(item['symbol'])
            newSymb = strSymb.replace("[]","").replace("[u'","").replace("']","")
            strEps = str(item['eps'])
            newEps = strEps.replace("[]","").replace(" ","").replace("[u'\r\n","").replace("']","")
            if newSymb.strip():
                symbolList.append(newSymb)
                # print symbolList
            if newEps.strip():
                epsList.append(newEps)
                # print epsList
        print symbolList, epsList
        for symb, eps in izip(symbolList, epsList):
            f.write("%s\t%s\n" % (symb, eps))
        f.close()
strip does not modify the string in place. It returns a new string with the whitespace stripped off.
>>> a = ' foo '
>>> b = a.strip()
>>> a
' foo '
>>> b
'foo'
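So to keep the stripped value you have to use the return value of strip, for example by appending it instead of the original variable. A minimal sketch in the same interpreter style (a and aList are just the placeholder names from the question):

>>> a = '  a1  '
>>> aList = []
>>> if a.strip():
...     aList.append(a.strip())
...
>>> aList
['a1']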
I figured out what was causing the confusion. It was where I declared my variable/list. I declared it inside the for loop, so it got rewritten on every iteration, and an empty list or variable gives the same result as the if statement being false.
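A standalone sketch of that difference (the values list and resultList name are made up for illustration): when the list is created inside the loop, only the last iteration's list survives, but when it is created once before the loop, the appends accumulate:

# List created inside the loop: it is reset on every iteration,
# so after the loop only the last (empty) list remains.
values = ['  a1  ', '', '  b1  ', '']
for v in values:
    resultList = []
    if v.strip():
        resultList.append(v.strip())
print resultList                  # []

# List created once before the loop: the appends accumulate.
resultList = []
for v in values:
    if v.strip():
        resultList.append(v.strip())
print resultList                  # ['a1', 'b1']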