Python BeautifulSoup scrapes one page but not other, nearly identical pages



I'm trying to scrape a nutrition website, and the following code works:

import requests
from bs4 import BeautifulSoup
import json
import re

page = requests.get("https://nutritiondata.self.com/facts/nut-and-seed-products/3071/1")
soup = BeautifulSoup(page.content, 'html.parser')
scripts = soup.find_all("script")

for script in scripts:
    if 'foodNutrients = ' in script.text:
        jsonStr = script.text
        jsonStr = jsonStr.split('foodNutrients =')[-1]
        jsonStr = jsonStr.rsplit('fillSpanValues')[0]
        jsonStr = jsonStr.rsplit(';', 1)[0]
        jsonStr = "".join(jsonStr.split())
        valid_json = re.sub(r'([{,:])(\w+)([},:])', r'\1"\2"\3', jsonStr)
        jsonObj = json.loads(valid_json)

        # These are in terms of 100 grams. I also calculated per serving.
        g_per_serv = int(jsonObj['FOODSERVING_WEIGHT_1'].split('(')[-1].split('g')[0])

        for k, v in jsonObj.items():
            if k == 'NUTRIENT_0':
                conv_v = (float(v) * g_per_serv) / 100
                print('%s : %s (per 100 grams)   |   %s (per serving %s)' % (k, round(float(v)), round(float(conv_v)), jsonObj['FOODSERVING_WEIGHT_1']))

But when I try to use it on other, nearly identical pages on the same domain, it doesn't work. For example, if I use

page = requests.get("https://nutritiondata.self.com/facts/vegetables-and-vegetable-products/2383/2")

I get the error:

Traceback (most recent call last):
  File "scrape_test_2.py", line 20, in <module>
    jsonObj = json.loads(valid_json)
  File "/Users/benjamattesjaroen/anaconda3/lib/python3.7/json/__init__.py", line 348, in loads
    return _default_decoder.decode(s)
  File "/Users/benjamattesjaroen/anaconda3/lib/python3.7/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/Users/benjamattesjaroen/anaconda3/lib/python3.7/json/decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting ',' delimiter: line 1 column 5446 (char 5445)

Looking at the source of the two pages, they appear identical in the sense that both contain

<script type="text/javascript">
<!--
foodNutrients = { NUTRIENT_142: ........

i.e., the part being scraped.

I've been looking at this all day. Does anyone know how to make this script work for both pages? What is the problem here?

I would switch to using hjson, which allows unquoted keys: just extract the entire foodNutrients variable and parse it, rather than manipulating the string over and over.
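For example, hjson parses a JavaScript-style object literal with unquoted keys directly (a minimal sketch; the fragment below is a made-up excerpt in the shape of foodNutrients, not real data):

import hjson

# hjson is a superset of JSON that tolerates unquoted keys,
# so the raw JavaScript object literal parses without any string surgery.
obj = hjson.loads('{ NUTRIENT_0: "619", aifr: "[ -35, -10 ]" }')
print(obj['NUTRIENT_0'])  # 619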


Your error:

Currently your cleaning regex breaks because at least one of the source arrays has a different number of elements, so the regex is no longer appropriate. Examining only the first known occurrence...

In the first url, before cleaning with the regex, you have:

aifr:"[ -35, -10 ]"

After:

"aifr":"[-35,-10]"

In the second, you start with an array of a different length:

aifr:"[ 163, 46, 209, 179, 199, 117, 11, 99, 7, 5, 82 ]"

After the regex substitution, instead of:

"aifr":"[ 163, 46, 209, 179, 199, 117, 11, 99, 7, 5, 82 ]"

you get:

"aifr":"[163,"46",209,"179",199,"117",11,"99",7,"5",82]"

i.e., invalid JSON: the key:value pairs are no longer cleanly delimited.
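You can reproduce the failure in isolation (a minimal sketch using just the aifr fragments from the two pages, after whitespace removal):

import re

pat = r'([{,:])(\w+)([},:])'

# First page: the '-' signs stop \w+ from matching inside the array,
# so only the key gets quoted.
print(re.sub(pat, r'\1"\2"\3', ',aifr:"[-35,-10]"'))
# ,"aifr":"[-35,-10]"

# Second page: each match consumes its trailing comma, so every other
# element gets wrapped in quotes, corrupting the array string.
print(re.sub(pat, r'\1"\2"\3', ',aifr:"[163,46,209,179,199,117,11,99,7,5,82]"'))
# ,"aifr":"[163,"46",209,"179",199,"117",11,"99",7,"5",82]"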


Nutshell:

Using hjson is easier. Alternatively, update the regex appropriately to handle variable-length arrays.

import requests, re, hjson

urls = ['https://nutritiondata.self.com/facts/nut-and-seed-products/3071/1',
        'https://nutritiondata.self.com/facts/vegetables-and-vegetable-products/2383/2']
p = re.compile(r'foodNutrients = (.*?);')

with requests.Session() as s:
    for url in urls:
        r = s.get(url)
        # hjson handles the unquoted keys, so no regex cleanup is needed
        jsonObj = hjson.loads(p.findall(r.text)[0])
        serving_weight = jsonObj['FOODSERVING_WEIGHT_1']
        g_per_serv = int(serving_weight.split('(')[-1].split('g')[0])
        nutrient_0 = jsonObj['NUTRIENT_0']
        conv_v = float(nutrient_0) * g_per_serv / 100
        print('%s : %s (per 100 grams)   |   %s (per serving %s)' % ('NUTRIENT_0', round(float(nutrient_0)), round(float(conv_v)), serving_weight))
