How to load each URL from a sitemap and collect more data



I am trying to load all the links found in a site's sitemap, then load each of those links and pull more data (stock, size, and their IDs). So far the code finds all the links and converts them to .json, but when it gets to loading each page and collecting the extra data, it only acts on the last link from the sitemap. I need it to do this for every link in the sitemap. If anyone could help me with this, that would be great!

Thanks :)

def check_endpoint():
    url = 'https://shopnicekicks.com/sitemap_products_1.xml'
    page = requests.get(url)
    soup = BeautifulSoup(page.text, 'lxml')
    for url in soup.find_all('loc'):  # load the url and find all product links.
        produrl = url.text
        UrlDB.append(produrl)
        endpoint = produrl + '.json'  # take a product link and convert to .json
        JsonUrl = endpoint
    # load each product link and find variants.
    req = requests.get(JsonUrl)
    reqJson = json.loads(req.text)
    CartLink = JsonUrl.split("/")[2]
    CartLink = "https://{}".format(CartLink)
    for product in reqJson['product']['variants']:
        Variants = product['id']
        Size = product['title']
        Price = product['price']
        Stock = product['inventory_quantity']
        atclink = "Size = {}, Stock = {}, Link = {}, /cart/{}:1 ".format(Size, Stock, CartLink, Variants)
        print(atclink)  # print all variants

    return

See the comment in the middle of the example? Indent that comment and everything after it by four spaces, so that the per-product requests run inside the for loop instead of once after it finishes.
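The point can be shown without any network access. In the sketch below (a minimal stand-in: hypothetical example.com URLs and the standard-library XML parser instead of requests/BeautifulSoup), the per-URL work sits inside the loop body, so every `loc` entry produces an endpoint; if those lines were dedented, only the last URL would be processed.

```python
import xml.etree.ElementTree as ET

# A tiny stand-in sitemap (hypothetical URLs) so the loop logic can be
# demonstrated offline.
SITEMAP = """<urlset>
  <url><loc>https://example.com/products/shoe-a</loc></url>
  <url><loc>https://example.com/products/shoe-b</loc></url>
</urlset>"""

def collect_endpoints(xml_text):
    endpoints = []
    root = ET.fromstring(xml_text)
    for loc in root.iter('loc'):
        # Everything that must run once per product link belongs inside
        # this loop body; moving it out (dedenting) would leave only the
        # last URL's value in scope after the loop ends.
        endpoints.append(loc.text + '.json')
    return endpoints

print(collect_endpoints(SITEMAP))
```

The same rule applies to the original `check_endpoint`: the `requests.get(JsonUrl)` call and the variants loop must be indented to the same level as `JsonUrl = endpoint`.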
