Python Scrapy - saving a 'category' for each item based on the first webpage



I am scraping BBC Food recipes. The logic is as follows:

The main page has about 20 cuisines
-> within each cuisine there are usually about 20 recipes per letter, over 1-3 pages
-> within each recipe I scrape about 6 things (ingredients, rating, etc.)

So my logic was: go to the main page, create a request, extract all the cuisine links, then follow each of them; from there, extract each page of recipes, follow each recipe link, and finally gather all the data from each recipe. Note that this is not complete yet, as I still need to implement the spider to loop over all the letters.

I would like to have a 'category' column, i.e. for every entry scraped via the 'African cuisine' link a column that says 'African', for entries from the 'Italian cuisine' link one that says 'Italian', and so on for all entries.

Desired result:

cook_time  prep_time  name  cuisine
10         30         A      italian
20         10         B      italian
30         20         C      indian
20         10         D      indian
30         20         E      indian

Here is my spider:

import scrapy
from recipes_cuisines.items import RecipeItem

class ItalianSpider(scrapy.Spider):

    name = "italian_spider"

    def start_requests(self):
        start_urls = ['https://www.bbc.co.uk/food/cuisines']
        for url in start_urls:
            yield scrapy.Request(url=url, callback=self.parse_cuisines)

    def parse_cuisines(self, response):
        cuisine_cards = response.xpath('//a[contains(@class,"promo__cuisine")]/@href').extract()
        for url in cuisine_cards:
            yield response.follow(url=url, callback=self.parse_main)

    def parse_main(self, response):
        recipe_cards = response.xpath('//a[contains(@class,"main_course")]/@href').extract()
        for url in recipe_cards:
            yield response.follow(url=url, callback=self.parse_card)
        next_page = response.xpath('//div[@class="pagination gel-wrap"]/ul[@class="pagination__list"]/li[@class="pagination__list-item pagination__priority--0"]/a[@class="pagination__link gel-pica-bold"]/@href').get()
        if next_page is not None:
            next_page_url = response.urljoin(next_page)
            print(next_page_url)
            yield scrapy.Request(url=next_page_url, callback=self.parse_main)

    def parse_card(self, response):
        item = RecipeItem()
        item['name'] = response.xpath('//h1[contains(@class,"title__text")]/text()').extract()
        item['prep_time'] = response.xpath('//div[contains(@class,"recipe-metadata-wrap")]/p[@class="recipe-metadata__prep-time"]/text()').extract_first()
        item['cook_time'] = response.xpath('//p[contains(@class,"cook-time")]/text()').extract_first()
        item['servings'] = response.xpath('//p[contains(@class,"serving")]/text()').extract_first()
        item['ratings_amount'] = response.xpath('//div[contains(@class,"aggregate-rating")]/span[contains(@class,"aggregate-rating__total")]/text()').extract()
        #item['ratings_amount'] = response.xpath('//*[@id="main-content"]/div[1]/div[4]/div/div[1]/div/div[1]/div[2]/div[1]/span[2]/text()').extract()
        item['ingredients'] = response.css('li.recipe-ingredients__list-item > a::text').extract()
        return item

And the items:

import scrapy

class RecipeItem(scrapy.Item):
    name = scrapy.Field()
    prep_time = scrapy.Field()
    cook_time = scrapy.Field()
    servings = scrapy.Field()
    ratings_amount = scrapy.Field()
    rating = scrapy.Field()
    ingredients = scrapy.Field()
    cuisine = scrapy.Field()

Note: I am saving the output via

scrapy crawl italian_spider -o test.csv
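
As an aside, if the CSV column order matters (as in the desired output above), one option is Scrapy's FEED_EXPORT_FIELDS setting, which fixes which fields are exported and in what order. A minimal sketch, set per-spider via custom_settings:

class ItalianSpider(scrapy.Spider):

    name = "italian_spider"
    # fix the exported columns and their order in the CSV feed
    custom_settings = {
        'FEED_EXPORT_FIELDS': ['cook_time', 'prep_time', 'name', 'cuisine'],
    }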

I have read the documentation and tried a few things, such as adding the extracted cuisine in the parse_cuisines or parse_main methods, but everything produced errors.

There are two approaches here. The most common one is to pass some information from one page to the next, i.e. using cb_kwargs in your scrapy.Request:

def parse_cuisine(self, response):
    cuisine = response.xpath('//h1/text()').get()
    for recipe_url in response.xpath('//div[@id="az-recipes--recipes"]//a[.//h3]/@href').getall():
        yield scrapy.Request(
            url=response.urljoin(recipe_url),
            callback=self.parse_recipe,
            cb_kwargs={'cuisine': cuisine},
        )

def parse_recipe(self, response, cuisine):
    print(cuisine)
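
Applied to the spider in the question, a minimal sketch (reusing the original selectors unverified; extracting the cuisine name from the card's text is an assumption) would thread the value through every callback, including the pagination request, and finally store it on the item:

def parse_cuisines(self, response):
    # iterate over the cuisine cards themselves, so each link's text
    # can serve as the category for everything found behind it
    for card in response.xpath('//a[contains(@class,"promo__cuisine")]'):
        url = card.xpath('./@href').get()
        # assumption: the card's visible text is a usable cuisine name
        cuisine = card.xpath('normalize-space(.)').get()
        yield response.follow(url, callback=self.parse_main,
                              cb_kwargs={'cuisine': cuisine})

def parse_main(self, response, cuisine):
    for url in response.xpath('//a[contains(@class,"main_course")]/@href').extract():
        yield response.follow(url, callback=self.parse_card,
                              cb_kwargs={'cuisine': cuisine})
    next_page = response.xpath('//div[@class="pagination gel-wrap"]/ul[@class="pagination__list"]/li[@class="pagination__list-item pagination__priority--0"]/a[@class="pagination__link gel-pica-bold"]/@href').get()
    if next_page is not None:
        # keep forwarding the cuisine when following pagination
        yield response.follow(next_page, callback=self.parse_main,
                              cb_kwargs={'cuisine': cuisine})

def parse_card(self, response, cuisine):
    item = RecipeItem()
    item['cuisine'] = cuisine
    # ... fill the remaining fields exactly as in the original parse_card ...
    return item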

But on this site you can also find it on the recipe page itself (in the JSON that also holds the ingredients, once parsed):

import json

def parse_recipe(self, response):
    recipe_raw = response.xpath('//script[@type="application/ld+json"][contains(., \'"@type":"Recipe"\')]/text()').get()
    recipe = json.loads(recipe_raw)
    cuisine = recipe['recipeCuisine']
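
Since the same JSON block carries other recipe metadata as well, you could fill several item fields from it at once. A sketch under the assumption that the page follows the schema.org Recipe vocabulary (I have not verified which of these fields BBC Food actually populates):

def parse_recipe(self, response):
    recipe_raw = response.xpath('//script[@type="application/ld+json"][contains(., \'"@type":"Recipe"\')]/text()').get()
    recipe = json.loads(recipe_raw)
    item = RecipeItem()
    item['name'] = recipe.get('name')
    item['cuisine'] = recipe.get('recipeCuisine')
    # schema.org's Recipe type uses 'recipeIngredient' for the ingredient list
    item['ingredients'] = recipe.get('recipeIngredient')
    return item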

Update: the XPath '//script[@type="application/ld+json"][contains(., '"@type":"Recipe"')]/text()' finds a script node that has a type attribute with the value application/ld+json and whose text contains the string "@type":"Recipe".
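
For illustration, after json.loads the matched script text becomes a plain Python dict, roughly of this shape (an invented minimal example, not actual BBC markup):

# invented minimal example of the parsed JSON-LD payload
recipe = {
    '@type': 'Recipe',
    'name': 'Spaghetti carbonara',
    'recipeCuisine': 'Italian',
}
print(recipe['recipeCuisine'])  # -> Italian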
