Exporting a .json file from Scrapy in the exact format needed for Tipue Search



I'm new to Python and Scrapy, so please bear with me. I use Tipue Search on my website, and I need Scrapy to export the scraped data in a specific .json format for the search to work. The JSON file should look like this:

{"pages": [
    {"title": "x", "text": "x", "tags": "x", "url": "x"},
    {"title": "x", "text": "x", "tags": "x", "url": "x"},
    {"title": "x", "text": "x", "tags": "x", "url": "x"}
]}

But after hours of testing, I can only get it into this format:

[
{"pages": {"title": "x", "text": "x", "tags": "x", "url": "x"}},
{"pages": {"title": "x", "text": "x", "tags": "x", "url": "x"}},
{"pages": {"title": "x", "text": "x", "tags": "x", "url": "x"}}
]

Tipue Search doesn't recognize this format, which keeps the whole search feature from working. How can I get the .json file into the exact format I mentioned first? I run the spider with the command scrapy runspider techbbs.py -o test.json -t json, and I'm not using any pipelines or item exporters.

My spider looks like this:

# -*- coding: utf-8 -*-
import scrapy


class TechbbsItem(scrapy.Item):
    pages = scrapy.Field()
    title = scrapy.Field()
    text = scrapy.Field()
    tags = scrapy.Field()
    url = scrapy.Field()


class TechbbsSpider(scrapy.Spider):
    name = 'techbbs'
    allowed_domains = ['bbs.io-tech.fi']
    start_urls = ['https://bbs.io-tech.fi/forums/prosessorit-emolevyt-ja-muistit.73/?prefix_id=1']

    def parse(self, response):
        links = response.css('a.PreviewTooltip::attr(href)').extract()
        for link in links:
            yield scrapy.Request(response.urljoin(link), callback=self.parse_product)

    def parse_product(self, response):
        product_title = response.xpath('normalize-space(//h1/span/following-sibling::text())').extract()
        product_text = response.xpath('//b[contains(.,"Hinta:")]/following-sibling::text()[1]').re('([0-9]+)')
        product_tags = response.xpath('//a/span[@itemprop]').extract()
        product_url = response.xpath('//html/head/link[7]/@href').extract()
        items = []
        for title, text, tags, url in zip(product_title, product_text, product_tags, product_url):
            item = TechbbsItem()
            item['pages'] = {}
            item['pages']['title'] = title
            item['pages']['text'] = text
            item['pages']['tags'] = tags
            item['pages']['url'] = url
            items.append(item)
        return items
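For clarity, the target shape itself is easy to build from parallel lists of fields with the standard json module; the sample values below are hypothetical stand-ins for the scraped data, not real forum content:

```python
import json

# Hypothetical sample data standing in for the four scraped field lists.
titles = ["CPU deal", "RAM deal"]
texts = ["250", "120"]
tags = ["Intel", "Corsair"]
urls = ["https://example.com/1", "https://example.com/2"]

# Each page is one flat dict; the "pages" wrapper appears exactly once,
# around the whole list, rather than on every item.
pages = [
    {"title": t, "text": x, "tags": g, "url": u}
    for t, x, g, u in zip(titles, texts, tags, urls)
]

print(json.dumps({"pages": pages}, indent=2))
```

The key point is that "pages" belongs to the file as a whole, while the per-item dicts stay flat.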

I would do this by writing a custom item exporter:

  1. Inherit from JsonItemExporter.
  2. Override the start_exporting() and finish_exporting() methods to get the output you want (look at the source if you run into trouble).
  3. Activate your exporter with the FEED_EXPORTERS setting.
