My first Scrapy spider doesn't work with a MySQL database



I'm new to web scraping, my code doesn't work, and I can't figure out why! I want to scrape this site (http://quotes.toscrape.com) and save the data into a MySQL database. So I wrote a basic spider:

import scrapy
from ..items import QuotetutorialItem
class QuoteSpider(scrapy.Spider):
    name = 'quotes'
    start_urls = [
        'http://quotes.toscrape.com/'
    ]
    def parse(self, response):
        items = QuotetutorialItem()
        all_div_quotes = response.css('div.quote')
        for quotes in all_div_quotes:
            title = quotes.css('span.text::text').extract()
            author = quotes.css('.author::text').extract()
            tag = quotes.css('.tag::text').extract()
            items['title'] = title
            items['author'] = author
            items['tag'] = tag
            yield items

This is my "pipelines.py" code:

# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
# Scraping data -> Item Containers -> JSON/CSV files
# Scraping data -> Item Containers -> Pipeline -> SQL/Mongo database
import mysql.connector
class QuotetutorialPipeline(object):
    def __int__(self):
        self.create_connection()
        self.create_table()
    def create_connection(self):
        self.conn = mysql.connector.connect(
                host = 'localhost',
                user = 'root',
                passwd = 'jozefleonel',
                database = 'myquotes'
            )
        self.curr = self.conn.cursor()
    def create_table(self):
        self.curr.execute("""DROP TABLE IF EXISTS quotes_tb""")
        self.curr.execute("""create table quotes_tb(
                        title text,
                        author text,
                        tag text
                        )""")
    def process_item(self, item, spider):
        self.store_db(item)
        return item
    def store_db(self,item):
        self.curr.execute("""insert into quotes_tb values (%s,%s,%s)""", (
            item['title'][0],
            item['author'][0],
            item['tag'][0]
            ))
        self.conn.commit()

You will find the error messages below. Thanks! ^^

The full error output:

(ScrapyTutorial) D:\ScrapyTutorial\quotetutorial>scrapy crawl quotes
2019-06-21 14:43:36 [scrapy.utils.log] INFO: Scrapy 1.6.0 started (bot: quotetutorial)
2019-06-21 14:43:36 [scrapy.utils.log] INFO: Versions: lxml 4.3.4.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 19.2.1, Python 3.6.6 (v3.6.6:4cf1f54eb7, Jun 27 2018, 03:37:03) [MSC v.1900 64 bit (AMD64)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1c  28 May 2019), cryptography 2.7, Platform Windows-8.1-6.3.9600-SP0
2019-06-21 14:43:36 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'quotetutorial', 'NEWSPIDER_MODULE': 'quotetutorial.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['quotetutorial.spiders']}
2019-06-21 14:43:37 [scrapy.extensions.telnet] INFO: Telnet Password: e7bf79ce647de417
2019-06-21 14:43:37 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2019-06-21 14:43:45 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-06-21 14:43:45 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-06-21 14:43:48 [scrapy.middleware] INFO: Enabled item pipelines:
['quotetutorial.pipelines.QuotetutorialPipeline']
2019-06-21 14:43:48 [scrapy.core.engine] INFO: Spider opened
2019-06-21 14:43:48 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-06-21 14:43:48 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-06-21 14:43:49 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://quotes.toscrape.com/robots.txt> (referer: None)
2019-06-21 14:43:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://quotes.toscrape.com/> (referer: None)
2019-06-21 14:43:50 [scrapy.core.scraper] ERROR: Error processing {'author': ['Albert Einstein'],
 'tag': ['change', 'deep-thoughts', 'thinking', 'world'],
 'title': ['"The world as we have created it is a process of our thinking. It '
           'cannot be changed without changing our thinking."']}
Traceback (most recent call last):
  File "d:\scrapytutorial\lib\site-packages\twisted\internet\defer.py", line 654, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "D:\ScrapyTutorial\quotetutorial\quotetutorial\pipelines.py", line 39, in process_item
    self.store_db(item)
  File "D:\ScrapyTutorial\quotetutorial\quotetutorial\pipelines.py", line 43, in store_db
    self.curr.execute("""insert into quotes_tb values (%s,%s,%s)""", (
AttributeError: 'QuotetutorialPipeline' object has no attribute 'curr'
(the same AttributeError traceback repeats for each of the remaining nine quote items on the page)
2019-06-21 14:43:50 [scrapy.core.engine] INFO: Closing spider (finished)
2019-06-21 14:43:50 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 446,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 2701,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/404': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 6, 21, 12, 43, 50, 376034),
 'log_count/DEBUG': 2,
 'log_count/ERROR': 10,
 'log_count/INFO': 9,
 'response_received_count': 2,
 'robotstxt/request_count': 1,
 'robotstxt/response_count': 1,
 'robotstxt/response_status_count/404': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2019, 6, 21, 12, 43, 48, 610377)}
2019-06-21 14:43:50 [scrapy.core.engine] INFO: Spider closed (finished)

I found the solution! (It is __init__, not __int__; I had forgotten the "i".)

In process_item you are calling

self.store_db(item)

store_db then tries to use the database cursor object curr, which is never defined anywhere in the pipeline.
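The root cause is the misspelled constructor: Python only calls __init__ automatically when an object is created; a method named __int__ is just an ordinary method that nothing ever invokes, so create_connection never runs and self.curr is never set. A minimal, self-contained demonstration of the effect:

```python
class Broken:
    def __int__(self):          # typo: Python never calls this on construction
        self.curr = "cursor"

class Fixed:
    def __init__(self):         # the real constructor hook
        self.curr = "cursor"

print(hasattr(Broken(), "curr"))  # False: the misspelled method never ran
print(hasattr(Fixed(), "curr"))   # True
```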

Here is what I think you should do:

class QuotetutorialPipeline(object):
    def __init__(self):
        self.curr, self.conn = self.create_connection()
        self.curr = self.create_table(self.curr)
    def create_connection(self):
        conn = mysql.connector.connect(
                host = 'localhost',
                user = 'root',
                passwd = 'jozefleonel',
                database = 'myquotes'
            )
        return conn.cursor(), conn
    def create_table(self, curr):
        curr.execute("""DROP TABLE IF EXISTS quotes_tb""")
        curr.execute("""create table quotes_tb(
                        title text,
                        author text,
                        tag text
                        )""")
        return curr
    def process_item(self, item, spider):
        self.store_db(item)
        return item
    def store_db(self, item):
        self.curr.execute("""insert into quotes_tb values (%s,%s,%s)""", (
            item['title'][0],
            item['author'][0],
            item['tag'][0]
            ))
        self.conn.commit()

We return the cursor and connection objects from create_connection, and the cursor object from create_table.

We can now use them in store_db.
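As a side note, Scrapy pipelines conventionally open and close resources in the open_spider/close_spider hooks rather than in __init__, so the connection is also closed cleanly when the crawl ends. A minimal sketch of that pattern (using the stdlib sqlite3 here so it runs as-is; mysql.connector follows the same shape, with %s placeholders instead of ?):

```python
import sqlite3  # stand-in for mysql.connector so the sketch is self-contained

class QuotesPipeline:
    def open_spider(self, spider):
        # Called once when the spider starts: open the connection and cursor.
        self.conn = sqlite3.connect(":memory:")
        self.curr = self.conn.cursor()
        self.curr.execute("CREATE TABLE IF NOT EXISTS quotes_tb "
                          "(title TEXT, author TEXT, tag TEXT)")

    def close_spider(self, spider):
        # Called once when the spider finishes: flush and close cleanly.
        self.conn.commit()
        self.conn.close()

    def process_item(self, item, spider):
        # Parameterized insert; sqlite3 uses ? where mysql.connector uses %s.
        self.curr.execute("INSERT INTO quotes_tb VALUES (?, ?, ?)",
                          (item['title'][0], item['author'][0], item['tag'][0]))
        return item
```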
