Scrapy crawler always stops at the 1000th item



My Scrapy spider always stops at the 1000th request when it runs in a Kubernetes pod. I can't find anything wrong; it just closes the spider.

I have already tested it locally, both from the terminal and in Docker, and there is no problem there.

Please help me with this.

2021-09-23 09:36:41 [scrapy.core.engine] INFO: Closing spider (finished)
2021-09-23 09:36:41 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 360643,
'downloader/request_count': 1003,
'downloader/request_method_count/GET': 1000,
'downloader/request_method_count/POST': 3,
'downloader/response_bytes': 2597069,
'downloader/response_count': 1003,
'downloader/response_status_count/200': 1000,
'downloader/response_status_count/404': 3,
'elapsed_time_seconds': 85.16985,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2021, 9, 23, 9, 36, 41, 366720),
'httpcompression/response_bytes': 4896324,
'httpcompression/response_count': 997,
'item_scraped_count': 1000,
'log_count/DEBUG': 5137,
'log_count/INFO': 3016,
'log_count/WARNING': 1,
'memusage/max': 111157248,
'memusage/startup': 92839936,
'request_depth_max': 1,
'response_received_count': 1003,
'scheduler/dequeued': 1006,
'scheduler/dequeued/memory': 1006,
'scheduler/enqueued': 1006,
'scheduler/enqueued/memory': 1006,
'splash/render.html/request_count': 3,
'splash/render.html/response_count/200': 3,
'start_time': datetime.datetime(2021, 9, 23, 9, 35, 16, 196870)}
2021-09-23 09:36:41 [scrapy.core.engine] INFO: Spider closed (finished)

> Those stats usually mean the job ran fine. However, some sites enforce a strict limit on pagination and/or the number of items shown in search results. Can you actually reach the 1001st item in a browser?
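If it helps, here is a minimal sketch of how that check could be scripted instead of clicking through pages by hand. Everything site-specific in it is a placeholder and an assumption on my part: the spider name, the `https://example.com/search?page=` URL pattern, the page size, and the `div.result` selector all need to be replaced with whatever the real spider uses.

```python
import scrapy


class PaginationProbeSpider(scrapy.Spider):
    """Request the page that should contain item #1001 and log whether
    the site actually returns any results there."""

    # Placeholder name and page size -- adjust to the real site.
    name = "pagination_probe"
    results_per_page = 20

    def start_requests(self):
        # The page that would hold the 1001st result, given the page size.
        page_with_item_1001 = 1000 // self.results_per_page + 1
        # Placeholder URL pattern -- substitute the real search URL.
        url = f"https://example.com/search?page={page_with_item_1001}"
        yield scrapy.Request(url, callback=self.parse)

    def parse(self, response):
        rows = response.css("div.result")  # placeholder selector for result rows
        self.logger.info(
            "page %s returned %d result rows (HTTP %d)",
            response.url, len(rows), response.status,
        )
        # An empty page (or a redirect back to page 1) usually means the
        # site itself caps search results, which would match the clean
        # 1000-item cutoff in the stats above.
```

If that probe page comes back empty in the spider but shows items in a browser, the cutoff is on the crawler's side (and worth comparing settings between the pod and the local run); if it is empty in the browser too, the site simply stops serving results after 1000 items.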
