What exactly should be passed as the response when making a FormRequest in Scrapy?


from scrapy import FormRequest
url = "https://stackoverflow.com/users/login"
fetch(url)
req = FormRequest.from_response(
    response,
    formid='login-form',
    formdata={'email': 'test@test.com',
              'password': 'testpw'},
    clickdata={'id': 'submit-button'},
)
fetch(req)

Using the code above in the scrapy shell, I can log in to Stack Overflow. However, I don't want to carry out this activity by typing command-line input, so I tried to log in by running the same commands through subprocess.

import subprocess
import scrapy
from scrapy import FormRequest
from subprocess import run
from bs4 import BeautifulSoup

class QuoteSpider(scrapy.Spider):
    name = 'stackover'
    start_urls = ['https://stackoverflow.com/users/login']
    run(["scrapy", "fetch", start_urls[0]], capture_output=True, text=True)

    def parse(self, response):
        req = FormRequest.from_response(
            response,
            formid='login-form',
            formdata={'email': 'test@test.com',
                      'password': 'testpw'},
            clickdata={'id': 'submit-button'},
        )
        # This call raises the TypeError shown below.
        run(["scrapy", "fetch", req], shell=True)

But it gives me this error:

TypeError: argument of type 'FormRequest' is not iterable

I also tried saving the response to an HTML file and reading that file back in as the response, and got the same error message as above.

with open("output.html","w") as f:
response = call(["scrapy","fetch", url], stdout=f, shell=True)
with open("output.html", encoding="utf-8") as f:
data = f.read()
response = BeautifulSoup(data, 'lxml')

I also tried getting the response as text, and again got the same error message.

r = run(["scrapy","fetch", start_urls[0]], capture_output=True)
response = r.stdout.decode()

I also tried making the FormRequest before the parse function is called, like this:

class QuoteSpider(scrapy.Spider):
    name = 'stackover'
    start_urls = ['https://stackoverflow.com/users/login']
    r = run(["scrapy", "fetch", start_urls[0]], capture_output=True)
    response = r.stdout.decode()
    # response is a str here, which triggers the AttributeError below.
    req = FormRequest.from_response(
        response,
        formid='login-form',
        formdata={'email': 'test@test.com',
                  'password': 'testpw'},
        clickdata={'id': 'submit-button'},
    )
    run(["scrapy", "fetch", req], shell=True)

    def parse(self, response):
        print(response)

And this time I got a new error:

AttributeError: 'str' object has no attribute 'encoding'

So, how can I run scrapy shell commands through subprocess to log in to Stack Overflow? And what exactly does the response argument of FormRequest.from_response in Scrapy take as input?
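For what it's worth, FormRequest.from_response expects a scrapy.http.Response object (which is what the scrapy shell binds to the name response after fetch(url)), not a raw HTML string, which is why the string-based attempts above fail. Below is a minimal sketch of building such an object by hand; reusing output.html as the page source is an assumption carried over from the snippets above:

from scrapy import FormRequest
from scrapy.http import HtmlResponse

url = "https://stackoverflow.com/users/login"
# Assumption: output.html holds the raw HTML of the login page saved earlier.
with open("output.html", encoding="utf-8") as f:
    html_text = f.read()

# Wrap the raw HTML in an HtmlResponse so from_response can locate the form.
response = HtmlResponse(url=url, body=html_text, encoding='utf-8')

req = FormRequest.from_response(
    response,
    formid='login-form',
    formdata={'email': 'test@test.com',
              'password': 'testpw'},
    clickdata={'id': 'submit-button'},
)

Note that the resulting req still has to be scheduled by a running crawler; it cannot be handed back to a scrapy fetch subprocess, which only accepts URL strings.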

I am learning Scrapy and trying out various ways of logging in to Stack Overflow to practice web scraping.

from scrapy import FormRequest
from scrapy import Spider

class StackSpider(Spider):
    name = 'stack_spider'

    # List of URLs for the initial requests. Can be one or many.
    # The default parse() method is called for the start responses.
    start_urls = ["https://stackoverflow.com/users/login"]

    # Parse the users/login page: fill in the form and move on.
    def parse(self, response):
        yield FormRequest.from_response(
            response,
            formid='login-form',
            formdata={'email': 'test@test.com',
                      'password': 'testpw'},
            clickdata={'id': 'submit-button'},
            callback=self.parse_login
        )

    # Parse the login result.
    def parse_login(self, response):
        print('Checking logging in here.')
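The parse_login callback above only prints a marker. As a follow-up, one hedged way to check whether the login actually worked, assuming (as a heuristic, not a documented guarantee) that a successful login redirects away from the login page:

    # Drop-in replacement for parse_login inside StackSpider.
    def parse_login(self, response):
        # Heuristic: a successful login usually redirects off /users/login.
        if "users/login" not in response.url:
            self.logger.info("Login appears successful: %s", response.url)
        else:
            self.logger.warning("Still on the login page; login probably failed.")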

You can run this code from the terminal with scrapy crawl stack_spider.
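If the goal is to avoid the command line entirely, Scrapy can also run a spider from a plain Python script through its documented CrawlerProcess API, instead of shelling out with subprocess. A minimal sketch, assuming the StackSpider class above is defined in (or importable from) the same file:

from scrapy.crawler import CrawlerProcess

# Runs the spider in-process; no scrapy CLI or subprocess involved.
process = CrawlerProcess()
process.crawl(StackSpider)
process.start()  # blocks until the crawl finishes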
