Web scraper behind authentication

I am a novice programmer trying to speed up my data-analysis workflow by automatically converting .ict files into .csv files.

I am trying to write a Python program that can easily convert the .ict files on NASA's Earthdata site into .csv files for data analysis. My plan is to build a scraper that fetches these files, but they sit behind a user-authentication wall. The dataset I want to access is available at the following link: https://asdc.larc.nasa.gov/data/AJAX/O3_1/2018/02/28/AJAX-O3_ALPHA_20180228_R1_F220.ict

Here is the code I generated with https://curlconverter.com/# and then extended to POST the form data to "log in" to my session:

import requests
from bs4 import BeautifulSoup

cookies = {
    '_ga': '',
    '_gid': '',
    '_gat_GSA_ENOR0': '1',
    '_gat_UA-62340125-1': '1',
    '_gat_eui_tracker': '1',
    '_gat_UA-50960810-3': '1',
    '_urs-gui_session': '',
    '_gat_UA-62340125-2': '1',
}

headers = {
    'Connection': 'keep-alive',
    'Cache-Control': 'max-age=0',
    'sec-ch-ua': '" Not A;Brand";v="99", "Chromium";v="96", "Google Chrome";v="96"',
    'sec-ch-ua-mobile': '?0',
    'sec-ch-ua-platform': '"macOS"',
    'Upgrade-Insecure-Requests': '1',
    'Origin': 'https://urs.earthdata.nasa.gov',
    'Content-Type': 'application/x-www-form-urlencoded',
    'User-Agent': '',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
    'Sec-Fetch-Site': 'same-origin',
    'Sec-Fetch-Mode': 'navigate',
    'Sec-Fetch-User': '?1',
    'Sec-Fetch-Dest': 'document',
    'Referer': 'https://urs.earthdata.nasa.gov/oauth/authorize?response_type=code&client_id=OLpAZlE4HqIOMr0TYqg7UQ&redirect_uri=https%3A%2F%2Fd53njncz5taqi.cloudfront.net%2Furs_callback&state=https%3A%2F%2Fsearch.earthdata.nasa.gov%2Fsearch%3Fee%3Dprod',
    'Accept-Language': 'en-US,en;q=0.9',
}

data = {
    'utf8': '',
    'authenticity_token': '',
    'username': '',
    'password': '',
    'client_id': '',
    'redirect_uri': '',
    'response_type': 'code',
    'state': 'https://search.earthdata.nasa.gov/search?ee=prod',
    'stay_in': '1',
    'commit': 'Log in'
}
# One-off POST: no session, so any cookies set by the response are discarded
response = requests.post('https://urs.earthdata.nasa.gov/login', headers=headers, cookies=cookies, data=data)

# Retry with a session so cookies persist across requests
s = requests.Session()
s.post('https://urs.earthdata.nasa.gov/login', headers=headers, cookies=cookies, data=data)
response = s.get("https://asdc.larc.nasa.gov/data/AJAX/O3_1/2018/02/28/AJAX-O3_ALPHA_20180228_R1_F220.ict")
response

result = requests.get('https://asdc.larc.nasa.gov/data/AJAX/O3_1/2018/02/28/AJAX-O3_ALPHA_20180228_R1_F220.ict')
result.status_code
result.headers
content = result.content
soup = BeautifulSoup(content, features='lxml')
print(soup.prettify())

This print call just shows me the HTML of the login page. Does anyone know how to reach the data on the other side of the login through Python?
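One way to tell programmatically whether a response is the real data or the login page is to check the body for login-page markers. This is a hypothetical helper (the function name and the markers are my own assumptions, not part of the post), but it makes the failure above easy to detect:

```python
def looks_like_login_page(html):
    # URS serves its sign-in form when a request is unauthenticated;
    # checking for the login form URL or page title is a crude but
    # effective test for "did I actually get the data?"
    return ('urs.earthdata.nasa.gov/login' in html
            or 'Earthdata Login' in html)

# Usage with the response above:
# if looks_like_login_page(result.text):
#     print('Not authenticated yet')
```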

There are a few things missing from data, such as the value of authenticity_token and the encoded state value. Here is how I did it. Before running the script, make sure to fill in the username and password fields accordingly.

import requests
from bs4 import BeautifulSoup

url = 'https://urs.earthdata.nasa.gov/oauth/authorize?splash=false&client_id=iQGRa5KtDl_e-fgYqB5x5Q&response_type=code&redirect_uri=https%3A%2F%2Fasdc.larc.nasa.gov%2Fdata%2Furs&state=aHR0cDovL2FzZGMubGFyYy5uYXNhLmdvdi9kYXRhL0FKQVgvTzNfMS8yMDE4LzAyLzI4L0FKQVgtTzNfQUxQSEFfMjAxODAyMjhfUjFfRjIyMC5pY3Q'
link = 'https://urs.earthdata.nasa.gov/login'

with requests.Session() as s:
    s.headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36'
    # Load the login form and collect every named input, which picks up
    # the hidden fields (authenticity_token, the pre-encoded state, etc.)
    r = s.get(url)
    soup = BeautifulSoup(r.text, "lxml")
    payload = {i['name']: i.get('value', '') for i in soup.select('input[name]')}
    payload['username'] = 'your_username'
    payload['password'] = 'your_password'
    res = s.post(link, data=payload)
    print(res.text)
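Once the session above is authenticated, the same session's GET on the .ict URL should return the raw ICARTT text instead of the login page. For the .csv conversion step the question actually asks about, here is a rough sketch. The function name is mine, and it assumes a comma-delimited ICARTT file whose first header field gives the header line count, with the column names on the last header line (the usual ICARTT layout):

```python
def ict_to_csv_lines(ict_text):
    """Convert the text of a comma-delimited ICARTT (.ict) file to CSV lines.

    ICARTT files begin with "<n_header_lines>, <format_index>"; the last
    header line names the data columns, and comma-separated data rows
    follow immediately after the header block.
    """
    lines = ict_text.splitlines()
    n_header = int(lines[0].split(',')[0])   # header line count from line 1
    column_line = lines[n_header - 1]        # variable short names
    data_lines = lines[n_header:]            # already comma-delimited
    return [column_line] + data_lines

# Usage with the fetched file, e.g. res = s.get('...AJAX-O3_ALPHA_20180228_R1_F220.ict'):
# with open('AJAX-O3_ALPHA_20180228_R1_F220.csv', 'w') as f:
#     f.write('\n'.join(ict_to_csv_lines(res.text)) + '\n')
```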

Latest update