What is the best way to scrape multiple pages of a website that uses an offset, rather than page numbers, to change its content list?



I want to scrape this site: https://repositori.usu.ac.id/handle/123456789/165?offset=0
When the offset in the link is set to 0, the content looks like this: (screenshot of the site at offset 0)
When the offset is set to 1, the content looks like this: (screenshot of the site at offset 1)

Notice that the topmost item of the list has been removed and replaced by the item below it.
The site only ever shows 20 items per list; with every increment of 1 in the offset, one item is dropped from the top and one is appended at the bottom. It therefore takes an offset change of 20 to replace the list contents completely.

I want to build a scraper for this site, but I run into trouble as soon as I need to scrape more than 20 different items. Because the offset does not work like a page number, scraping several ranges always fetches the same items two or three times, which is not ideal. (The sketch below makes the overlap concrete.)
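To see why stepping the offset by 1 duplicates work, here is a small illustration (not the site's real data, just a model of the sliding 20-item window described above):

PAGE_SIZE = 20

def visible_items(offset, page_size=PAGE_SIZE):
    # Indices of the items shown when the URL uses this offset.
    return set(range(offset, offset + page_size))

# Stepping the offset by 1 re-fetches 19 of the previous 20 items...
print(len(visible_items(0) & visible_items(1)))   # -> 19
# ...while stepping by a full page of 20 fetches entirely new items.
print(len(visible_items(0) & visible_items(20)))  # -> 0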
Here is my code (ideally the offset entered should be 0, but it does not let me enter 1, and I don't know how to set the default offset to 0):

from unittest import result
import requests
from bs4 import BeautifulSoup
import csv
import urllib3.request
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

fakdep = '165'
offset = input('Please enter number of offset:')
url = 'https://repositori.usu.ac.id/handle/123456789/{}?offset={}0'.format(fakdep, offset)
headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36'
}
datas = []
count_offset = 0
for offset in range(1, 6):
    count_offset += 1
    print('Scraping Page:', count_offset)
    result = requests.get(url + str(offset), verify=False)
    #print(result)
    soup = BeautifulSoup(result.text, 'html.parser')
    items = soup.find_all('li', 'ds-artifact-item')
    for it in items:
        author = it.find('span', 'author h4').text
        title = ''.join(it.find('a', href=True).text.strip().replace('/n', ' '))
        year = it.find('span', 'date').text
        abstract = ''.join(it.find('div', 'artifact-abstract').text.strip().replace('/n', ' '))
        link = it.find('a')['href']
        datas.append([author, title, year, abstract, "https://repositori.usu.ac.id" + link])

kepala = ['Author', 'Title', 'Year', 'Abstract', 'Link']
thewriter = csv.writer(open('results/{}_{}.csv'.format(fakdep, offset), 'w', newline=''), delimiter=";")
thewriter.writerow(kepala)
for d in datas: thewriter.writerow(d)

I have not yet found another way to work around this.
I would appreciate any help.
Thank you!

You can drive the pagination entirely with the offset inside the for loop. Each page, i.e., each offset step, holds 20 items, and there are 598 items in total, so the pagination logic is range(0, 598, 20). (Incidentally, your code also inserts the offset into the URL twice: once through .format(), with a stray literal 0 appended, and again with url + str(offset), which compounds the overlap.) You can store the data in a pandas DataFrame and write it out as CSV; that is the simplest and most robust way to save the data on your local system.

import requests
from bs4 import BeautifulSoup
import urllib3
import pandas as pd

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

url = 'https://repositori.usu.ac.id/handle/123456789/165?offset={offset}'
headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36'
}

datas = []
# 598 items in total, 20 per page: step the offset by a full page
# so consecutive requests never overlap.
for offset in range(0, 598, 20):
    result = requests.get(url.format(offset=offset), headers=headers, verify=False)
    soup = BeautifulSoup(result.text, 'html.parser')
    items = soup.find_all('li', 'ds-artifact-item')
    for it in items:
        author = it.find('span', 'author h4').text
        # note: '\n' (newline), not '/n', is what needs replacing
        title = it.find('a', href=True).text.strip().replace('\n', ' ')
        year = it.find('span', 'date').text
        abstract = it.find('div', 'artifact-abstract').text.strip().replace('\n', ' ')
        link = it.find('a')['href']
        datas.append([author, title, year, abstract, "https://repositori.usu.ac.id" + link])

kepala = ['Author', 'Title', 'Year', 'Abstract', 'Link']
df = pd.DataFrame(datas, columns=kepala)
df.to_csv('out.csv', index=False)
#print(df)

Output:

Author  ...                                               Link
0     Angin, Tifany Perangin  ...  https://repositori.usu.ac.id/handle/123456789/...
1    Fandya, Rafif Akhmad Al  ...  https://repositori.usu.ac.id/handle/123456789/...
2         Rizkiansyah, Halqi  ...  https://repositori.usu.ac.id/handle/123456789/...
3      Sitompul, Ummi Balqis  ...  https://repositori.usu.ac.id/handle/123456789/...
4       Manalu, Hari Purnomo  ...  https://repositori.usu.ac.id/handle/123456789/...
..                       ...  ...                                                ...
593                 Caroline  ...  https://repositori.usu.ac.id/handle/123456789/...
594              Afif, Ridho  ...  https://repositori.usu.ac.id/handle/123456789/...
595    Putra, M.Ananda Rizki  ...  https://repositori.usu.ac.id/handle/123456789/...
596          Ignasius, Angga  ...  https://repositori.usu.ac.id/handle/123456789/...
597     Tarigan, Modalina Br  ...  https://repositori.usu.ac.id/handle/123456789/...
[598 rows x 5 columns]
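One caveat about the design choice above: the hard-coded total of 598 will silently miss records if the repository grows. A minimal alternative sketch, assuming (unverified) that the site returns a page with no li elements of class ds-artifact-item once the offset runs past the last record:

import requests
from bs4 import BeautifulSoup
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

url = 'https://repositori.usu.ac.id/handle/123456789/165?offset={offset}'
offset = 0
datas = []
while True:
    result = requests.get(url.format(offset=offset), verify=False)
    soup = BeautifulSoup(result.text, 'html.parser')
    items = soup.find_all('li', 'ds-artifact-item')
    if not items:
        break  # assumed end of listing: an out-of-range offset yields an empty page
    # ... parse each item exactly as in the loop above ...
    offset += 20  # advance by one full, non-overlapping page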
