Here is my views.py file:
from django.shortcuts import render
from .models import News
from django.core.paginator import Paginator
from django.db.models import Q
# For scraping part
import requests
from bs4 import BeautifulSoup
def news_list(request, *args, **kwargs):
    # Scraping part - START
    response = requests.get("http://www.iitg.ac.in/home/eventsall/events")
    soup = BeautifulSoup(response.content, "html.parser")
    cards = soup.find_all("div", attrs={"class": "newsarea"})
    iitg_title = []
    iitg_date = []
    iitg_link = []
    for card in cards[0:6]:
        iitg_date.append(card.find("div", attrs={"class": "ndate"}).text)
        iitg_title.append(card.find("div", attrs={"class": "ntitle"}).text.strip())
        iitg_link.append(card.find("div", attrs={"class": "ntitle"}).a['href'])
    # Scraping part - END

    # Storing the scraped data directly into the database from views.py - START
    for i in range(len(iitg_title)):
        News.objects.create(title=iitg_title[i], datess=iitg_date[i], linkss=iitg_link[i])
    # Storing the scraped data directly into the database from views.py - END

    queryset = News.objects.all()  # Getting all the objects from the database
    search_query = request.GET.get('q')
    if search_query:
        queryset = queryset.filter(
            Q(title__icontains=search_query) |
            Q(description__icontains=search_query)
        )
    paginator = Paginator(queryset, 5)  # Adding pagination
    page_number = request.GET.get('page')
    queryset = paginator.get_page(page_number)
    context = {
        'object_list': queryset
    }
    return render(request, 'news_list.html', context)
I have tried different approaches, such as adding unique=True in my models.py file, but that still doesn't work and raises errors. I can't work out the exact conditional lines to add to views.py so that it checks whether the data already exists in the database: if it doesn't exist, insert the scraped data, otherwise skip it. What I have in mind is roughly the sketch below.
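This is only a rough sketch of the check I'm trying to write (it assumes a news item counts as a duplicate when its title is already stored), and I'm not sure it's the right way:

    # Replacement for the storing loop in news_list: only insert unseen titles
    for i in range(len(iitg_title)):
        if not News.objects.filter(title=iitg_title[i]).exists():
            News.objects.create(title=iitg_title[i], datess=iitg_date[i], linkss=iitg_link[i])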
I think you can use get_or_create, e.g. News.objects.get_or_create(title=XYZ), and make the title field unique.
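A minimal sketch of how that could look, assuming your News model really has the title, datess and linkss fields used above and that the title is a reliable duplicate key (the field types and lengths below are guesses, adjust them to your actual model):

    # models.py (sketch) - unique=True stops a second row with the same title
    from django.db import models

    class News(models.Model):
        title = models.CharField(max_length=255, unique=True)
        datess = models.CharField(max_length=100)
        linkss = models.URLField()

    # views.py - inside the storing loop, instead of News.objects.create(...):
    for i in range(len(iitg_title)):
        News.objects.get_or_create(
            title=iitg_title[i],
            defaults={'datess': iitg_date[i], 'linkss': iitg_link[i]},
        )

get_or_create looks up a row by title and only creates one (filling in the defaults) when none exists, so repeated visits to the view won't insert duplicates. Also note that adding unique=True to a column whose table already contains duplicate rows will make the migration fail, which may be the error you hit; delete the existing duplicates (or start from an empty table) before applying that migration.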