from django.db import models


class Category(models.Model):
    title = models.CharField(max_length=50)


class Tag(models.Model):
    name = models.CharField(max_length=50)


class Video(models.Model):
    video = models.FileField(upload_to='xxx/')


class Exercise(models.Model):
    name = models.CharField(max_length=50)
    video = models.ForeignKey(Video, on_delete=models.CASCADE)
    description = models.CharField(max_length=250)
    category = models.ForeignKey(Category, on_delete=models.CASCADE)
    tag = models.ManyToManyField(Tag)  # ManyToManyField does not take on_delete


class Data(models.Model):
    # Relation is assumed to be defined elsewhere in the project
    relation = models.ForeignKey(Relation, on_delete=models.CASCADE)
    exercise = models.ForeignKey(Exercise, on_delete=models.CASCADE)
Correspondingly, the code that fetches this data takes far too long. How can I reduce that, or is there a better way to handle this situation?
sam = []
for each in Data.objects.all():
    sam.append(
        {
            "name": each.exercise.name,
            "url": each.exercise.video.video.url,
            "description": each.exercise.description,
            "category": each.exercise.category.title,
            "tag": each.exercise.tag.name,
        }
    )
You need to use .select_related(…) [Django-doc] and .prefetch_related(…) [Django-doc] to reduce the number of queries the for loop is making; otherwise it will run roughly four extra queries for every Data object. So you can do this:
for each in Data.objects.select_related(
    "exercise__video",
    "exercise__category",
).prefetch_related(
    "exercise__tag",
).all():
    sam.append(
        {
            "name": each.exercise.name,
            "url": each.exercise.video.video.url,
            "description": each.exercise.description,
            "category": each.exercise.category.title,
            "tags": [t.name for t in each.exercise.tag.all()],
        }
    )
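If you want to check that this actually cuts the query count, here is a minimal sketch using CaptureQueriesContext from django.test.utils (the loop body is trimmed for brevity; the expected total of two queries, the joined main query plus one IN query for the prefetched tags, is an assumption that holds for this particular queryset):

from django.db import connection
from django.test.utils import CaptureQueriesContext

with CaptureQueriesContext(connection) as ctx:
    sam = []
    for each in Data.objects.select_related(
        "exercise__video",
        "exercise__category",
    ).prefetch_related(
        "exercise__tag",
    ):
        sam.append({"name": each.exercise.name})  # trimmed for brevity

# Stays at 2 queries instead of growing with the number of Data rows.
print(len(ctx.captured_queries))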
Often it is not the database that is slow, but the queries you send to it. This is especially true with a large database such as Postgres. A query that returns 5 million rows might take only 0.1 seconds to run on the database server, yet shipping all those rows to your application and processing them in Python can take 100 times as long. Avoid doing database reads in Python wherever possible and do all the work in SQL instead (just hand the results to Python). Create a view that contains all the fields:
CREATE OR REPLACE VIEW all_exercises AS
    SELECT exercise.*, data.id AS data_id, data.relation_id  -- data.id is aliased because exercise.* already provides an id column
    FROM exercise
    JOIN data ON data.exercise_id = exercise.id;

SELECT * FROM all_exercises WHERE id = 1234;  -- this will run instantly even though the initial query was slow
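To read that view back from Django without pulling everything into the ORM, one option is a raw cursor query. This is only a sketch, assuming the all_exercises view above exists in the database Django is connected to and that exercise/data are the actual table names:

from django.db import connection


def get_exercise_row(exercise_id):
    # The database does the joining inside the view; Python only receives the final row.
    with connection.cursor() as cursor:
        cursor.execute("SELECT * FROM all_exercises WHERE id = %s", [exercise_id])
        columns = [col[0] for col in cursor.description]
        row = cursor.fetchone()
    return dict(zip(columns, row)) if row else None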