How do I free the memory occupied by Segment Anything?



I am running this code in JupyterLab, using Facebook's segment-anything:

import cv2
import matplotlib.pyplot as plt
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry
import numpy as np
import gc
def show_anns(anns):
    if len(anns) == 0:
        return
    sorted_anns = sorted(anns, key=(lambda x: x['area']), reverse=True)
    ax = plt.gca()
    ax.set_autoscale_on(False)
    polygons = []
    color = []
    for ann in sorted_anns:
        m = ann['segmentation']
        img = np.ones((m.shape[0], m.shape[1], 3))
        color_mask = np.random.random((1, 3)).tolist()[0]
        for i in range(3):
            img[:,:,i] = color_mask[i]
        ax.imshow(np.dstack((img, m*0.35)))

sam = sam_model_registry["default"](checkpoint="VIT_H SAM Model/sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)
image = cv2.imread('Untitled Folder/292282 sample.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
sam.to(device='cuda')
masks = mask_generator.generate(image)
print(len(masks))
print(masks[0].keys())
plt.figure(figsize=(20,20))
plt.imshow(image)
show_anns(masks)
plt.axis('off')
plt.show() 
del(masks)
gc.collect()

Before running, memory consumption is about 200 MB; after the run finishes it is about 3.4 GB, and that memory is not released even if I close the notebook or re-run the program. How can I solve this?

It turns out the code is slightly flawed in that it never clears any cache on the GPU. A simple fix is to call PyTorch's torch.cuda.empty_cache() to clean up your VRAM before running on a new image. I found that it was actually stacking the generated embeddings up in memory, and I eventually ran out of memory even on my 16 GB VRAM AWS DL machine. Hope this helps!
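For example, if several images are processed in one session, a minimal per-image sketch of this cleanup might look like the following (the image_paths list and the loop are my own illustration, not part of the original notebook; mask_generator is an already-built SamAutomaticMaskGenerator as in the question's code):

import gc
import cv2
import torch

for path in image_paths:  # image_paths: hypothetical list of image files
    image = cv2.imread(path)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    masks = mask_generator.generate(image)
    # ... use masks here ...
    del masks                  # drop the Python references to the results
    gc.collect()               # reclaim host (CPU) memory
    torch.cuda.empty_cache()   # release PyTorch's cached GPU memory before the next image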

Here is an example code snippet of how I used it in their automatic_mask_generator_example notebook:

import sys
import torch
sys.path.append("..")
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator, SamPredictor
# Free up GPU memory before loading the model
import gc
gc.collect()
torch.cuda.empty_cache()
# -------------------------------------------
sam_checkpoint = "../models/sam_vit_l_0b3195.pth"
model_type = "vit_l"
device = "cuda"
sam = sam_model_registry[model_type](checkpoint=sam_checkpoint)
sam.to(device=device)
mask_generator = SamAutomaticMaskGenerator(sam)
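For the original question's scenario (memory that stays allocated after generate() finishes), the same idea can be applied after inference rather than only before loading the model. A minimal sketch, assuming sam, mask_generator, and masks are the objects created in the question's code; the teardown order shown here is just one reasonable choice:

import gc
import torch

del masks                  # drop the mask dictionaries
del mask_generator         # the generator keeps a reference to the model
sam.to('cpu')              # optionally move the weights off the GPU first
del sam                    # drop the model itself
gc.collect()               # let Python free the now-unreferenced objects
torch.cuda.empty_cache()   # return PyTorch's cached GPU blocks to the driver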
