PyTorch: how do I free the memory occupied by segment-anything?

pkln4tw6 · posted 2023-04-12 · in: Other
Follow (0) | Answers (1) | Views (309)

I ran this code in JupyterLab, using Facebook's segment-anything:

import cv2
import matplotlib.pyplot as plt
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry
import numpy as np
import gc

def show_anns(anns):
    """Overlay each mask on the current axes as a random translucent color."""
    if len(anns) == 0:
        return
    sorted_anns = sorted(anns, key=(lambda x: x['area']), reverse=True)
    ax = plt.gca()
    ax.set_autoscale_on(False)
    for ann in sorted_anns:
        m = ann['segmentation']  # boolean HxW mask
        img = np.ones((m.shape[0], m.shape[1], 3))
        color_mask = np.random.random((1, 3)).tolist()[0]
        for i in range(3):
            img[:, :, i] = color_mask[i]
        # use the mask as a 35%-opacity alpha channel over the image
        ax.imshow(np.dstack((img, m * 0.35)))

sam = sam_model_registry["default"](checkpoint="VIT_H SAM Model/sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)
image = cv2.imread('Untitled Folder/292282 sample.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
sam.to(device='cuda')
masks = mask_generator.generate(image)
print(len(masks))
print(masks[0].keys())
plt.figure(figsize=(20,20))
plt.imshow(image)
show_anns(masks)
plt.axis('off')
plt.show() 
del masks
gc.collect()

Before the run, memory consumption is about 200 MB; after it finishes it is about 3.4 GB, and this memory is not released even if I shut down the notebook or re-run the program. How can I solve this?
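As background for the question, a minimal stdlib-only sketch (using a stand-in class, not the real SAM model) shows that `del` plus `gc.collect()` does free the Python object itself — so memory that remains allocated after the run is being held somewhere else, outside the Python objects you deleted:

```python
import gc
import weakref

class FakeModel:
    """Stand-in for a large model object (hypothetical, for illustration)."""
    pass

sam = FakeModel()
probe = weakref.ref(sam)   # observe the object without keeping it alive

del sam                    # drop the only strong reference
gc.collect()               # force a collection pass
print(probe() is None)     # True: the Python object itself was freed
```

This is why, as the answer below explains, the leftover gigabytes on the GPU have to be released through PyTorch's allocator rather than through `del` alone.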

iecba09b


It turns out the code is slightly flawed in that it never clears the cache on the GPU. A simple fix is to call PyTorch's torch.cuda.empty_cache() to clear your VRAM before running a new image. I found that it actually stacks the generated embeddings up in memory; I even ran out of memory on my AWS DL machine with 16 GB of VRAM. Hope this helps!
Here is a snippet showing how I use it in the automatic_mask_generator_example notebook:

import sys
sys.path.append("..")
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator, SamPredictor
# Free up GPU memory before loading the model
import gc
import torch  # needed for torch.cuda.empty_cache()
gc.collect()
torch.cuda.empty_cache()
# -------------------------------------------
sam_checkpoint = "../models/sam_vit_l_0b3195.pth"
model_type = "vit_l"

device = "cuda"

sam = sam_model_registry[model_type](checkpoint=sam_checkpoint)
sam.to(device=device)

mask_generator = SamAutomaticMaskGenerator(sam)
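To round off the snippet above, here is a hedged cleanup sketch for the end of a run. `free_sam_memory` is a hypothetical helper (not part of segment-anything or PyTorch); the `torch` import is guarded so the sketch also runs on a machine without PyTorch or a GPU:

```python
import gc

def free_sam_memory(namespace, names=("masks", "mask_generator", "sam")):
    """Hypothetical helper: drop the given names from a namespace dict
    (e.g. globals()), run the garbage collector, and return cached CUDA
    blocks to the driver if a GPU is present."""
    for name in names:
        namespace.pop(name, None)     # remove the reference if it exists
    gc.collect()                      # reclaim now-unreachable objects
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # hand cached VRAM back to the driver
    except ImportError:
        pass  # torch not installed: nothing GPU-side to release

# usage after processing an image, e.g.:
# free_sam_memory(globals())
```

Note that `empty_cache()` only releases blocks the caching allocator has already marked free, so the `del`/pop of the model and masks must happen first.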
