PyTorch Grad-CAM always places the heatmap in the same area

Asked by bjp0bcyl on 2023-06-23

Here is the part of my code relevant to the problem:

# module-level buffers that the hooks append to
grad = []
activation = []

def forward_hook(module, input, output):
    activation.append(output)

def backward_hook(module, grad_in, grad_out):
    grad.append(grad_out[0])

model.layer4[-1].register_forward_hook(forward_hook)
model.layer4[-1].register_backward_hook(backward_hook)

loader_iter = iter(dataloader_test)
for _ in range(50):
    data, target, meta = next(loader_iter)       
    count1 = 0
    for d, t, m in zip(data, target, meta):

        hm_dogs = []
        heatmap = []
        d, t = map(lambda x: x.to(device), (d, t))
        
        # add a batch dimension of size 1
        d = d.unsqueeze(0)
        output = model(d)

        # backpropagate the score for class index 4
        output[:, 4].backward()
        # get the gradients and activations collected by the hooks
        grads = grad[count1].cpu().data.numpy().squeeze()
        fmap = activation[count1].cpu().data.numpy().squeeze()

I printed the grads and they all look identical across iterations. Can anyone give me some advice?

tag5nh1u's answer:

It looks like you are accumulating gradients and activations across every iteration of the loop. Clear the grad and activation lists at the start of each outer iteration, just before the inner loop:

loader_iter = iter(dataloader_test)
for _ in range(50):
    # reset the hook buffers so they only hold entries from the current batch
    grad.clear()
    activation.clear()

    data, target, meta = next(loader_iter)
    count1 = 0
    for d, t, m in zip(data, target, meta):
        hm_dogs = []
        heatmap = []
        d, t = map(lambda x: x.to(device), (d, t))

        # add a batch dimension of size 1
        d = d.unsqueeze(0)
        output = model(d)

        output[:, 4].backward()
        # get the gradients and activations collected in the hook
        grads = grad[count1].cpu().data.numpy().squeeze()
        fmap = activation[count1].cpu().data.numpy().squeeze()
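
The snippet stops at extracting grads and fmap. For completeness, here is a minimal sketch of how a Grad-CAM heatmap is typically computed from those two arrays, assuming both have shape (C, H, W) after the squeeze:

import numpy as np

# channel weights: global-average-pool the gradients over the spatial dims
weights = grads.mean(axis=(1, 2))                # shape (C,)
# weighted sum of the feature maps, then ReLU to keep positive influence only
cam = np.maximum((weights[:, None, None] * fmap).sum(axis=0), 0.0)
# normalize to [0, 1] before resizing/overlaying on the input image
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

Also note that count1 is initialized but never incremented in the snippet shown; if the increment is missing in your full code, grad[count1] will always return the first captured gradient, which would likewise produce identical heatmaps for every sample.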
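One more hedged side note: in recent PyTorch releases, register_backward_hook is deprecated because it can report incorrect gradients for modules with multiple inputs or outputs. register_full_backward_hook takes the same (module, grad_input, grad_output) callback and is the recommended replacement:

model.layer4[-1].register_full_backward_hook(backward_hook)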
