PyTorch RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

j0pj023g · asked on 2022-09-18

The setup is as follows: my robot takes a picture, a TensorFlow computer-vision model computes where the target object sits in the picture, and that information (the x1 and x2 coordinates) is passed to a PyTorch model. The PyTorch model is supposed to learn to predict the motor activations that bring the robot closer to the target. After executing the movement, the robot takes another picture, and the TF CV model is used to check whether the motor activations brought the robot closer to the desired state (x1 at 10, x2 at 31).

However, every time I run the code, PyTorch fails to compute the gradients.

I wonder whether this is a datatype problem, or something more fundamental: is it impossible to compute gradients when the loss is not calculated directly from the output of the PyTorch network?

Any help or suggestions would be greatly appreciated.


# define policy model (model to learn a policy for my robot)

import torch
import torch.nn as nn
import torch.nn.functional as F 
class policy_gradient_model(nn.Module):
    def __init__(self):
        super(policy_gradient_model, self).__init__()
        self.fc0 = nn.Linear(2, 2)
        self.fc1 = nn.Linear(2, 32)
        self.fc2 = nn.Linear(32, 64)
        self.fc3 = nn.Linear(64,32)
        self.fc4 = nn.Linear(32,32)
        self.fc5 = nn.Linear(32, 2)
    def forward(self,x):
        x = self.fc0(x)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = F.relu(self.fc4(x))
        x = F.relu(self.fc5(x))
        return x

policy_model = policy_gradient_model().double()
print(policy_model)
optimizer = torch.optim.AdamW(policy_model.parameters(), lr=0.005, betas=(0.9,0.999), eps=1e-08, weight_decay=0.01, amsgrad=False)

# make robot move as predicted by pytorch network (not all code included)

def move(motor_controls):
    # define curvature
    # motor_controls[0] = sigmoid(motor_controls[0])
    activation_left = 1 + motor_controls[0] * 99
    activation_right = 1 + (1 - motor_controls[0]) * 99

    print("activation left:", activation_left, ". activation right:", activation_right, ". time:", motor_controls[1]*100)

    # start movement

# main

import cv2
import numpy as np
import time
from torch.autograd import Variable
print("start training")
losses=[]
losses_end_of_epoch=[]
number_of_steps_each_epoch=[]
loss_function = nn.MSELoss(reduction='mean')

# each epoch

for epoch in range(2):
    count=0
    target_reached=False
    while target_reached==False:
        print("epoch: ", epoch, ". step:", count)

### process and take picture

        indices = process_picture()

### binary_network(sliced)=indices as input for policy model

        optimizer.zero_grad()

### output: 1 for curvature, 1 for duration of movement

        motor_controls = policy_model(Variable(torch.from_numpy(indices))).detach().numpy()
        print("NO TANH output for motor: 1)activation left, 2)time ", motor_controls)
        motor_controls[0] = np.tanh(motor_controls[0])
        motor_controls[1] = np.tanh(motor_controls[1])
        print("TANH output for motor: 1)activation left, 2)time ", motor_controls)

### execute suggested action

        move(motor_controls)

### take and process picture2 (after movement)

        indices = process_picture()

### loss=(binary_network(picture2) - desired

        print("calculate loss")
        print("idx", indices, type(torch.tensor(indices)))
        # loss = 0
        # loss = (indices[0]-10)**2 + (indices[1]-31)**2
        # loss = loss/2
        print("shape of indices", indices.shape)
        array=np.zeros((1,2))
        array[0]=indices
        print(array.shape, type(array))
        array2 = torch.ones([1,2])
        loss = loss_function(torch.tensor(array).double(), torch.tensor([[10.0,31.0]]).double()).float()
        print("loss: ", loss, type(loss), loss.shape)
        # array2[0] = loss_function(torch.tensor(array).double(), torch.tensor([[10.0,31.0]]).double()).float()
        losses.append(loss)

# start line causing the error-message (still part of main)

### calculate gradients

        loss.backward()

# end line causing the error-message (still part of main)

### apply gradients

        optimizer.step()

# Output (so far, as intended) (not all included)

# calculate loss

idx [14. 15.] <class 'torch.Tensor'>
shape of indices (2,)
(1, 2) <class 'numpy.ndarray'>
loss:  tensor(136.) <class 'torch.Tensor'> torch.Size([])

# Error Message:

Traceback (most recent call last):
  File "/home/pi/Desktop/GradientPolicyLearning/PolicyModel.py", line 259, in <module>
    array2.backward()
  File "/home/pi/.local/lib/python3.7/site-packages/torch/tensor.py", line 134, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/pi/.local/lib/python3.7/site-packages/torch/autograd/__init__.py", line 99, in 
 backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

p4rjhz4m1#

Calling .detach() on the prediction deletes its gradients. Since you first get the indices from the model and then try to backpropagate the error, I would suggest:

prediction = policy_model(torch.from_numpy(indices))
motor_controls = prediction.clone().detach().numpy()

This keeps the prediction as it is, together with the computed gradients, which can then be backpropagated.
Now you can do

loss = loss_function(prediction, torch.tensor([[10.0,31.0]]).double()).float()

Note that if this throws an error, you may want to call .double() on the prediction.
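Putting this together with the loop from the question, one training step might look like the sketch below. process_picture, move, policy_model, loss_function and optimizer are the names from the question; the (2,)-shaped target is an assumption made here so that it matches the model's output shape:

optimizer.zero_grad()

indices = process_picture()
prediction = policy_model(torch.from_numpy(indices))   # stays in the graph
motor_controls = prediction.clone().detach().numpy()   # detached copy for the robot only
move(motor_controls)

# the loss is computed on the prediction itself, so it has a grad_fn
target = torch.tensor([10.0, 31.0]).double()           # assumed shape matching the output
loss = loss_function(prediction, target)
loss.backward()
optimizer.step()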


x8diyxa72#

If the loss is not computed directly from the output of the PyTorch network, then computing the gradients is indeed not possible, because you can no longer apply the chain rule that the optimisation relies on.
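A minimal sketch of why the chain breaks, assuming nothing beyond plain PyTorch: a round trip through NumPy creates a fresh tensor with no grad_fn, which reproduces the exact error from the question:

import torch

w = torch.randn(2, requires_grad=True)

y = (w * 3).sum()   # connected to w, has a grad_fn
y.backward()        # works

z = torch.tensor((w * 3).detach().numpy()).sum()   # round trip through NumPy
try:
    z.backward()
except RuntimeError as e:
    print(e)  # element 0 of tensors does not require grad and does not have a grad_fn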


lo8azlld3#

The simple solution: turn gradient computation on with the context manager, in case it has been switched off somewhere:

torch.set_grad_enabled(True)  # Context-manager
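torch.set_grad_enabled can also be used as an actual context manager around the forward and backward pass; a minimal sketch, reusing the names from the question:

with torch.set_grad_enabled(True):
    prediction = policy_model(torch.from_numpy(indices))
    loss = loss_function(prediction, torch.tensor([10.0, 31.0]).double())
    loss.backward()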

wqlqzqxt4#

Make sure that all of your inputs into the NN, the NN's outputs, and your ground truth/target values are of type torch.Tensor, not list, numpy.array, or any other iterable.

Also make sure that they are not converted back to list or numpy.array at any point.

In my case I got this error because I ran a list comprehension over the tensor containing the predicted values from the NN, in order to get the maximum value in each row, and then converted the list back to a torch.tensor before computing the loss.

This back-and-forth conversion disables gradient computation.
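A small illustration of that pitfall and the tensor-native fix (the names here are made up for the example, not the answerer's actual code):

import torch

preds = torch.randn(4, 3, requires_grad=True)

# breaks the graph: .item() yields plain Python floats, and torch.tensor()
# builds a fresh tensor with no grad_fn
row_max_bad = torch.tensor([row.max().item() for row in preds])

# stays in the graph: use the equivalent tensor operation instead
row_max_good = preds.max(dim=1).values
row_max_good.sum().backward()   # works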


dfty9e195#

In my case, I got past this error by specifying requires_grad=True when defining my input tensors:

import numpy as np
import matplotlib.pyplot as plt
plt.style.use('dark_background')

# define rosenbrock function and gradient

a = 1
b = 5
def f(x):
   return (a - x[0])**2 + b * (x[1] - x[0]**2)**2

def jac(x):
   dx1 = -2 * a + 4 * b * x[0]**3 - 4 * b * x[0] * x[1] + 2 * x[0]
   dx2 = 2 * b * (x[1] - x[0]**2)
   return np.array([dx1, dx2])

# create stochastic rosenbrock function and gradient

def f_rand(x):
   return f(x) * np.random.uniform(0.5, 1.5)

def jac_rand(x): return jac(x) * np.random.uniform(0.5, 1.5)

# use hand coded adam

x = np.array([0.1, 0.1])
x0 = x.copy()
j = jac_rand(x)
beta1=0.9
beta2=0.999
eps=1e-8
m = x * 0
v = x * 0
learning_rate = .1
for ii in range(200):
   m = (1 - beta1) * j + beta1 * m  # first  moment estimate.
   v = (1 - beta2) * (j**2) + beta2 * v  # second moment estimate.
   mhat = m / (1 - beta1**(ii + 1))  # bias correction.
   vhat = v / (1 - beta2**(ii + 1))
   x = x - learning_rate * mhat / (np.sqrt(vhat) + eps)
   x -= learning_rate * v  # note: this extra raw second-moment update is not part of standard Adam
   j = jac_rand(x)

print('hand code finds optimal to be ', x, f(x))

# attempt to use pytorch

import torch
x_tensor = torch.tensor(x0, requires_grad=True)
optimizer = torch.optim.Adam([x_tensor], lr=learning_rate)

def closure():
   optimizer.zero_grad()
   loss = f_rand(x_tensor)
   loss.backward()
   return loss

for ii in range(200):
   optimizer.step(closure)

print('My PyTorch attempt found ', x_tensor, f(x_tensor))

bfhwhh0e6#

The following worked for me:

loss.requires_grad = True
loss.backward()
