Derivative of an NN with respect to its input inside a PyTorch loss function

Asked by kq4fsx7k on 2023-11-19

I am trying to approximate a nonlinear function $V(x):\mathbb{R}^n\to \mathbb{R}_+$ with an MLP in PyTorch, i.e. V_x = model(x).
Only $N$ samples of the gradient $\nabla V^T(x) = \frac{\partial V(x)}{\partial x}$ are available. I therefore have a matrix S of dimension $N\times n$ that contains all the samples.
The loss should be the mean squared error between S and $\frac{\partial}{\partial x}$ V_x.
My problem is that I do not know how to compute $\frac{\partial}{\partial x}$ V_x in PyTorch without losing its dependence on the network weights.
I have added a minimal example below; it shows that the loss does not decrease because this dependency is missing.
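Written out (with the same element-wise averaging that nn.MSELoss applies), the loss I want to minimize is

$$\mathcal{L}(\theta) = \frac{1}{Nn}\sum_{i=1}^{N}\left\lVert \frac{\partial V_\theta}{\partial x}(x_i) - S_i\right\rVert_2^2,$$

where $V_\theta$ denotes the network, $x_i \in \mathbb{R}^n$ is the $i$-th sample point and $S_i$ is the corresponding row of $S$.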

import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(
    nn.Linear(2, 10),
    nn.ReLU(),
    nn.Linear(10, 10),
    nn.ReLU(),
    nn.Linear(10, 1)
)

model.float()
loss_fn = nn.MSELoss()  
optimizer = optim.Adam(model.parameters(), lr=0.1)

# Generate Samples 
# V(x) = x^T P x
# grad V(x) = 2Px
P = np.array([[20.1892, -26.6218], [-26.6218, 38.0375]])
N_S = 10
N = N_S**2 # amount of samples
x_1 = np.linspace(-3,3,N_S)
x_2 = np.linspace(-3,3,N_S)
x = np.array([(a,b) for a in x_1 for b in x_2])
S = np.zeros((N,2))
for i in range(N):
    S[i,:]=2*P@x[i,:]
        

# training
epoch = 1

while epoch<1000:
    S_tensor = torch.from_numpy(S).float()
    x_tensor = torch.from_numpy(x).float()

    grad_V_x = torch.autograd.functional.jacobian(model,x_tensor)
    grad_V_x.requires_grad_()

    loss = loss_fn(grad_V_x,S_tensor)

    optimizer.zero_grad() # reset gradients
    loss.backward() # calculate gradient 
    optimizer.step() # update weights

    print(f"epoch {epoch} loss {loss}")
    epoch = epoch+1

Thanks for your help!

zf9nrax1 · Answer #1

Replace

while epoch<1000:
    S_tensor = torch.from_numpy(S).float()
    x_tensor = torch.from_numpy(x).float()

    grad_V_x = torch.autograd.functional.jacobian(model,x_tensor)
    grad_V_x.requires_grad_()

    loss = loss_fn(grad_V_x,S_tensor)

    optimizer.zero_grad() # reset gradients
    loss.backward() # calculate gradient 
    optimizer.step() # update weights

    print(f"epoch {epoch} loss {loss}")
    epoch = epoch+1

with

while epoch<1000:
    S_tensor = torch.from_numpy(S).float()
    x_tensor = torch.from_numpy(x).float()
    
    x_tensor.requires_grad = True
    # Calculate the gradient
    V_x = model(x_tensor)
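    # create_graph=True records the gradient computation itself in the autograd graph,
    # so the loss built from grad_V_x can still be backpropagated into the model weights.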
    grad_V_x = torch.autograd.grad(outputs=V_x, inputs=x_tensor,
                                   grad_outputs=torch.ones_like(V_x),
                                   create_graph=True)
    loss = loss_fn(grad_V_x[0], S_tensor)

    optimizer.zero_grad()  # reset gradients
    loss.backward()  # calculate gradient
    optimizer.step()  # update weights

    print(f"epoch {epoch} loss {loss}")
    epoch = epoch+1


The loss is now decreasing.
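A short note on why the original version did not train (a minimal sketch, assuming a model and input shapes like the ones in the question): torch.autograd.functional.jacobian uses create_graph=False by default, so the returned tensor carries no history back to the model parameters, and calling requires_grad_() on it only turns it into a new leaf tensor. You can check whether a gradient tensor is still connected to the weights by looking at its grad_fn:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(2, 10), nn.ReLU(), nn.Linear(10, 1))
x = torch.randn(5, 2, requires_grad=True)

# Detached: jacobian() defaults to create_graph=False, so the result has no
# grad_fn and a backward() through it cannot reach model.parameters().
jac = torch.autograd.functional.jacobian(model, x)
print(jac.grad_fn)        # None

# Connected: create_graph=True records the gradient computation itself,
# so a loss built from grad_V_x can be backpropagated into the weights.
V = model(x)
grad_V_x, = torch.autograd.grad(V, x, grad_outputs=torch.ones_like(V),
                                create_graph=True)
print(grad_V_x.grad_fn)   # not None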


For more information on torch.autograd.grad, see: https://pytorch.org/docs/stable/generated/torch.autograd.grad.html
