Estimating a Gaussian mixture model in PyTorch

ggazkfy8 · asked 12 months ago

Ultimately I want to estimate a normalizing flow whose base distribution is a Gaussian mixture model, so I am more or less committed to torch. However, you can reproduce my error simply by estimating a Gaussian mixture model in torch. My code is as follows:

import numpy as np
import matplotlib.pyplot as plt
import sklearn.datasets as datasets

import torch
from torch import nn
from torch import optim
import torch.distributions as D

device = torch.device("cpu")  # not shown in the original post; assumed here so the snippet is runnable

num_layers = 8
weights = torch.ones(8,requires_grad=True).to(device)
means = torch.tensor(np.random.randn(8,2),requires_grad=True).to(device)#torch.randn(8,2,requires_grad=True).to(device)
stdevs = torch.tensor(np.abs(np.random.randn(8,2)),requires_grad=True).to(device)
# the mixture is built once, outside the training loop
mix = D.Categorical(weights)
comp = D.Independent(D.Normal(means,stdevs), 1)
gmm = D.MixtureSameFamily(mix, comp)

# optimizer1 is also not shown in the original post; assumed to match the answer below
optimizer1 = optim.SGD([weights, means, stdevs], lr=0.001, momentum=0.9)

num_iter = 10001#30001
num_iter2 = 200001
loss_max1 = 100
for i in range(num_iter):
    x = torch.randn(5000,2)#this can be an arbitrary x samples
    loss2 = -gmm.log_prob(x).mean()#-densityflow.log_prob(inputs=x).mean()
    optimizer1.zero_grad()
    loss2.backward()
    optimizer1.step()
    print(i, loss2.item())  # assumed progress print, matching the output below

The error I get is:

0
8.089411823514835
Traceback (most recent call last):

  File "/home/cameron/AnacondaProjects/gmm.py", line 183, in <module>
    loss2.backward()

  File "/home/cameron/anaconda3/envs/torch/lib/python3.7/site-packages/torch/tensor.py", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)

  File "/home/cameron/anaconda3/envs/torch/lib/python3.7/site-packages/torch/autograd/__init__.py", line 132, in backward
    allow_unreachable=True)  # allow_unreachable flag

RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling backward the first time.


As you can see, the model gets through only 1 iteration before failing.

Answer from ar7v8xwq:

There is an ordering problem in your code: you create the Gaussian mixture model outside the training loop. When the loss is computed, the mixture therefore tries to reuse the graph built from the initial parameter values it was given at definition time, but optimizer1.step() has already modified those values in place. So even if you call loss2.backward(retain_graph=True), you still get an error: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation.
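To see the failure mode in isolation, here is a minimal sketch that is not part of the original post: any intermediate tensor computed from the parameters outside the loop (such as the normalized probabilities stored inside D.Categorical) belongs to a computation graph that the first backward() call frees.

import torch

w = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
# intermediate graph node created once, outside the loop -- analogous to
# building D.Categorical(weights) before the training loop
probs = torch.softmax(w, dim=0)

opt = torch.optim.SGD([w], lr=0.1)
for i in range(2):
    loss = -probs.log().mean()  # reuses the graph node built above
    opt.zero_grad()
    loss.backward()             # iteration 0 works; iteration 1 raises
                                # "Trying to backward through the graph a second time"
    opt.step()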
The solution to the original problem is simply to create a new Gaussian mixture model each time the parameters are updated. The following sample code runs as expected:

import numpy as np
import matplotlib.pyplot as plt
import sklearn.datasets as datasets

import torch
from torch import nn
from torch import optim
import torch.distributions as D

num_layers = 8
weights = torch.ones(8,requires_grad=True)
means = torch.tensor(np.random.randn(8,2),requires_grad=True)
stdevs = torch.tensor(np.abs(np.random.randn(8,2)),requires_grad=True)

parameters = [weights, means, stdevs]
optimizer1 = optim.SGD(parameters, lr=0.001, momentum=0.9)

num_iter = 10001
for i in range(num_iter):
    # rebuild the mixture every iteration so log_prob uses the current
    # parameter values and a fresh computation graph
    mix = D.Categorical(weights)
    comp = D.Independent(D.Normal(means,stdevs), 1)
    gmm = D.MixtureSameFamily(mix, comp)

    optimizer1.zero_grad()
    x = torch.randn(5000,2)#this can be an arbitrary x samples
    loss2 = -gmm.log_prob(x).mean()#-densityflow.log_prob(inputs=x).mean()
    loss2.backward()
    optimizer1.step()

    print(i, loss2)

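A side note that goes beyond the accepted answer: nothing in the loop above keeps weights non-negative or stdevs positive, so plain SGD can in principle push them into ranges that are invalid for Categorical and Normal. One common workaround, sketched here under the same setup (not taken from the original post), is to optimize unconstrained tensors and map them to valid mixture parameters inside the loop:

import numpy as np
import torch
from torch import optim
import torch.distributions as D

# unconstrained parameters, mapped to valid mixture parameters inside the loop
mix_logits = torch.zeros(8, requires_grad=True)
means = torch.tensor(np.random.randn(8, 2), dtype=torch.float32, requires_grad=True)
log_stdevs = torch.zeros(8, 2, requires_grad=True)

optimizer = optim.SGD([mix_logits, means, log_stdevs], lr=0.001, momentum=0.9)

for i in range(10001):
    # rebuild the distribution every step, exactly as in the answer above
    mix = D.Categorical(logits=mix_logits)                      # softmax keeps the weights positive
    comp = D.Independent(D.Normal(means, log_stdevs.exp()), 1)  # exp keeps the stdevs positive
    gmm = D.MixtureSameFamily(mix, comp)

    x = torch.randn(5000, 2)  # arbitrary training samples, as in the question
    loss = -gmm.log_prob(x).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Passing logits to Categorical lets it apply the softmax internally, and exponentiating log_stdevs guarantees strictly positive standard deviations, so the mixture stays well-defined throughout training.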
