PyTorch TypeError: cannot assign 'torch.cuda.FloatTensor' as parameter 'weight_hh_l0' (torch.nn.Parameter or None expected)

Asked by u0njafvf on 2023-01-09

I'm trying to train the model implemented in this repo https://bitbucket.org/VioletPeng/language-model/src/master/ (the second model: title-to-title / storyline-to-story).
Training runs fine for the first epoch, but as soon as the train function is called for the second epoch everything breaks and I get the following error:

TypeError: cannot assign 'torch.cuda.FloatTensor' as parameter 'weight_hh_l0' (torch.nn.Parameter or None expected)

I don't know what the problem is. I looked this error up and tried changing .cuda() calls to .to(device), and passing device= in tensor initialization where possible.
None of that seemed to help.
Here is the full exception stack trace:

File "pytorch_src/main.py", line 253, in <module>
    train()
  File "pytorch_src/main.py", line 209, in train
    output, hidden, rnn_hs, dropped_rnn_hs = model(data, hidden, return_h=True)
  File "/home/e/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/e/Documents/Amal/language-model/pytorch_src/model.py", line 81, in forward
    raw_output, new_h = rnn(raw_output, hidden[l])
  File "/home/e/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/e/Documents/Amal/language-model/pytorch_src/weight_drop.py", line 47, in forward
    self._setweights()
  File "/home/e/Documents/Amal/language-model/pytorch_src/weight_drop.py", line 44, in _setweights
    setattr(self.module, name_w, w)
  File "/home/e/anaconda3/lib/python3.7/site-packages/torch/nn/modules/rnn.py", line 108, in __setattr__
    super(RNNBase, self).__setattr__(attr, value)
  File "/home/e/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 801, in __setattr__
    .format(torch.typename(value), name))
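The last frames of the trace show WeightDrop's `_setweights` calling `setattr(self.module, name_w, w)` with a plain tensor. Newer PyTorch versions reject assigning a plain tensor to an attribute name that is registered as a parameter; only `nn.Parameter` or `None` are accepted. A minimal standalone sketch of the failure (the layer sizes here are made up, not from the repo):

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(3, 3)  # weight_hh_l0 has shape (4 * hidden_size, hidden_size) = (12, 3)

# Assigning a plain tensor to a registered parameter name raises the TypeError
try:
    setattr(lstm, "weight_hh_l0", torch.randn(12, 3))
except TypeError as e:
    print(e)  # cannot assign 'torch.FloatTensor' as parameter 'weight_hh_l0' ...

# Wrapping the tensor in nn.Parameter makes the same assignment legal
setattr(lstm, "weight_hh_l0", nn.Parameter(torch.randn(12, 3)))
```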

Answer #1, by gzszwxb4

I downgraded my Python to 3.6, reinstalled all the requirements, and it worked.
So the problem was probably an incompatible torch version.


Answer #2, by elcex8rz

Newer versions of PyTorch require module parameters to be of type torch.nn.Parameter. I think you need to change the code at least as follows; it fixed the same error for me in code based on the same codebase:

def _setweights(self):
    for name_w in self.weights:
        raw_w = getattr(self.module, name_w + '_raw')
        w = torch.nn.functional.dropout(raw_w, p=self.dropout, training=self.training)
        # Wrap in nn.Parameter so Module.__setattr__ accepts the assignment
        setattr(self.module, name_w, torch.nn.Parameter(w))
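For context, here is a self-contained sketch of the whole wrapper with that fix applied. This is a simplified reconstruction of the weight_drop.py pattern suggested by the traceback, not the repo's exact code; the class name, constructor arguments, and sizes are assumptions:

```python
import torch
import torch.nn as nn

class WeightDrop(nn.Module):
    """Simplified DropConnect wrapper: re-applies dropout to the raw
    hidden-to-hidden weight before every forward pass."""
    def __init__(self, module, weights, dropout=0.5):
        super().__init__()
        self.module = module
        self.weights = weights
        self.dropout = dropout
        for name_w in weights:
            raw_w = getattr(module, name_w)
            # Keep the trainable copy under `<name>_raw` and drop the
            # original entry so it can be overwritten at forward time.
            del module._parameters[name_w]
            module.register_parameter(name_w + "_raw", nn.Parameter(raw_w.data))

    def _setweights(self):
        for name_w in self.weights:
            raw_w = getattr(self.module, name_w + "_raw")
            w = nn.functional.dropout(raw_w, p=self.dropout, training=self.training)
            # Wrapping in nn.Parameter satisfies newer PyTorch's type check
            setattr(self.module, name_w, nn.Parameter(w))

    def forward(self, *args):
        self._setweights()
        return self.module(*args)

wrapped = WeightDrop(nn.LSTM(10, 10), ["weight_hh_l0"], dropout=0.2)
x = torch.randn(5, 1, 10)  # (seq_len, batch, input_size)
out, (h, c) = wrapped(x)
print(out.shape)  # torch.Size([5, 1, 10])
```

One caveat worth knowing: `nn.Parameter(w)` creates a new leaf tensor, so gradients from the forward pass land on that wrapped copy rather than flowing back into `weight_hh_l0_raw`. If training quality matters, pinning an older PyTorch version (as in the first answer) may be the safer route.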
