When I try to train this on this dataset, I get the error below.
Since this is the configuration published in the paper, I assume I'm doing something incredibly wrong.
The error appears on a different image every time I try to run training.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.1.1\helpers\pydev\pydevd.py", line 1741, in <module>
main()
File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.1.1\helpers\pydev\pydevd.py", line 1735, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.1.1\helpers\pydev\pydevd.py", line 1135, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.1.1\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Noam/Code/vision_course/hopenet/deep-head-pose/code/original_code_augmented/train_hopenet_with_validation_holdout.py", line 187, in <module>
loss_reg_yaw = reg_criterion(yaw_predicted, label_yaw_cont)
File "C:\Noam\Code\vision_course\hopenet\venv\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "C:\Noam\Code\vision_course\hopenet\venv\lib\site-packages\torch\nn\modules\loss.py", line 431, in forward
return F.mse_loss(input, target, reduction=self.reduction)
File "C:\Noam\Code\vision_course\hopenet\venv\lib\site-packages\torch\nn\functional.py", line 2204, in mse_loss
ret = torch._C._nn.mse_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction))
RuntimeError: reduce failed to synchronize: cudaErrorAssert: device-side assert triggered
Any ideas?
1 Answer
This kind of error usually occurs when using NLLLoss or CrossEntropyLoss on a dataset that contains negative labels (or labels greater than or equal to the number of classes). That is also exactly the assertion that fails: Assert t >= 0 && t < n_classes. It does not happen with MSELoss, but the OP mentioned that a CrossEntropyLoss is used somewhere, which is why the error occurred (the program crashes asynchronously at another line). The solution is to clean the dataset and make sure t >= 0 && t < n_classes holds (where t denotes the label). Also, if you use NLLLoss, make sure the network outputs log-probabilities (apply log_softmax first); if you use BCELoss, make sure the outputs are in the range 0 to 1 (apply sigmoid first). Note that this is not needed for CrossEntropyLoss or BCEWithLogitsLoss, because they implement the activation inside the loss function. (Thanks to @PouyaB for pointing this out.)
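As a minimal sketch of the suggested dataset cleanup (the function name and label layout here are hypothetical, not from the OP's code), you can scan all labels once before training and report any index that would trip the t >= 0 && t < n_classes assertion:

```python
def find_invalid_labels(labels, n_classes):
    """Return the indices of labels violating 0 <= t < n_classes.

    `labels` is any iterable of integer class indices. In a PyTorch
    pipeline you would run this over the targets your Dataset yields,
    before training starts, so the failure is reported synchronously
    with a clear index instead of as an asynchronous device-side assert.
    """
    return [i for i, t in enumerate(labels) if not (0 <= t < n_classes)]

# With n_classes == 66, a label of 66 or -1 is out of range:
bad = find_invalid_labels([0, 12, 66, -1, 65], n_classes=66)
print(bad)  # indices of the offending labels
```

Running a check like this on the training set should pinpoint which samples carry out-of-range labels, which is far easier to debug than the CUDA assert above.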