PyTorch RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same

vybvopom · asked on 2022-11-09

This code:

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

for data in dataloader:
    inputs, labels = data
    outputs = model(inputs)

raises the error:
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same

kx7yvsdv #1

You get this error because your model is on the GPU while your data is on the CPU, so you need to send your input tensors to the GPU as well.

inputs, labels = data                         # this is what you had
inputs, labels = inputs.cuda(), labels.cuda() # add this line

Or, to stay consistent with the rest of your code:

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

inputs, labels = inputs.to(device), labels.to(device)

The same error occurs if the input tensors are on the GPU but the model weights are not. In that case, you need to send the model weights to the GPU:

model = MyModel()

if torch.cuda.is_available():
    model.cuda()

See the documentation for cuda() and its counterpart cpu().
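
Putting it together, here is a minimal sketch of the corrected loop from the question (assuming model and dataloader are defined as in the original code):

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)                              # move the weights once

for data in dataloader:
    inputs, labels = data
    inputs, labels = inputs.to(device), labels.to(device)  # move each batch
    outputs = model(inputs)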

9vw9lbht #2

The newer API is the .to() method.
The advantage is obvious and important: tomorrow your device may be something other than "cuda":

  • cpu
  • cuda
  • mkldnn
  • opengl
  • opencl
  • ideep
  • hip
  • msnpu
  • xla

So try to avoid model.cuda(). It is not wrong to check for the device:

dev = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

or to hardcode it:

dev=torch.device("cuda")

same as:

dev="cuda"

In general you can use this code:

model.to(dev)
data = data.to(dev)
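
One caveat worth adding here: nn.Module.to() moves the module's parameters in place, but Tensor.to() returns a new tensor, so the result must be reassigned. A minimal illustration:

model.to(dev)        # fine: module parameters are moved in place
data.to(dev)         # does nothing useful: the moved copy is discarded
data = data.to(dev)  # correct: keep the tensor returned by .to()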

yshpjwxd #3

As the previous answers mentioned, the problem may be that the model was trained on the GPU but is being tested on the CPU. If that is the case, you need to move both the model weights and the data from GPU to CPU, like this:

device = args.device # "cuda" / "cpu"
if "cuda" in device and not torch.cuda.is_available():
    device = "cpu"
data = data.to(device)
model.to(device)

Note: here we still check whether the configuration argument is set to GPU or CPU, so that the same code can be used for both training (on the GPU) and testing (on the CPU).
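
Related to this train-on-GPU / test-on-CPU scenario: when loading a checkpoint that was saved from a GPU run onto a CPU-only machine, torch.load accepts a map_location argument. A minimal sketch, where the file name and the MyModel class are placeholders:

device = "cuda" if torch.cuda.is_available() else "cpu"

model = MyModel()
state_dict = torch.load("checkpoint.pth", map_location=device)  # remap GPU tensors to the target device
model.load_state_dict(state_dict)
model.to(device)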

b4lqfgs4 #4

When loading a model, the weights and the inputs must be on the same device; as others have pointed out, we can achieve that with .to(device).
However, it can also happen that the saved weights and the input tensors have different data types. If that is the case, we have to change the data type of both the model weights and the inputs:

model = torch.load(PATH).type(torch.FloatTensor).to(device)
input = input.type(torch.FloatTensor).to(device)
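
A common way this dtype mismatch arises is feeding a NumPy float64 array to a float32 model. A minimal sketch of that case and the fix (the names are illustrative):

import numpy as np
import torch

x_np = np.random.rand(1, 3, 32, 32)  # NumPy defaults to float64
x = torch.from_numpy(x_np)           # becomes a torch.float64 tensor
x = x.float().to(device)             # cast to float32 and move to the model's device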

lyr7nygr #5


When you get the error RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same, check whether CUDA is available and move the model to the GPU if it is:

# Check if CUDA is available
train_on_gpu = torch.cuda.is_available()

if train_on_gpu:
    print("CUDA is available! Training on GPU...")
else:
    print("CUDA is not available. Training on CPU...")

# Move the model to the GPU if CUDA is available
if train_on_gpu:
    model.cuda()
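
Moving the model alone is not enough; the batch tensors also have to be moved inside the training loop, otherwise the same error reappears. A minimal sketch of that part, reusing the train_on_gpu flag above and assuming a dataloader as in the question:

for inputs, labels in dataloader:
    if train_on_gpu:
        inputs, labels = inputs.cuda(), labels.cuda()  # move each batch to the GPU
    outputs = model(inputs)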

2ic8powd #6

I had the same problem. My CNN model:

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.device = torch.device(device)
        self.dummy_param = nn.Parameter(torch.empty(0))
        l1 = nn.Conv2d(3, 64,    kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)).to(device)
        l2 = nn.Conv2d(64, 128,  kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)).to(device)
        l3 = nn.Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)).to(device)
        l4 = nn.Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)).to(device)
        l5 = nn.Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)).to(device)
        self.layers = [l1, l2, l3, l4, l5]

    def forward(self, x):
        features = []
        for l in self.layers:
            x = l(x)
            features.append(x)
        return features

Adding .to(device) to each Conv2d worked for me.
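
For context (not stated in the answer above): the underlying issue is that layers kept in a plain Python list are not registered as submodules, so CNN().to(device) never moves them. Wrapping them in nn.ModuleList is the more idiomatic fix and makes the per-layer .to(device) calls unnecessary. A minimal sketch:

import torch
import torch.nn as nn

class CNN(nn.Module):
    def __init__(self):
        super().__init__()
        # nn.ModuleList registers each layer, so model.to(device) moves them all
        self.layers = nn.ModuleList([
            nn.Conv2d(3, 64, kernel_size=3, padding=1),
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
        ])

    def forward(self, x):
        features = []
        for l in self.layers:
            x = l(x)
            features.append(x)
        return features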

1cosmwyk #7

x = x.to(device, dtype=torch.float32)

y = y.to(device, dtype=torch.float32)

works perfectly fine...

bt1cpqcv #8

First check whether CUDA is available:

if torch.cuda.is_available():
    device = 'cuda'
else:
    device = 'cpu'

If you are loading a model, do it like this:

checkpoint = torch.load('./generator_release.pth', map_location=device)
G = Generator().to(device)
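
Note that the snippet above loads the checkpoint but never applies it to G. Assuming the file stores a plain state_dict (which is not stated in the answer), the weights would be loaded like this:

G.load_state_dict(checkpoint)
G.eval()  # switch to inference mode if you only run generation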

Now you may see the following error:
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same
You need to convert the input from torch.tensor to torch.cuda.tensor, like this:

if torch.cuda.is_available():
  data = data.cuda()
result = G(data)

Then convert the result from torch.cuda.tensor back to torch.tensor:

if torch.cuda.is_available():
    result = result.cpu()
