Is there a problem between the nn.AdaptiveAvgPool2d and nn.Dropout layers in a PyTorch CNN model?

aamkag61 · asked on 2022-12-18

I am writing a model to perform image classification for a school project. I have 10 classes, and I load batches of images into the model:

import torch
import torch.nn as nn
import torch.nn.functional as F

# *****START CODE
class ConvNet(nn.Module):
    def __init__(self, in_ch, out_ch):
      super(ConvNet, self).__init__()
      """
      Number of layers should be exactly the same as in the provided JSON.
      Do not use any grouping function like Sequential 
      """
      self.Layer_001 = nn.Conv2d(in_channels=in_ch, out_channels=64, kernel_size=3, padding=1)
      self.Layer_002 = nn.ReLU()
      self.Layer_003 = nn.MaxPool2d(kernel_size=2,stride=2)
      self.Layer_004 = nn.Conv2d(in_channels=64, out_channels=113, kernel_size=3, padding=1)
      self.Layer_005 = nn.ReLU()
      self.Layer_006 = nn.MaxPool2d(kernel_size=2,stride=2)
      self.Layer_007 = nn.Conv2d(in_channels=113, out_channels=248, kernel_size=3, padding=1)
      self.Layer_008 = nn.ReLU()
      self.Layer_009 = nn.Conv2d(in_channels=248, out_channels=248, kernel_size=3, padding=1)
      self.Layer_010 = nn.ReLU()
      self.Layer_011 = nn.MaxPool2d(kernel_size=2,stride=2)
      self.Layer_012 = nn.Conv2d(in_channels=248, out_channels=519, kernel_size=3, padding=1)
      self.Layer_013 = nn.ReLU()
      self.Layer_014 = nn.Conv2d(in_channels=519, out_channels=519, kernel_size=3, padding=1)
      self.Layer_015 = nn.ReLU()
      self.Layer_016 = nn.MaxPool2d(kernel_size=2,stride=2)
      self.Layer_017 = nn.Conv2d(in_channels=519, out_channels=519, kernel_size=3, padding=1)
      self.Layer_018 = nn.ReLU()
      self.Layer_019 = nn.Conv2d(in_channels=519, out_channels=519, kernel_size=3, padding=1)
      self.Layer_020 = nn.ReLU()
      self.Layer_021 = nn.MaxPool2d(kernel_size=2,stride=2)
      self.Layer_022 = nn.AdaptiveAvgPool2d((1,1))
      self.Layer_023 = nn.Dropout(p=0.501816987002085)
      self.Layer_024 = nn.Linear(in_features=519,out_features=2317)
      self.Layer_025 = nn.ReLU()
      self.Layer_026 = nn.Linear(in_features=2317, out_features=3018)
      self.Layer_027 = nn.Linear(in_features=3018, out_features=3888)
      self.Layer_028 = nn.ReLU()
      self.Layer_029 = nn.Linear(in_features=3888, out_features=out_ch)
      
    def forward(self, x):
      x = self.Layer_001(x)
      #print(x.shape)
      x = self.Layer_002(x)
      #print(x.shape)
      x = self.Layer_003(x)
      #print(x.shape)
      x = self.Layer_004(x)
      #print(x.shape)
      x = self.Layer_005(x)
      #print(x.shape)
      x = self.Layer_006(x)
      #print(x.shape)
      x = self.Layer_007(x)
      #print(x.shape)
      x = self.Layer_008(x)
      #print(x.shape)
      x = self.Layer_009(x)
      #print(x.shape)
      x = self.Layer_010(x)
      #print(x.shape)
      x = self.Layer_011(x)
      #print(x.shape)
      x = self.Layer_012(x)
      #print(x.shape)
      x = self.Layer_013(x)
      #print(x.shape)
      x = self.Layer_014(x)
      #print(x.shape)
      x = self.Layer_015(x)
      #print(x.shape)
      x = self.Layer_016(x)
      #print(x.shape)
      x = self.Layer_017(x)
      #print(x.shape)
      x = self.Layer_018(x)
      #print(x.shape)
      x = self.Layer_019(x)
      #print(x.shape)
      x = self.Layer_020(x)
      #print(x.shape)
      x = self.Layer_021(x)
      #print(x.shape)
      x = self.Layer_022(x)
      #print(x.shape) 
      x = self.Layer_023(x)
      #print(x.shape)
      #x = nn.Flatten(x)
      ##print(x.shape)
      x = self.Layer_024(x)
      #print(x.shape)
      x = self.Layer_025(x)
      #print(x.shape)
      x = self.Layer_026(x)
      #print(x.shape)
      x = self.Layer_027(x)
      #print(x.shape)
      x = self.Layer_028(x)
      #print(x.shape)
      output = self.Layer_029(x)
      print(output.shape)
      return output

# *****END CODE

When I run it, I hit an error between two layers. It returns the following:

RuntimeError: mat1 and mat2 shapes cannot be multiplied (8304x1 and 519x2317)
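
For reference, the failing step reproduces in isolation (a minimal sketch; the batch size of 16 is an assumption, chosen because 16 * 519 = 8304, and the 7x7 feature map is hypothetical):

import torch
import torch.nn as nn

pool = nn.AdaptiveAvgPool2d((1, 1))
fc = nn.Linear(in_features=519, out_features=2317)

x = torch.randn(16, 519, 7, 7)  # hypothetical activations entering Layer_022
x = pool(x)                     # shape: [16, 519, 1, 1]
fc(x)                           # RuntimeError: mat1 and mat2 shapes cannot be
                                # multiplied (8304x1 and 519x2317)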

I know this is a shape problem, but I am still learning and do not understand where it happens... I am trying to rebuild this architecture:

{'Layer_001': {'input': 3,
               'kernel_size': 3,
               'output': 64,
               'padding': 1,
               'type': 'Conv2d'},
 'Layer_002': {'type': 'ReLU'},
 'Layer_003': {'kernel_size': 2, 'stride': 2, 'type': 'MaxPool2d'},
 'Layer_004': {'input': 64,
               'kernel_size': 3,
               'output': 113,
               'padding': 1,
               'type': 'Conv2d'},
 'Layer_005': {'type': 'ReLU'},
 'Layer_006': {'kernel_size': 2, 'stride': 2, 'type': 'MaxPool2d'},
 'Layer_007': {'input': 113,
               'kernel_size': 3,
               'output': 248,
               'padding': 1,
               'type': 'Conv2d'},
 'Layer_008': {'type': 'ReLU'},
 'Layer_009': {'input': 248,
               'kernel_size': 3,
               'output': 248,
               'padding': 1,
               'type': 'Conv2d'},
 'Layer_010': {'type': 'ReLU'},
 'Layer_011': {'kernel_size': 2, 'stride': 2, 'type': 'MaxPool2d'},
 'Layer_012': {'input': 248,
               'kernel_size': 3,
               'output': 519,
               'padding': 1,
               'type': 'Conv2d'},
 'Layer_013': {'type': 'ReLU'},
 'Layer_014': {'input': 519,
               'kernel_size': 3,
               'output': 519,
               'padding': 1,
               'type': 'Conv2d'},
 'Layer_015': {'type': 'ReLU'},
 'Layer_016': {'kernel_size': 2, 'stride': 2, 'type': 'MaxPool2d'},
 'Layer_017': {'input': 519,
               'kernel_size': 3,
               'output': 519,
               'padding': 1,
               'type': 'Conv2d'},
 'Layer_018': {'type': 'ReLU'},
 'Layer_019': {'input': 519,
               'kernel_size': 3,
               'output': 519,
               'padding': 1,
               'type': 'Conv2d'},
 'Layer_020': {'type': 'ReLU'},
 'Layer_021': {'kernel_size': 2, 'stride': 2, 'type': 'MaxPool2d'},
 'Layer_022': {'output': 'COMPUTE', 'type': 'AdaptiveAvgPool2d'},
 'Layer_023': {'p': 0.501816987002085, 'type': 'Dropout'},
 'Layer_024': {'input': 'COMPUTE', 'output': 2317, 'type': 'Linear'},
 'Layer_025': {'type': 'ReLU'},
 'Layer_026': {'input': 2317, 'output': 'COMPUTE', 'type': 'Linear'},
 'Layer_027': {'input': 3018, 'output': 3888, 'type': 'Linear'},
 'Layer_028': {'type': 'ReLU'},
 'Layer_029': {'input': 3888, 'output': 'COMPUTE', 'type': 'Linear'}}


I think my error comes from 'Layer_022': {'output': 'COMPUTE', 'type': 'AdaptiveAvgPool2d'} or from 'Layer_024': {'input': 'COMPUTE', 'output': 2317, 'type': 'Linear'}, but I am not sure... I mean, I really do not know how to compute these values, which is why I am asking for some help :)
I have already tried putting 519 as the output of 'Layer_022': {'output': 'COMPUTE', 'type': 'AdaptiveAvgPool2d'}, and I have also tried different values like (2), (2,2)...
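
For what it's worth, here is what those values do: the argument to nn.AdaptiveAvgPool2d fixes the spatial size of the output regardless of the input size (a quick check with a hypothetical 7x7 feature map):

import torch
import torch.nn as nn

x = torch.randn(16, 519, 7, 7)                # hypothetical feature map
print(nn.AdaptiveAvgPool2d((1, 1))(x).shape)  # torch.Size([16, 519, 1, 1])
print(nn.AdaptiveAvgPool2d((2, 2))(x).shape)  # torch.Size([16, 519, 2, 2])
print(nn.AdaptiveAvgPool2d(2)(x).shape)       # same: 2 is shorthand for (2, 2)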

aiazj4mn (answer 1):

You need to put an nn.Flatten() in there. You already have one sketched in your code (the commented-out nn.Flatten line), but it needs to be added just like the other layers. An equivalent approach is to call x = self.Layer_023(x).view([x.shape[0], -1]) in your forward, so that you get a [batch x feats] shape.
For example:

In [3]: a = torch.randn([16,200,1,1])

In [4]: b = torch.nn.Linear(200,100)

In [5]: b(a)
    RuntimeError: mat1 and mat2 shapes cannot be multiplied (3200x1 and 200x100)

In [6]: b(a.view([a.shape[0],-1]))
Out[6]: 
    tensor([[ 1.2927, -0.0799,  0.3909,  ...,  0.5051,  0.4727, -0.1759],
    [-0.2969,  0.2622,  0.6283,  ..., -0.8404, -0.7275, -0.2853],
    [ 0.3116,  0.2436, -1.0069,  ...,  1.9674, -0.3689, -0.1099],
    ...,
    [-0.6393,  0.3817,  0.0246,  ...,  0.1511, -0.9695,  0.6455],
    [ 0.0390, -0.7878,  0.3007,  ...,  0.8577, -0.2808, -0.2726],
    [ 0.1561,  0.0472, -0.0222,  ...,  0.9957, -0.4121, -0.1465]],
   grad_fn=<AddmmBackward0>)
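
Applied to the model in the question, one way to do it (a sketch; the attribute name Flatten is arbitrary, since the JSON has no entry for this layer) is to register an nn.Flatten() module and call it between Layer_023 and Layer_024. Because AdaptiveAvgPool2d((1, 1)) outputs [batch, 519, 1, 1], the flattened feature size is 519 * 1 * 1 = 519, which is what the 'COMPUTE' placeholder for Layer_024's input resolves to:

# in __init__, after Layer_023:
self.Flatten = nn.Flatten()   # flattens dims 1..-1 by default

# in forward, between Layer_023 and Layer_024:
x = self.Layer_023(x)         # dropout; shape [batch, 519, 1, 1]
x = self.Flatten(x)           # shape [batch, 519]
x = self.Layer_024(x)         # Linear(519, 2317) now matches

The remaining 'COMPUTE' values follow by chaining: Layer_026's output must equal Layer_027's input (3018), and Layer_029's output is the number of classes (10 in your case).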
