pytorch - Why am I getting the error ValueError: Expected input batch_size (4) to match target batch_size (64)?

elcex8rz · posted 2023-01-26 · in Other
Follow (0) | Answers (2) | Views (394)

Why do I get the error ValueError: Expected input batch_size (4) to match target batch_size (64)?
Is this related to the number of input features of the first linear layer being wrong? In this example mine is 128 * 4 * 4.
I have searched online and on this site but have not been able to find an answer, so I am asking here.
Here is the network:

import torch.nn as nn

class Net(nn.Module):
    """A representation of a convolutional neural network comprised of VGG blocks."""
    def __init__(self, n_channels):
        super(Net, self).__init__()
        # VGG block 1
        self.conv1 = nn.Conv2d(n_channels, 64, (3,3))
        self.act1 = nn.ReLU()
        self.pool1 = nn.MaxPool2d((2,2), stride=(2,2))
        # VGG block 2
        self.conv2 = nn.Conv2d(64, 64, (3,3))
        self.act2 = nn.ReLU()
        self.pool2 = nn.MaxPool2d((2,2), stride=(2,2))
        # VGG block 3
        self.conv3 = nn.Conv2d(64, 128, (3,3))
        self.act3 = nn.ReLU()
        self.pool3 = nn.MaxPool2d((2,2), stride=(2,2))
        # Fully connected layer
        self.f1 = nn.Linear(128 * 4 * 4, 1000)
        self.act4 = nn.ReLU()
        # Output layer
        self.f2 = nn.Linear(1000, 10)
        self.act5 = nn.Softmax(dim=1)

    def forward(self, X):
        """This function forward propagates the input."""
        # VGG block 1
        X = self.conv1(X)
        X = self.act1(X)
        X = self.pool1(X)
        # VGG block 2
        X = self.conv2(X)
        X = self.act2(X)
        X = self.pool2(X)
        # VGG block 3
        X = self.conv3(X)
        X = self.act3(X)
        X = self.pool3(X)
        # Flatten
        X = X.view(-1, 128 * 4 * 4)
        # Fully connected layer
        X = self.f1(X)
        X = self.act4(X)
        # Output layer
        X = self.f2(X)
        X = self.act5(X)

        return X

Here is the training loop:

import datetime

def training_loop(
        n_epochs,
        optimizer,
        model,
        loss_fn,
        train_loader):
    for epoch in range(1, n_epochs + 1):
        loss_train = 0.0
        for i, (imgs, labels) in enumerate(train_loader):

            outputs = model(imgs)

            loss = loss_fn(outputs, labels)

            optimizer.zero_grad()

            loss.backward()

            optimizer.step()

            loss_train += loss.item()

        if epoch == 1 or epoch % 10 == 0:
            print('{} Epoch {}, Training loss {}'.format(
                datetime.datetime.now(),
                epoch,
                loss_train / len(train_loader)))

lrl1mhuk1#

As the other answer said, your dimensions are off!
More generally, for anyone reviewing/learning neural networks, you can compute the output size of a single convolutional (or pooling) layer with the formula

[(W − K + 2P) / S] + 1

where

W is the input size - in your case you have not given us this (the other answer infers 28 from your comment)
K is the kernel size - 3 for your conv layers, 2 for your pooling layers
P is the padding - 0 in your case (the Conv2d default)
S is the stride - 1 for your conv layers, 2 for your pooling layers
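Plugging the question's layers into this formula (a sketch assuming a 28×28 input, as the other answer infers from the comments) reproduces the 1×1 feature map entering the flatten:

```python
def conv_out(w, k, p=0, s=1):
    """Output size of a conv/pool layer: floor((W - K + 2P) / S) + 1."""
    return (w - k + 2 * p) // s + 1

w = 28                       # assumed 28x28 input (e.g. MNIST-style images)
for _ in range(3):           # three VGG blocks: 3x3 conv (no pad), 2x2 max-pool
    w = conv_out(w, k=3)             # conv: K=3, P=0, S=1
    w = conv_out(w, k=2, s=2)        # pool: K=2, S=2
print(w)  # 1 -> the tensor entering the flatten is 128 x 1 x 1, not 128 x 4 x 4
```

So the 128 * 4 * 4 assumed by the flatten and by self.f1 does not match the actual 128 * 1 * 1.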



x6h2sr282#

That is because your dimensions are wrong. From the error and your comment, I take your input to be of shape (64, 1, 28, 28).
Now, the shape after X = self.pool3(X) is (64, 128, 1, 1), and the next line reshapes it to (4, 128 * 4 * 4).
In short, the model's output is (4, 10), i.e. batch_size 4, and on the line loss = loss_fn(outputs, labels) you compare it against a tensor with batch_size 64, exactly as the error says.
I don't know what you are trying to do, but I guess you want to change self.f1 = nn.Linear(128 * 4 * 4, 1000) to self.f1 = nn.Linear(128 * 1 * 1, 1000), and correspondingly X.view(-1, 128 * 4 * 4) to X.view(-1, 128 * 1 * 1) so the flatten matches.
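For concreteness, here is the arithmetic behind the mismatched batch size (plain Python, assuming the (64, 1, 28, 28) input above): view(-1, 128 * 4 * 4) fixes the second dimension at 2048 and lets PyTorch infer the first from the total element count.

```python
# Elements entering the flatten: (batch, channels, H, W) = (64, 128, 1, 1)
total = 64 * 128 * 1 * 1     # 8192 elements in the pooled feature map
features = 128 * 4 * 4       # per-sample size assumed by view(-1, 128 * 4 * 4)
batch = total // features    # the -1 dimension that PyTorch infers
print(batch)  # 4 -> hence "input batch_size (4)" vs. the 64 labels
```

With 128 * 1 * 1 features instead, total // features is 64 and the batch dimension survives the reshape.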
