numpy: "index 1 is out of bounds for axis 0 with size 1" in a softmax function

pjngdqdw · posted 2023-06-23 in Other

I'm trying to write code that computes the hinge loss and the softmax loss for every CIFAR-10 image, and I get this error on the log line inside the softmax_loss function: "index 1 is out of bounds for axis 0 with size 1". How can I fix it?

import numpy as np
from keras.datasets import cifar10

# define the weight matrix
W = np.random.rand(10, 3072)

# load CIFAR-10 dataset
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

# flatten the images
x_train_flat = x_train.reshape(x_train.shape[0], -1)

# compute the Hinge Loss for each image
def hinge_loss(y_true, y_pred):
    margin = 1.
    loss = np.maximum(0., margin - y_true * y_pred)
    return loss
# compute the Softmax loss for each image
def softmax_loss(y_true, y_pred):
    num_classes = y_pred.shape[1]
    softmax = np.exp(y_pred) / np.sum(np.exp(y_pred), axis=1, keepdims=True)
    loss = -np.log(softmax[range(num_classes), y_true])
    return loss

hinge_losses = []
softmax_losses = []

# iterate over each image in the training set
for i in range(x_train_flat.shape[0]):
    # calculate predictions
    x_i = x_train_flat[i, :]
    y_i = np.dot(W, x_i)
    
    # calculate the loss using the Hinge Loss function
    y_true_h = np.zeros_like(y_i)
    y_true_h[y_train[i]] = 1
    hinge_loss_value = np.sum(hinge_loss(y_true_h, y_i))
    hinge_losses.append(hinge_loss_value)
    
    # calculate the loss using the Softmax function
    y_true_s = y_train[i]
    softmax_loss_value = np.sum(softmax_loss(y_true_s, y_i.reshape(1, -1)))
    softmax_losses.append(softmax_loss_value)

# print the average loss values
print("Average Hinge Loss:", np.mean(hinge_losses))
print("Average Softmax Loss:", np.mean(softmax_losses))

As I said, I get the error "index 1 is out of bounds for axis 0 with size 1" on this line:

loss = -np.log(softmax[range(num_classes), y_true])

goucqfw6 · Answer 1

First of all, softmax is an activation function; the corresponding loss is called cross-entropy loss. It looks like you simply want to pick out the softmax value of the element corresponding to y_true in order to compute the loss. Since your y_pred has shape (1, 10), indexing axis 0 with range(num_classes) asks for rows 0..9 of an array that only has one row, which is exactly the IndexError you see. I believe you can simply update your softmax_loss (cross-entropy loss!) to:

def softmax_loss(y_true, y_pred):
    softmax = np.exp(y_pred) / np.sum(np.exp(y_pred))
    loss = -np.log(softmax[0, y_true])
    return loss
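To see why the original indexing fails and the fix works, here is a minimal sketch with synthetic logits shaped (1, 10) to match the question's reshaped y_pred (the class index 3 is just an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(0)
y_pred = rng.normal(size=(1, 10))  # one image, 10 class scores
y_true = 3                          # example true class index

softmax = np.exp(y_pred) / np.sum(np.exp(y_pred))

# original indexing: range(10) selects rows 0..9 along axis 0,
# but axis 0 has size 1 -> IndexError
try:
    _ = -np.log(softmax[range(10), y_true])
except IndexError as e:
    print(e)  # index 1 is out of bounds for axis 0 with size 1

# fixed indexing: there is only one row, so select it explicitly
loss = -np.log(softmax[0, y_true])
print(loss)  # a single positive cross-entropy value
```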

That said, your y_pred values will blow up, because you haven't normalized your dataset x_train_flat. Consider normalizing it first, i.e.:

x_train_flat = x_train.reshape(x_train.shape[0], -1) / 255

It also looks like you should scale down the weights, e.g.:

W = np.random.rand(10, 3072) / 3072
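Putting the three suggestions together (fixed indexing, normalized inputs, scaled weights), here is a minimal end-to-end sketch. It uses a few random arrays as a stand-in for cifar10.load_data() so it runs without downloading the dataset; the shapes match CIFAR-10:

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in for a few CIFAR-10 images (uint8 pixels 0..255) and labels of shape (n, 1)
x_train = rng.integers(0, 256, size=(5, 32, 32, 3))
y_train = rng.integers(0, 10, size=(5, 1))

x_train_flat = x_train.reshape(x_train.shape[0], -1) / 255  # normalize pixels to [0, 1]
W = rng.random((10, 3072)) / 3072                           # scale weights down

def softmax_loss(y_true, y_pred):
    softmax = np.exp(y_pred) / np.sum(np.exp(y_pred))
    return -np.log(softmax[0, y_true])

losses = []
for i in range(x_train_flat.shape[0]):
    y_i = W @ x_train_flat[i]  # class scores, shape (10,)
    losses.append(float(softmax_loss(y_train[i, 0], y_i.reshape(1, -1))))

print(np.mean(losses))  # finite, no overflow warnings
```

Note that y_train[i] is a length-1 array (CIFAR-10 labels are shaped (n, 1)), so the sketch indexes y_train[i, 0] to get a scalar class index.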
