Keras transfer-learning model barely learns: validation accuracy plateaus at 45% while training accuracy climbs to 90%

Asked by nuypyhwy on 2023-01-21

I have spent a lot of time on this image classification model. I have 70,000 images across 375 classes, and I have tried training it with VGG16, Xception, ResNet and MobileNet... I always hit the same 45% validation-accuracy ceiling, as you can see here.
I have tried adding dropout layers and regularization and got the same validation result, and data augmentation did not help much either. Any idea why this is not working?
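To be concrete, by dropout, regularization and data augmentation I mean changes along these lines; this is only a rough sketch (using the xception_model / num_classes names from the snippet below), not the exact code I ran:

from tensorflow.keras.layers import Dense, Dropout, GlobalAveragePooling2D
from tensorflow.keras import regularizers
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmented training data (the validation subset stays un-augmented).
augmented_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
    validation_split=0.2)

# Classification head with dropout and L2 weight decay on the dense layer.
x = GlobalAveragePooling2D()(xception_model.output)
x = Dropout(0.5)(x)
x = Dense(512, activation='relu',
          kernel_regularizer=regularizers.l2(1e-4))(x)
out = Dense(num_classes, activation='softmax')(x)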
Here is a code snippet of the last model I used:

from tensorflow import keras
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.applications import Xception
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras import regularizers
from PIL import Image, ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True

validation_datagen = ImageDataGenerator(rescale=1./255)
target_size = (height, width)

datagen = ImageDataGenerator(rescale=1./255,
    validation_split=0.2)

train_generator = datagen.flow_from_directory(
    path,
    target_size=(height, width),
    batch_size=batchSize,
    shuffle=True,
    class_mode='categorical',
    subset='training')

validation_generator = datagen.flow_from_directory(
    path,
    target_size=(height, width),
    batch_size=batchSize,
    class_mode='categorical',
    subset='validation')

num_classes = len(train_generator.class_indices)


xception_model = Xception(weights='imagenet', input_shape=(width, height, 3), include_top=False, classes=num_classes)
x = xception_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(512, activation='relu')(x)
out = Dense(num_classes, activation='softmax')(x)

# Build the full model from the Xception backbone plus the new head.
model = Model(inputs=xception_model.input, outputs=out)

opt = Adam()
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])

n_epochs = 15
history = model.fit(
    train_generator,
    steps_per_epoch=train_generator.samples // batchSize,
    validation_data=validation_generator,
    validation_steps=validation_generator.samples // batchSize,
    verbose=1,
    epochs=n_epochs)
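The training-vs-validation accuracy comparison mentioned above ("as you can see here") comes from the history object returned by fit(); a minimal sketch of how it can be plotted, assuming matplotlib is available:

import matplotlib.pyplot as plt

# With metrics=['accuracy'], Keras records these keys in history.history.
plt.plot(history.history['accuracy'], label='train accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()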


vmdwslir 1#

Yes, you probably need a dataset that is balanced across the classes to get better training performance. Also retry with class_mode='sparse' and loss='sparse_categorical_crossentropy', since you are working with an image dataset, and freeze the pretrained layers with xception_model.trainable = False.
Check the following code (I used a flowers dataset with 5 classes):

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D

xception_model = tf.keras.applications.Xception(weights='imagenet', input_shape=(width, height, 3), include_top=False, classes=num_classes)
xception_model.trainable = False  # freeze the pretrained backbone
x = xception_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(32, activation='relu')(x)
out = Dense(num_classes, activation='softmax')(x)

opt = tf.keras.optimizers.Adam()
model = keras.Model(inputs=xception_model.input, outputs=out)
model.compile(optimizer=opt, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
history = model.fit(train_generator, epochs=10, validation_data=validation_generator)

Output:

Epoch 1/10
217/217 [==============================] - 23s 95ms/step - loss: 0.5945 - accuracy: 0.7793 - val_loss: 0.4610 - val_accuracy: 0.8337
Epoch 2/10
217/217 [==============================] - 20s 91ms/step - loss: 0.3439 - accuracy: 0.8797 - val_loss: 0.4550 - val_accuracy: 0.8419
Epoch 3/10
217/217 [==============================] - 20s 93ms/step - loss: 0.2570 - accuracy: 0.9150 - val_loss: 0.4437 - val_accuracy: 0.8384
Epoch 4/10
217/217 [==============================] - 20s 91ms/step - loss: 0.2040 - accuracy: 0.9340 - val_loss: 0.4592 - val_accuracy: 0.8477
Epoch 5/10
217/217 [==============================] - 20s 91ms/step - loss: 0.1649 - accuracy: 0.9494 - val_loss: 0.4686 - val_accuracy: 0.8512
Epoch 6/10
217/217 [==============================] - 20s 92ms/step - loss: 0.1301 - accuracy: 0.9589 - val_loss: 0.4805 - val_accuracy: 0.8488
Epoch 7/10
217/217 [==============================] - 20s 93ms/step - loss: 0.0966 - accuracy: 0.9754 - val_loss: 0.4993 - val_accuracy: 0.8442
Epoch 8/10
217/217 [==============================] - 20s 91ms/step - loss: 0.0806 - accuracy: 0.9806 - val_loss: 0.5488 - val_accuracy: 0.8372
Epoch 9/10
217/217 [==============================] - 20s 91ms/step - loss: 0.0623 - accuracy: 0.9864 - val_loss: 0.5802 - val_accuracy: 0.8360
Epoch 10/10
217/217 [==============================] - 22s 100ms/step - loss: 0.0456 - accuracy: 0.9896 - val_loss: 0.6005 - val_accuracy: 0.8360
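One detail the snippet above glosses over: with loss='sparse_categorical_crossentropy' the generators must yield integer class indices rather than one-hot vectors, so the flow_from_directory calls from your question need class_mode='sparse' as well. A minimal sketch, reusing the path / height / width / batchSize names from the question:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1./255, validation_split=0.2)

# class_mode='sparse' yields integer class indices, matching the sparse loss.
train_generator = datagen.flow_from_directory(
    path,
    target_size=(height, width),
    batch_size=batchSize,
    shuffle=True,
    class_mode='sparse',
    subset='training')

validation_generator = datagen.flow_from_directory(
    path,
    target_size=(height, width),
    batch_size=batchSize,
    class_mode='sparse',
    subset='validation')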

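As for the balanced-dataset suggestion: since flow_from_directory expects one sub-directory per class, you can check how balanced the 375 classes are directly from the folder structure. A rough sketch, with path as in your question:

import os

# Count images in each class sub-directory under the dataset root.
counts = {
    cls: len(os.listdir(os.path.join(path, cls)))
    for cls in sorted(os.listdir(path))
    if os.path.isdir(os.path.join(path, cls))
}

print('classes:', len(counts))
print('smallest classes:', sorted(counts.items(), key=lambda kv: kv[1])[:10])
print('largest classes:', sorted(counts.items(), key=lambda kv: kv[1])[-10:])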