Keras: add a regularizer to an existing layer of a trained model without resetting the weights?

0ve6wy6x  posted 2023-06-06  in Other
Follow (0) | Answers (7) | Views (353)

Suppose I'm doing transfer learning with Inception. I added a few layers and trained it for a while.
Here is my model topology:

from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

base_model = InceptionV3(weights='imagenet', include_top=False)
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu', name='Dense_1')(x)
predictions = Dense(12, activation='softmax', name='Predictions')(x)
model = Model(inputs=base_model.input, outputs=predictions)

I trained this model for a while, saved it, and loaded it again to continue training; this time I want to add an l2 regularizer to Dense_1 without resetting its weights. Is that possible?

path = r'.\model.hdf5'
from keras.models import load_model
model = load_model(path)

The docs only show that a regularizer can be passed as an argument when a layer is first created:

from keras import regularizers
model.add(Dense(64, input_dim=64,
                kernel_regularizer=regularizers.l2(0.01),
                activity_regularizer=regularizers.l1(0.01)))

This essentially creates a new layer, so my layer's weights would be reset.
EDIT:
I've been playing with the code for the past few days, and something strange happens to my loss when I load the model (after training for a bit with the new regularizer).
The first time I run this code (the first run with the new regularizer):

from keras import regularizers
from keras.models import Model, load_model
from keras.optimizers import SGD

base_model = load_model(path)
x = base_model.get_layer('dense_1').output
predictions = base_model.get_layer('dense_2')(x)
model = Model(inputs=base_model.input, outputs=predictions)
model.get_layer('dense_1').kernel_regularizer = regularizers.l2(0.02)

model.compile(optimizer=SGD(lr= .0001, momentum=0.90),
              loss='categorical_crossentropy',
              metrics = ['accuracy'])

My training output looks normal:

Epoch 43/50
 - 2918s - loss: 0.3834 - acc: 0.8861 - val_loss: 0.4253 - val_acc: 0.8723
Epoch 44/50
Epoch 00044: saving model to E:\Keras Models\testing_3\2018-01-18_44.hdf5
 - 2692s - loss: 0.3781 - acc: 0.8869 - val_loss: 0.4217 - val_acc: 0.8729
Epoch 45/50
 - 2690s - loss: 0.3724 - acc: 0.8884 - val_loss: 0.4169 - val_acc: 0.8748
Epoch 46/50
Epoch 00046: saving model to E:\Keras Models\testing_3\2018-01-18_46.hdf5
 - 2684s - loss: 0.3688 - acc: 0.8896 - val_loss: 0.4137 - val_acc: 0.8748
Epoch 47/50
 - 2665s - loss: 0.3626 - acc: 0.8908 - val_loss: 0.4097 - val_acc: 0.8763
Epoch 48/50
Epoch 00048: saving model to E:\Keras Models\testing_3\2018-01-18_48.hdf5
 - 2681s - loss: 0.3586 - acc: 0.8924 - val_loss: 0.4069 - val_acc: 0.8767
Epoch 49/50
 - 2679s - loss: 0.3549 - acc: 0.8930 - val_loss: 0.4031 - val_acc: 0.8776
Epoch 50/50
Epoch 00050: saving model to E:\Keras Models\testing_3\2018-01-18_50.hdf5
 - 2680s - loss: 0.3493 - acc: 0.8950 - val_loss: 0.4004 - val_acc: 0.8787

However, if I try to load the model after this mini training session (loading from epoch 00050, so the new regularizer should already be in effect), I get a very high loss value.
Code:

path = r'E:\Keras Models\testing_3\2018-01-18_50.hdf5' #50th epoch model

from keras.models import load_model
model = load_model(path)
model.compile(optimizer=SGD(lr= .0001, momentum=0.90),
              loss='categorical_crossentropy',
              metrics = ['accuracy'])

This returns:

Epoch 51/65
 - 3130s - loss: 14.0017 - acc: 0.8953 - val_loss: 13.9529 - val_acc: 0.8800
Epoch 52/65
Epoch 00052: saving model to E:\Keras Models\testing_3\2018-01-20_52.hdf5
 - 2813s - loss: 13.8017 - acc: 0.8969 - val_loss: 13.7553 - val_acc: 0.8812
Epoch 53/65
 - 2759s - loss: 13.6070 - acc: 0.8977 - val_loss: 13.5609 - val_acc: 0.8824
Epoch 54/65
Epoch 00054: saving model to E:\Keras Models\testing_3\2018-01-20_54.hdf5
 - 2748s - loss: 13.4115 - acc: 0.8992 - val_loss: 13.3697 - val_acc: 0.8824
Epoch 55/65
 - 2745s - loss: 13.2217 - acc: 0.9006 - val_loss: 13.1807 - val_acc: 0.8840
Epoch 56/65
Epoch 00056: saving model to E:\Keras Models\testing_3\2018-01-20_56.hdf5
 - 2752s - loss: 13.0335 - acc: 0.9014 - val_loss: 12.9951 - val_acc: 0.8840
Epoch 57/65
 - 2756s - loss: 12.8490 - acc: 0.9023 - val_loss: 12.8118 - val_acc: 0.8849
Epoch 58/65
Epoch 00058: saving model to E:\Keras Models\testing_3\2018-01-20_58.hdf5
 - 2749s - loss: 12.6671 - acc: 0.9032 - val_loss: 12.6308 - val_acc: 0.8849
Epoch 59/65
 - 2738s - loss: 12.4871 - acc: 0.9039 - val_loss: 12.4537 - val_acc: 0.8855
Epoch 60/65
Epoch 00060: saving model to E:\Keras Models\testing_3\2018-01-20_60.hdf5
 - 2765s - loss: 12.3086 - acc: 0.9059 - val_loss: 12.2778 - val_acc: 0.8868
Epoch 61/65
 - 2767s - loss: 12.1353 - acc: 0.9065 - val_loss: 12.1055 - val_acc: 0.8867
Epoch 62/65
Epoch 00062: saving model to E:\Keras Models\testing_3\2018-01-20_62.hdf5
 - 2757s - loss: 11.9637 - acc: 0.9061 - val_loss: 11.9351 - val_acc: 0.8883

Note that the loss values are very high. Is this normal? I understand that the l2 regularizer adds to the loss (when the weights are large), but shouldn't that already have shown up in the first mini training session (where I first introduced the regularizer)? The accuracy, however, seems to stay consistent.
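One way to check whether the regularization term is actually being counted is to compare model.losses before and after reloading; a minimal diagnostic sketch (assuming the model and path from above):

from keras.models import load_model

# before reloading: the regularizer was only assigned as an attribute
print(len(model.losses))      # 0 would mean the penalty never entered the loss

# after reloading: the saved config carries the regularizer, so the rebuilt
# layer registers its L2 penalty with the model
reloaded = load_model(path)
print(len(reloaded.losses))   # > 0 means the L2 term is now part of the loss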
Thank you.


lyr7nygr1#

You need to do two things:
1. Add the regularizer like this:

model.get_layer('Dense_1').kernel_regularizer = l2(0.01)

2. Recompile the model:

model.compile(...)
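Put together, a minimal sketch of this suggestion (assuming the saved model from the question):

from keras import regularizers
from keras.models import load_model
from keras.optimizers import SGD

model = load_model(r'.\model.hdf5')
model.get_layer('Dense_1').kernel_regularizer = regularizers.l2(0.01)

# recompiling rebuilds the training function with the updated loss
model.compile(optimizer=SGD(lr=0.0001, momentum=0.90),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

Note that a later answer reports layer.losses can remain empty after this; if so, a config round-trip (see the to_json/model_from_json answer below) is needed.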

zfycwa2u2#

For TensorFlow 2.x, you only need to do this:

import tensorflow as tf

l2 = tf.keras.regularizers.l2(1e-4)
for layer in model.layers:
    # use `if hasattr(layer, 'kernel'):` to apply to every weighted layer,
    # or, to apply just to Conv layers:
    if isinstance(layer, tf.keras.layers.Conv2D):
        # bind `layer` as a default argument so each lambda keeps its own layer
        model.add_loss(lambda layer=layer: l2(layer.kernel))

Hope this helps.
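To check that the penalties were actually registered, count the collected loss tensors and recompile (a small sketch, assuming model is the network from above):

# one loss term should appear per matching Conv2D layer
print(len(model.losses))

# recompile so training picks up the extra terms
model.compile(optimizer='sgd', loss='categorical_crossentropy')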


5cg8jx4n3#

Marcin's solution didn't work for me. As apatsekin mentioned, if you print layer.losses after adding the regularizer the way Marcin suggests, you get an empty list.
I found a workaround that I don't like at all, but I'm posting it here so that someone more capable can find an easier way to do it.
I believe it works for most keras.applications networks. I copied the .py file for the specific architecture (e.g. InceptionResNetV2) from keras-applications on GitHub to a local file regularizedNetwork.py on my machine. I had to edit it to fix some relative imports, for example, changing:

#old version
from . import imagenet_utils
from .imagenet_utils import decode_predictions
from .imagenet_utils import _obtain_input_shape

backend = None
layers = None
models = None
keras_utils = None

to:

#new version
from keras import backend
from keras import layers
from keras import models
from keras import utils as keras_utils

from keras.applications import imagenet_utils
from keras.applications.imagenet_utils import decode_predictions
from keras.applications.imagenet_utils import _obtain_input_shape

Once the relative paths and imports were fixed, I added a regularizer to each desired layer, just as if I were defining a new, untrained network. Usually, the keras.applications model loads the pretrained weights after the architecture is defined.
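For example, where the original file creates a weighted layer without a regularizer, the edited copy creates it with one, before the pretrained weights are loaded. A hypothetical sketch of the kind of edit:

from keras import layers, regularizers

# original: no regularizer
dense = layers.Dense(1024, activation='relu', name='Dense_1')

# edited copy: the same layer, now with an l2 penalty on its kernel
dense = layers.Dense(1024, activation='relu', name='Dense_1',
                     kernel_regularizer=regularizers.l2(0.01))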
Now, in the main code/notebook, just import the new regularizedNetwork.py and call its main method to instantiate the network.

#main code
from regularizedNetwork import InceptionResNetV2

The regularizers should all be in place, and you can fine-tune the regularized model normally.
I'm sure there is a less convoluted way to do this, so please, if anyone finds it, write a new answer and/or comment on this one.
For the record, I also tried instantiating the model from keras.applications, getting its architecture with regModel = model.get_config(), adding the regularizers as Marcin suggested, and then loading the weights with regModel.set_weights(model.get_weights()), but it still didn't work.
Edit: fixed typos.


j9per5c44#

Try this:

import tensorflow as tf

# a utility function to add weight decay after the model is defined
def add_weight_decay(model, weight_decay):
    if (weight_decay is None) or (weight_decay == 0.0):
        return

    # recurse into nested models
    def add_decay_loss(m, factor):
        if isinstance(m, tf.keras.Model):
            for layer in m.layers:
                add_decay_loss(layer, factor)
        else:
            for param in m.trainable_weights:
                with tf.keras.backend.name_scope('weight_regularizer'):
                    # bind `param` as a default argument so each closure keeps
                    # its own weight instead of the loop's last one
                    regularizer = lambda param=param: tf.keras.regularizers.l2(factor)(param)
                    m.add_loss(regularizer)

    # weight decay and l2 regularization differ by a factor of 2
    add_decay_loss(model, weight_decay / 2.0)
    return
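A hedged usage sketch (MobileNetV2 here is just a stand-in; any tf.keras model works):

import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights=None)
add_weight_decay(model, weight_decay=1e-4)

# recompile so the added loss terms enter training
model.compile(optimizer='sgd', loss='categorical_crossentropy')
print(len(model.losses))  # one term per trainable weight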

o4tp2gmn5#

This is a bit hacky, but it should work for pretrained models in TensorFlow 2.0. Note that it only touches the layers directly in model.layers, i.e. nested weighted layers will be skipped. The solution is taken from https://sthalles.github.io/keras-regularizer/

import os
import tempfile

import tensorflow as tf

def add_regularization(model, regularizer=tf.keras.regularizers.l2(0.0001)):

    if not isinstance(regularizer, tf.keras.regularizers.Regularizer):
        print("Regularizer must be a subclass of tf.keras.regularizers.Regularizer")
        return model

    for layer in model.layers:
        for attr in ['kernel_regularizer']:
            if hasattr(layer, attr):
                setattr(layer, attr, regularizer)

    # Changing the layer attributes only affects the model config,
    # so serialize the updated config...
    model_json = model.to_json()

    # ...save the weights before rebuilding the model...
    tmp_weights_path = os.path.join(tempfile.gettempdir(), 'tmp_weights.h5')
    model.save_weights(tmp_weights_path)

    # ...rebuild the model from the updated config...
    model = tf.keras.models.model_from_json(model_json)

    # ...and reload the trained weights into the rebuilt model.
    model.load_weights(tmp_weights_path, by_name=True)
    return model
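Usage is then a single call (a sketch; MobileNetV2 is just a stand-in model):

import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights=None)
model = add_regularization(model, tf.keras.regularizers.l2(1e-4))
print(len(model.losses))  # non-empty once the rebuilt layers carry the regularizer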

toe950276#

A workaround from the Horovod examples. The idea is to serialize the model config, add the L2 regularizer to it, and then restore the model from the config.

import keras

model_config = model.get_config()
for layer, layer_config in zip(model.layers, model_config['layers']):
    if hasattr(layer, 'kernel_regularizer'):
        # args.wd is the weight-decay factor (a command-line argument in the Horovod example)
        regularizer = keras.regularizers.l2(args.wd)
        layer_config['config']['kernel_regularizer'] = \
            {'class_name': regularizer.__class__.__name__,
             'config': regularizer.get_config()}
    if type(layer) == keras.layers.BatchNormalization:
        layer_config['config']['momentum'] = 0.9
        layer_config['config']['epsilon'] = 1e-5

model = keras.models.Model.from_config(model_config)
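One caveat (a hedged note, not part of the original snippet): Model.from_config builds fresh layers with newly initialized weights, so to keep the trained weights you can capture them before the rebuild and restore them afterwards:

weights = model.get_weights()   # capture before rebuilding
model = keras.models.Model.from_config(model_config)
model.set_weights(weights)      # restore into the rebuilt graph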

e5nqia277#

Iterate over all the layers of InceptionV3:

from typing import Optional

import tensorflow as tf

def apply_regularization(
    model: tf.keras.Model,
    l1_regularization: Optional[float],
    l2_regularization: Optional[float],
) -> tf.keras.Model:
    for layer in model.layers:
        if hasattr(layer, "kernel_regularizer"):
            if l1_regularization:
                layer.kernel_regularizer = tf.keras.regularizers.l1(l1_regularization)
            if l2_regularization:
                layer.kernel_regularizer = tf.keras.regularizers.l2(l2_regularization)
    return model
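Hypothetical usage; note, as an earlier answer reports, that assigning kernel_regularizer on an already-built layer may not register until the model is rebuilt from its config:

import tensorflow as tf

base = tf.keras.applications.InceptionV3(weights=None, include_top=False)
base = apply_regularization(base, l1_regularization=None, l2_regularization=1e-4)
print(len(base.losses))  # if empty, a to_json/model_from_json round-trip is still needed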
