python Keras/tensorflow: combined loss function for a single output

ds97pgxw · posted 2023-01-11 in Python

My model has only one output, but I want to combine two different loss functions:

def get_model():
    # create the model here
    model = Model(inputs=image, outputs=output)

    alpha = 0.2
    model.compile(loss=[mse, gse],
                  loss_weights=[1 - alpha, alpha],
                  ...)

But it complains that I need two outputs because I defined two losses:

ValueError: When passing a list as loss, it should have one entry per model outputs. 
The model has 1 outputs, but you passed loss=[<function mse at 0x0000024D7E1FB378>, <function gse at 0x0000024D7E1FB510>]

Can I write my final loss function without having to create another loss function (because that would restrict me from changing alpha outside the loss function)?

    • How can I do something like (1-alpha)*mse + alpha*gse?

Update:
Both of my loss functions have the same signature as any built-in Keras loss function: they accept y_true and y_pred and return a tensor of losses (which can be reduced to a scalar with K.mean()). But I believe that, as long as these loss functions return valid losses, how they are defined should not affect the answer.

def gse(y_true, y_pred):
    # some tensor operation on y_pred and y_true
    return K.mean(K.square(y_pred - y_true), axis=-1)

lyr7nygr 1#

Specify a custom function for the loss:

model = Model(inputs=image, outputs=output)

alpha = 0.2
model.compile(
    loss=lambda y_true, y_pred: (1 - alpha) * mse(y_true, y_pred) + alpha * gse(y_true, y_pred),
    ...)

Or, if you don't want an ugly lambda, make it an actual function:

def my_loss(y_true, y_pred):
    return (1 - alpha) * mse(y_true, y_pred) + alpha * gse(y_true, y_pred)

model = Model(inputs=image, outputs=output)

alpha = 0.2
model.compile(loss=my_loss, ...)
  • Edit:

If your alpha is not some global constant, you can have a "loss function factory":

def make_my_loss(alpha):
    def my_loss(y_true, y_pred):
        return (1 - alpha) * mse(y_true, y_pred) + alpha * gse(y_true, y_pred)
    return my_loss

model = Model(inputs=image, outputs=output)

alpha = 0.2
my_loss = make_my_loss(alpha)
model.compile(loss=my_loss, ...)
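
Building on the factory idea, if alpha should change *during* training (which is what the asker wants to keep possible), one option is to store it in a tf.Variable and update it from a callback, so the loss picks up the new value without recompiling. This is a sketch, not from the answer itself; gse below is a stand-in for the asker's custom loss (its real definition was not given):

```python
import tensorflow as tf
from tensorflow import keras

# alpha lives in a (non-trainable) variable, so the loss sees updates
alpha = tf.Variable(0.2, trainable=False, dtype=tf.float32)

def mse(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_pred - y_true), axis=-1)

def gse(y_true, y_pred):
    # stand-in for the asker's custom loss; only the signature matters here
    return tf.reduce_mean(tf.abs(y_pred - y_true), axis=-1)

def my_loss(y_true, y_pred):
    return (1.0 - alpha) * mse(y_true, y_pred) + alpha * gse(y_true, y_pred)

class AlphaSchedule(keras.callbacks.Callback):
    """Illustrative schedule: ramps alpha from 0.2 toward 0.5 over epochs."""
    def on_epoch_end(self, epoch, logs=None):
        alpha.assign(min(0.5, 0.2 + 0.05 * (epoch + 1)))
```

Pass AlphaSchedule() in the callbacks list of model.fit; my_loss goes to model.compile(loss=my_loss, ...) as before.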

8yoxcaq7 2#

Yes, define your own custom loss function and pass it to the loss argument when compiling:

def custom_loss(y_true, y_pred):
    return (1 - alpha) * K.mean(K.square(y_true - y_pred)) + alpha * gse(y_true, y_pred)

(Not sure what you mean by gse; the call above assumes it takes the standard (y_true, y_pred) signature.) It is helpful to look at how the standard losses are implemented in Keras: https://github.com/keras-team/keras/blob/master/keras/losses.py

bvjveswy 3#

The loss should be a single function; you are giving the model a list of two functions.
Try:

def mse(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true), axis=-1)

model.compile(loss=lambda y_true, y_pred: mse(y_true, y_pred) * (1 - alpha) + gse(y_true, y_pred) * alpha,
              ...)

mcdcgff0 4#

This doesn't specifically answer the original question; I wanted to write it because the same error occurs when trying to load a Keras model that has a custom loss with keras.models.load_model, and there is no proper answer for that anywhere. Specifically, following the VAE example code in the keras github repository, this error occurs when loading a VAE model after it was saved with model.save.
The solution is to save only the weights using vae.save_weights('file.h5') instead of saving the whole model. However, you have to build and compile the model again before loading the weights with vae.load_weights('file.h5').
Here is an example implementation.

# imports assumed by this example (not shown in the original answer)
from keras.layers import Input, Dense, Lambda
from keras.models import Model
from keras.losses import mse
from keras import backend as K

class VAE():
    def build_model(self): # latent_dim and intermediate_dim can be passed as arguments
        def sampling(args):
            """Reparameterization trick by sampling from an isotropic unit Gaussian.
            # Arguments
                args (tensor): mean and log of variance of Q(z|X)
            # Returns
                z (tensor): sampled latent vector
            """

            z_mean, z_log_var = args
            batch = K.shape(z_mean)[0]
            dim = K.int_shape(z_mean)[1]
            # by default, random_normal has mean = 0 and std = 1.0
            epsilon = K.random_normal(shape=(batch, dim))
            return z_mean + K.exp(0.5 * z_log_var) * epsilon

        # original_dim = self.no_features
        # intermediate_dim = 256
        latent_dim = 8
        inputs = Input(shape=(self.no_features,))
        x = Dense(256, activation='relu')(inputs)
        x = Dense(128, activation='relu')(x)
        x = Dense(64, activation='relu')(x)
        z_mean = Dense(latent_dim, name='z_mean')(x)
        z_log_var = Dense(latent_dim, name='z_log_var')(x)
        # use reparameterization trick to push the sampling out as input
        # note that "output_shape" isn't necessary with the TensorFlow backend
        z = Lambda(sampling, output_shape=(latent_dim,), name='z')([z_mean, z_log_var])
        # instantiate encoder model
        encoder = Model(inputs, [z_mean, z_log_var, z], name='encoder')

        # build decoder model
        latent_inputs = Input(shape=(latent_dim,), name='z_sampling')
        x = Dense(32, activation='relu')(latent_inputs)
        x = Dense(48, activation='relu')(x)
        x = Dense(64, activation='relu')(x)
        outputs = Dense(self.no_features, activation='linear')(x)

        # instantiate decoder model
        decoder = Model(latent_inputs, outputs, name='decoder')

        # instantiate VAE model
        outputs = decoder(encoder(inputs)[2])
        VAE = Model(inputs, outputs, name='vae_mlp')
        reconstruction_loss = mse(inputs, outputs)
        reconstruction_loss *= self.no_features
        kl_loss = 1 + z_log_var - K.square(z_mean) - K.exp(z_log_var)
        kl_loss = K.sum(kl_loss, axis=-1)
        kl_loss *= -0.5
        vae_loss = K.mean(reconstruction_loss + kl_loss)
        VAE.add_loss(vae_loss)
        VAE.compile(optimizer='adam')
        return VAE

Now,

vae_cls = VAE()
vae = vae_cls.build_model()
# vae.fit()
vae.save_weights('file.h5')

To load the model and predict (import the VAE class if this is in a different script):

vae_cls = VAE()
vae = vae_cls.build_model()
vae.load_weights('file.h5')
# vae.predict()

Finally, the differences: [ ref ]
Keras model.save saves:
1. the model weights
2. the model architecture
3. the model compilation details (loss function and metrics)
4. the model optimizer and regularizer states
Keras model.save_weights saves only the model weights, and Keras model.to_json() saves only the model architecture.
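
The rebuild-and-restore pattern above can also be expressed with to_json/model_from_json instead of re-running the builder code. A sketch (the helper names and file names here are illustrative, not from the answer); note the restored model is uncompiled, so compile it again before further training:

```python
from tensorflow import keras

def save_model_parts(model, arch_path, weights_path):
    # persist architecture and weights separately, avoiding model.save
    with open(arch_path, 'w') as f:
        f.write(model.to_json())          # architecture only
    model.save_weights(weights_path)      # weights only

def load_model_parts(arch_path, weights_path):
    with open(arch_path) as f:
        model = keras.models.model_from_json(f.read())
    model.load_weights(weights_path)
    return model  # uncompiled: predict works, recompile before training
```

This avoids load_model's need to deserialize the custom loss, at the cost of keeping the architecture file alongside the weights.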
Hope this helps someone experimenting with variational autoencoders.

66bbxpm5 5#

Combining MAE and RMSE:

import tensorflow as tf
from tensorflow import keras

def loss_fn_mae_rmse(y_true, y_pred, alpha=0.8):
    mae = keras.losses.MeanAbsoluteError()
    mse = keras.losses.MeanSquaredError()
    return alpha * mae(y_true, y_pred) + (1 - alpha) * tf.sqrt(mse(y_true, y_pred))

model = keras.Model(inputs=..., outputs=...)
opt = keras.optimizers.Adam(learning_rate=1e-4)
model.compile(optimizer=opt, loss=loss_fn_mae_rmse, metrics=['mae'])
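
Because loss_fn_mae_rmse is a plain function, it can be sanity-checked eagerly on small tensors before training; the tensors and the hand-computed expectation below are just an illustration, not part of the original answer:

```python
import math
import tensorflow as tf
from tensorflow import keras

def loss_fn_mae_rmse(y_true, y_pred, alpha=0.8):
    mae = keras.losses.MeanAbsoluteError()
    mse = keras.losses.MeanSquaredError()
    return alpha * mae(y_true, y_pred) + (1 - alpha) * tf.sqrt(mse(y_true, y_pred))

y_true = tf.constant([[0.0], [0.0]])
y_pred = tf.constant([[1.0], [3.0]])
# MAE = (1 + 3) / 2 = 2, RMSE = sqrt((1 + 9) / 2) = sqrt(5)
expected = 0.8 * 2.0 + 0.2 * math.sqrt(5.0)
print(float(loss_fn_mae_rmse(y_true, y_pred)), expected)
```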

Also, if you want to load this model after it has been trained and saved to disk:

model = keras.models.load_model('path/to/model.h5', custom_objects={'loss_fn_mae_rmse': loss_fn_mae_rmse})
