Keras custom loss function with three network inputs and model.add_loss

myzjeezk · asked on 2023-03-02

Hello, I need some help with a custom loss function in Keras. I have built a UNET with a second input that takes a weight map, as in the original UNET paper. However, I am using this UNET for image synthesis, and my loss function is a combination of a perceptual loss and a pixel loss computed from three inputs. The UNET itself is a standard UNET with an encoder, a decoder, and skip connections.
Here is my code for the network and the loss function:

def synthesis_unet_weights(pretrained_weights=None, input_shape=(SIZE_s, SIZE_s, 3), num_classes=1, is_training=True):

    ip        = Input(shape=input_shape)
    weight_ip = Input(shape=input_shape[:2] + (num_classes,))

    # ... UNET encoder with the first Conv2D layer taking input ip ...
    #-----------------------------------------------------------------------------------------------------------
    center = Conv2D(1024, (3,3), padding='same', activation='relu', kernel_initializer=initializer)(pool4)
    center = Conv2D(1024, (3,3), padding='same', activation='relu', kernel_initializer=initializer)(center)
    #-----------------------------------------------------------------------------------------------------------
    # ... UNET decoder with the last layer up1 ...

    classify = Conv2D(num_classes, (1,1), activation='sigmoid')(up1)

    if is_training:
        model = Model(inputs=[ip, weight_ip], outputs=[classify])
        model.add_loss(perceptual_loss_weight(ip, classify, weight_ip))
        return model

    else:
        # No weight map is available at inference time, so the input image is reused in its place.
        model = Model(inputs=[ip], outputs=[classify])
        weight_ip = ip
        model.add_loss(perceptual_loss_weight(ip, classify, weight_ip))

        opt2 = tf.keras.optimizers.Adam(learning_rate=1e-3, clipnorm=1.0)
        model.compile(optimizer=opt2)
        return model


def perceptual_loss_weight(input_image, reconstruct_image, weights):

    input_image       = clip_0_1(input_image)
    # Tile the single-channel reconstruction and weight map to 3 channels for the perceptual loss model.
    reconstruct_image = tf.concat((reconstruct_image, reconstruct_image, reconstruct_image), axis=-1)
    reconstruct_image = clip_0_1(reconstruct_image)
    weights = tf.concat((weights, weights, weights), axis=-1)
    weights = clip_0_1(weights)

    h1_list = LossModel(input_image)
    h2_list = LossModel(reconstruct_image)

    rc_loss = 0.0

    # Weighted feature-reconstruction (perceptual) loss over the selected layers.
    for h1, h2, weight in zip(h1_list, h2_list, selected_layer_weights):
        h1 = K.batch_flatten(h1)
        h2 = K.batch_flatten(h2)
        rc_loss = rc_loss + weight * K.sum(K.square(h1 - h2), axis=-1)

    # Weighted per-pixel loss between the input image and the reconstruction.
    pixel_loss = K.sum(K.square(K.batch_flatten(weights) * K.batch_flatten(input_image)
                                - K.batch_flatten(weights) * K.batch_flatten(reconstruct_image)), axis=1)
    return rc_loss + pixel_loss
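
The helpers clip_0_1, LossModel, and selected_layer_weights are not shown in the post. For reference, a typical setup for this kind of perceptual loss (my assumption, not the poster's actual code) is a frozen VGG16 truncated at a few feature layers:

import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model

def clip_0_1(x):
    # Clamp pixel values into [0, 1], matching the clipping used in the post.
    return tf.clip_by_value(x, 0.0, 1.0)

# Hypothetical layer choice and weights; the post does not say which were used.
selected_layers = ['block1_conv2', 'block2_conv2', 'block3_conv3']
selected_layer_weights = [1.0, 1.0, 1.0]

vgg = VGG16(weights='imagenet', include_top=False)
vgg.trainable = False
LossModel = Model(inputs=vgg.input,
                  outputs=[vgg.get_layer(name).output for name in selected_layers])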

The weight input is only used by the loss function during training. I managed to train the model (compiled with loss=None), but it does not predict what it should. It looks as if the input is simply passed through the network unmodified straight to the output: the reconstructed output image looks exactly like the input image.

gk7wooem (Answer #1)

OK, I have found the conceptual error: I was feeding the input image (input_image) to the loss function, but it should actually be the y_true labels. A possible solution is to give the network an additional input, ip_labels, which provides the y_true that the perceptual-loss @tf.function needs. Below is a working solution, written as a custom loss function together with a dummy loss layer:

def MyLoss2(input_image, reconstruct_image, weight_ip):

    @tf.function
    def perceptual_loss(input_image, reconstruct_image):

        input_image       = clip_0_1(input_image)
        reconstruct_image = clip_0_1(reconstruct_image)
        weights = clip_0_1(weight_ip)

        h1_list = LossModel(input_image)
        h2_list = LossModel(reconstruct_image)

        rc_loss = 0.0

        for h1, h2, weight in zip(h1_list, h2_list, selected_layer_weights):
            h1 = K.batch_flatten(h1)
            h2 = K.batch_flatten(h2)
            rc_loss = rc_loss + weight * K.sum(K.square(h1 - h2), axis=-1)

        pixel_loss = K.sum(K.square(K.batch_flatten(weights) * K.batch_flatten(input_image)
                                    - K.batch_flatten(weights) * K.batch_flatten(reconstruct_image)), axis=1)
        return rc_loss + pixel_loss

    return perceptual_loss(input_image, reconstruct_image)

The loss layer is implemented as follows:

class DummyLayer(Layer):

    def __init__(self, is_training):
        # Note: super().__init__() must be called without is_training, otherwise
        # it would be passed to Layer as the positional `trainable` argument.
        super().__init__()
        self.is_training = is_training

    def get_config(self):
        config = super().get_config()
        config.update({
            "is_training": self.is_training,
        })
        return config

    def call(self, inputs, is_training):
        # Unpack the three stacked tensors: labels, reconstruction, weight map.
        ip, classify, weight_ip = tf.unstack(inputs, axis=-1)
        self.add_loss(MyLoss2(ip, classify, weight_ip))
        return inputs
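
For context, this follows the standard Keras pattern of a layer registering a loss from inside call(): anything passed to self.add_loss() is collected by the model and optimized even when compile() is given no explicit loss. A minimal generic illustration of that pattern (mine, not part of the answer):

import tensorflow as tf
from tensorflow.keras.layers import Layer

class L2PenaltyLayer(Layer):
    # Toy layer: adds an L2 activity penalty and passes its input through unchanged.
    def call(self, inputs):
        self.add_loss(1e-3 * tf.reduce_sum(tf.square(inputs)))
        return inputs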

The synthesis UNET with the additional input ip_labels:

def synthesis_unet_weights(pretrained_weights=None, input_shape=(SIZE_s, SIZE_s, 3), num_classes=1, is_training=True):

    ip        = Input(shape=input_shape)
    ip_labels = Input(shape=input_shape)
    weight_ip = Input(shape=input_shape[:2] + (num_classes,))

    down1 = Conv2D(64, (3,3), padding='same', activation=LeakyReLU(alpha=0.3), kernel_initializer=initializer)(ip)

    # ... UNET encoder with skips ...

    center = Conv2D(1024, (3,3), padding='same', activation='relu', kernel_initializer=initializer)(pool4)
    center = Conv2D(1024, (3,3), padding='same', activation='relu', kernel_initializer=initializer)(center)

    # ... UNET decoder with the last layer up1 ...

    classify = Conv2D(num_classes, (1,1), activation='sigmoid')(up1)

    if is_training:
        # Tile classify and weight_ip to 3 channels and stack labels, reconstruction,
        # and weight map into one tensor so they can pass through the loss layer together.
        data = tf.stack([ip_labels,
                         tf.concat((classify, classify, classify), axis=3),
                         tf.concat((weight_ip, weight_ip, weight_ip), axis=3)], axis=-1)

        classify = DummyLayer(is_training=True)(data, is_training=True)
        inp, classify, weight_inp = tf.unstack(classify, axis=-1)

        model = Model(inputs=[ip, ip_labels, weight_ip], outputs=[classify])

        opt = tf.keras.optimizers.Adam(learning_rate=1e-3, clipnorm=1.0)
        model.compile(optimizer=opt, metrics=['mse','mae'])
        return model

    else:
        # At inference time there are no labels or weight maps, so ip stands in for both.
        data = tf.stack([ip, tf.concat((classify, classify, classify), axis=3), ip], axis=-1)

        classify = DummyLayer(is_training=False)(data, is_training=False)
        inp, classify, weight_inp = tf.unstack(classify, axis=-1)

        model = Model(inputs=[ip], outputs=[classify])

        opt = tf.keras.optimizers.Adam(learning_rate=1e-3, clipnorm=1.0)
        model.compile(optimizer=opt)
        return model

Training:

model = synthesis_unet_weights()
model.fit([input_images, labels, weight_maps], labels)
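
For inference, the answer does not show the usage, but presumably one rebuilds the model with is_training=False and transfers the trained weights; the save/load-by-name step below is my assumption, not part of the answer:

# Hypothetical inference usage; weights are transferred by name because the
# training and inference models have different inputs.
model.save_weights('synthesis_unet.h5')

inference_model = synthesis_unet_weights(is_training=False)
inference_model.load_weights('synthesis_unet.h5', by_name=True, skip_mismatch=True)

reconstructed = inference_model.predict(input_images)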

I know the code for stacking and unstacking the tensors is not particularly elegant, but it works.
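
For readers unfamiliar with the trick, a toy shape check (my own illustration, not part of the answer) shows what the stacking does:

import tensorflow as tf

labels  = tf.random.uniform((2, 8, 8, 3))   # y_true labels, 3 channels
recon   = tf.random.uniform((2, 8, 8, 3))   # classify tiled to 3 channels
weights = tf.random.uniform((2, 8, 8, 3))   # weight map tiled to 3 channels

packed = tf.stack([labels, recon, weights], axis=-1)
print(packed.shape)                          # (2, 8, 8, 3, 3)

a, b, c = tf.unstack(packed, axis=-1)
print(a.shape, b.shape, c.shape)             # each (2, 8, 8, 3)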
