How to compute the macro F1 score in Keras?

cotxawn7 asked on 2022-12-29

I tried using the code that Keras shipped before these metrics were removed. Here it is:

def precision(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    return precision

def recall(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    recall = true_positives / (possible_positives + K.epsilon())
    return recall

def fbeta_score(y_true, y_pred, beta=1):
    if beta < 0:
        raise ValueError('The lowest choosable beta is zero (only precision).')

    # If there are no true positives, fix the F score at 0 like sklearn.
    if K.sum(K.round(K.clip(y_true, 0, 1))) == 0:
        return 0

    p = precision(y_true, y_pred)
    r = recall(y_true, y_pred)
    bb = beta ** 2
    fbeta_score = (1 + bb) * (p * r) / (bb * p + r + K.epsilon())
    return fbeta_score

def fmeasure(y_true, y_pred):
    return fbeta_score(y_true, y_pred, beta=1)

From what I can tell, they are using the correct formulas. However, when I try to use them as metrics during training, the outputs for val_accuracy, val_precision, val_recall, and val_fmeasure come out exactly equal. I believe this could happen even with correct formulas, but it seems unlikely. Is there an explanation for this?

myzjeezk #1

That's because Keras 2.0 removed the f1, precision, and recall metrics. The solution is to use a custom metric function:

from keras import backend as K

def f1(y_true, y_pred):
    def recall(y_true, y_pred):
        """Recall metric.

        Only computes a batch-wise average of recall.

        Computes the recall, a metric for multi-label classification of
        how many relevant items are selected.
        """
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
        recall = true_positives / (possible_positives + K.epsilon())
        return recall

    def precision(y_true, y_pred):
        """Precision metric.

        Only computes a batch-wise average of precision.

        Computes the precision, a metric for multi-label classification of
        how many selected items are relevant.
        """
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
        precision = true_positives / (predicted_positives + K.epsilon())
        return precision
    precision = precision(y_true, y_pred)
    recall = recall(y_true, y_pred)
    return 2*((precision*recall)/(precision+recall+K.epsilon()))

model.compile(loss='binary_crossentropy',
              optimizer="adam",
              metrics=[f1])

The return line of this function

return 2*((precision*recall)/(precision+recall+K.epsilon()))

was modified by adding the constant epsilon to avoid division by zero, so no NaN is computed.
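To see what the epsilon is doing, here is a plain-NumPy version of the same computation (a sketch of mine, not part of the original code; `f1_np` is a made-up name):

```python
import numpy as np

def f1_np(y_true, y_pred, eps=1e-7):
    # mirror the Keras metric: round predictions to {0, 1} first
    y_pred = np.round(np.clip(y_pred, 0, 1))
    tp = np.sum(y_true * y_pred)
    precision = tp / (np.sum(y_pred) + eps)
    recall = tp / (np.sum(y_true) + eps)
    return 2 * precision * recall / (precision + recall + eps)

y_true = np.array([1.0, 0.0, 1.0, 1.0])
y_pred = np.array([0.9, 0.2, 0.4, 0.8])   # rounds to [1, 0, 0, 1]
print(f1_np(y_true, y_pred))              # precision 1.0, recall 2/3 -> f1 = 0.8
print(f1_np(np.zeros(4), np.zeros(4)))    # no positives at all: 0.0, not NaN
```

Without the epsilon terms, the second call would divide 0 by 0 and return NaN.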

gopyfrb3 #2

Using a Keras metric function is not the right way to compute F1, AUC, or similar values.
The reason is that the metric function is called at every batch step during validation, and Keras then averages the per-batch results. That average is not the true F1 score.
This is why the F1 score was removed from the metric functions.

The correct approach is to use a custom callback function instead.
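For illustration, the epoch-end step of such a callback boils down to thresholding the predictions over the *whole* validation set and handing them to sklearn. A minimal sketch (the function name `epoch_end_f1` and the 0.5 threshold are my own choices, not from this answer):

```python
import numpy as np
from sklearn.metrics import f1_score

def epoch_end_f1(y_true, y_prob, threshold=0.5):
    """What a callback would run once per epoch on the full validation set."""
    y_pred = (np.asarray(y_prob).ravel() >= threshold).astype(int)
    return f1_score(y_true, y_pred)

# probabilities over the entire validation set, not a single batch
score = epoch_end_f1([1, 0, 1, 1], [0.9, 0.6, 0.4, 0.8])
print(score)   # tp=2, fp=1, fn=1 -> precision = recall = 2/3, f1 = 2/3
```

Because this runs on all validation predictions at once, it avoids the batch-averaging problem described above.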

o4tp2gmn #3

This is a streaming custom f1_score metric that I created using subclassing. It works with the TensorFlow 2.0 beta, but I haven't tried it on other versions. It tracks true positives, predicted positives, and all possible positives over the whole epoch, then computes the F1 score at the end of the epoch. I think the other answers only give the per-batch f1 score, which is not really the best metric when what we actually want is the f1 score over all the data.

I got an unedited early copy of Aurélien Geron's new book, Machine Learning with Scikit-Learn & TensorFlow 2.0, and highly recommend it. It is how I learned to build this custom f1 metric with subclassing, and it is easily the most comprehensive TensorFlow book I've seen. TensorFlow is a pain to learn, and this guy lays the coding groundwork so you learn a lot.

FYI: in `metrics` I had to include the parentheses, i.e. F1_score(), or it won't work.

pip install tensorflow==2.0.0-beta1

from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow import keras
import numpy as np

def create_f1():
    def f1_function(y_true, y_pred):
        y_pred_binary = tf.where(y_pred>=0.5, 1., 0.)
        tp = tf.reduce_sum(y_true * y_pred_binary)
        predicted_positives = tf.reduce_sum(y_pred_binary)
        possible_positives = tf.reduce_sum(y_true)
        return tp, predicted_positives, possible_positives
    return f1_function

class F1_score(keras.metrics.Metric):
    def __init__(self, **kwargs):
        super().__init__(**kwargs) # handles base args (e.g., dtype)
        self.f1_function = create_f1()
        self.tp_count = self.add_weight("tp_count", initializer="zeros")
        self.all_predicted_positives = self.add_weight('all_predicted_positives', initializer='zeros')
        self.all_possible_positives = self.add_weight('all_possible_positives', initializer='zeros')

    def update_state(self, y_true, y_pred,sample_weight=None):
        tp, predicted_positives, possible_positives = self.f1_function(y_true, y_pred)
        self.tp_count.assign_add(tp)
        self.all_predicted_positives.assign_add(predicted_positives)
        self.all_possible_positives.assign_add(possible_positives)

    def result(self):
        # epsilon guards against division by zero before any positives are seen
        precision = self.tp_count / (self.all_predicted_positives + keras.backend.epsilon())
        recall = self.tp_count / (self.all_possible_positives + keras.backend.epsilon())
        f1 = 2*(precision*recall)/(precision+recall+keras.backend.epsilon())
        return f1

X = np.random.random(size=(1000, 10))     
Y = np.random.randint(0, 2, size=(1000,))
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)

model = keras.models.Sequential([
    keras.layers.Dense(5, input_shape=[X.shape[1], ]),
    keras.layers.Dense(1, activation='sigmoid')
])

model.compile(loss='binary_crossentropy', optimizer='SGD', metrics=[F1_score()])

history = model.fit(X_train, y_train, epochs=5, validation_data=(X_test, y_test))
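Note that, like the other answers, this metric computes the binary F1. For the macro F1 the question asks about, you average the per-class F1 scores. A plain-NumPy sketch of that definition (`macro_f1` is my own helper, equivalent to sklearn's `f1_score(..., average='macro')`):

```python
import numpy as np

def macro_f1(y_true, y_pred, n_classes, eps=1e-7):
    # unweighted mean of the one-vs-rest F1 of each class
    f1s = []
    for c in range(n_classes):
        tp = np.sum((y_true == c) & (y_pred == c))
        precision = tp / (np.sum(y_pred == c) + eps)
        recall = tp / (np.sum(y_true == c) + eps)
        f1s.append(2 * precision * recall / (precision + recall + eps))
    return float(np.mean(f1s))

y_true = np.array([0, 1, 2, 0, 1, 2])
y_pred = np.array([0, 2, 1, 0, 0, 1])
print(macro_f1(y_true, y_pred, 3))   # only class 0 is ever predicted correctly
```

To get a streaming macro F1, you would keep per-class tp/predicted/possible counters in the subclassed metric above instead of the single binary counters.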
ldioqlga #4

As @Diesche mentioned, the main problem with implementing f1_score this way is that it is called at every batch step, and the result is more confusing than anything else.

I struggled with this issue for a while, but eventually solved it with a callback: at the end of an epoch, the callback predicts on the data (in this case, I chose to apply it only to my validation data) with the updated model parameters, and gives you consistent metrics evaluated over the whole epoch.

I'm using tensorflow-gpu (1.14.0) on python3.

import numpy as np
from tensorflow.python.keras.models import Sequential, Model
from sklearn.metrics import f1_score, precision_score, recall_score
from tensorflow.keras.callbacks import Callback
from tensorflow.python.keras import optimizers


optimizer = optimizers.SGD(lr=0.0001, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer=optimizer, loss="binary_crossentropy", metrics=['accuracy'])
model.summary()

class Metrics(Callback):
    def __init__(self, model, valid_data, true_outputs):
        super(Metrics, self).__init__()
        self.model=model
        self.valid_data=valid_data    #the validation data I'm getting metrics on
        self.true_outputs=true_outputs    #the ground truth of my validation data
        self.steps=len(self.valid_data)

    def on_epoch_end(self, epoch, logs=None):
        gen = generator(self.valid_data)     # generator yielding the validation data
        val_predict = np.asarray(self.model.predict(gen, batch_size=1, verbose=0, steps=self.steps))

        # from_proba_to_output (below) turns probabilities into the 0/1
        # labels that sklearn's metric functions expect
        val_predict = from_proba_to_output(val_predict, 0.5)
        _val_f1 = f1_score(self.true_outputs, val_predict)
        _val_precision = precision_score(self.true_outputs, val_predict)
        _val_recall = recall_score(self.true_outputs, val_predict)
        print("val_f1: ", _val_f1, "   val_precision: ", _val_precision, "   val_recall: ", _val_recall)

The function from_proba_to_output looks like this:

def from_proba_to_output(probabilities, threshold):
    outputs = np.copy(probabilities)
    for i in range(len(outputs)):
        if float(outputs[i]) > threshold:
            outputs[i] = 1
        else:
            outputs[i] = 0
    return np.array(outputs)

Then I train my model by referencing this Metrics class in the callbacks argument of fit_generator. I won't detail the implementations of train_generator and valid_generator, since these data generators are specific to the classification problem at hand and posting them would only bring confusion.

model.fit_generator(train_generator, epochs=nbr_epochs, verbose=1,
                    validation_data=valid_generator,
                    callbacks=[Metrics(model, valid_data, true_outputs)])
zlhcx6iw #5

As @Pedia said in the comment above, on_epoch_end, as described in github.com/fchollet/keras/issues/5400, is the best approach.

cwxwcias #6

I'd also suggest this workaround:

  • install the keras_metrics package by ybubnov
  • call model.fit(nb_epoch=1, ...) inside a for loop, taking advantage of the precision/recall metrics printed after every epoch

Something like this:

for epoch in range(epochs):
    model_hist = model.fit(X_train, Y_train, batch_size=batch_size, epochs=1,
                           verbose=2, validation_data=(X_val, Y_val))

    precision = model_hist.history['val_precision'][0]
    recall = model_hist.history['val_recall'][0]
    f_score = (2.0 * precision * recall) / (precision + recall)
    print('F1-SCORE {}'.format(f_score))
