How is the model.evaluate() function implemented in the TensorFlow transfer learning tutorial?

5tmbdcev · asked 2021-07-13

After working through the transfer learning tutorial on the TensorFlow website, I have a question about how model.evaluate() works compared with computing the accuracy by hand.
At the end, after fine-tuning, in the Evaluation and prediction section, we use model.evaluate() to compute the accuracy on the test set like this:

loss, accuracy = model.evaluate(test_dataset)
print('Test accuracy :', accuracy)
6/6 [==============================] - 2s 217ms/step - loss: 0.0516 - accuracy: 0.9740
Test accuracy : 0.9739583134651184
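
For reference, my understanding is that model.evaluate() just runs the compiled loss and metrics over the dataset batch by batch, roughly like the sketch below. Here I'm assuming that the string 'accuracy' passed to model.compile() gets resolved to tf.keras.metrics.BinaryAccuracy for this binary-crossentropy model; how that metric turns the raw model outputs into 0/1 predictions is exactly what I'm unsure about:

# Sketch of what I assume evaluate() does under the hood (not the real implementation)
metric = tf.keras.metrics.BinaryAccuracy()  # assumed resolution of metrics=['accuracy']
for image_batch, label_batch in test_dataset:
    outputs = tf.squeeze(model(image_batch, training=False), axis=-1)  # raw logits, shape (batch,)
    metric.update_state(label_batch, outputs)  # raw model outputs go straight to the metric
print('accuracy:', metric.result().numpy())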

Next, as part of the visualization exercise, we manually generate predictions on a batch of images from the test set:


# Retrieve a batch of images from the test set and run it through the model
image_batch, label_batch = test_dataset.as_numpy_iterator().next()
predictions = model.predict_on_batch(image_batch).flatten()

# Apply a sigmoid since our model returns logits
predictions = tf.nn.sigmoid(predictions)
predictions = tf.where(predictions < 0.5, 0, 1)
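
As a side note, because sigmoid(0) = 0.5, thresholding the sigmoid output at 0.5 picks the same classes as thresholding the raw logits at 0, for example:

logits = tf.constant([-2.0, -0.1, 0.3, 1.5])
via_probs  = tf.where(tf.nn.sigmoid(logits) < 0.5, 0, 1)  # threshold probabilities at 0.5
via_logits = tf.where(logits < 0.0, 0, 1)                 # threshold logits at 0
print(via_probs.numpy(), via_logits.numpy())              # both give [0 0 1 1]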

However, this can also be extended to compute predictions over the entire test set and compare them against the true labels to get an average accuracy:

all_acc=tf.zeros([], tf.int32) #initialize array to hold all accuracy indicators (single element)
for image_batch, label_batch in test_dataset.as_numpy_iterator():
    predictions = model.predict_on_batch(image_batch).flatten() #run batch through model and return logits
    predictions = tf.nn.sigmoid(predictions) #apply sigmoid activation function to transform logits to [0,1]
    predictions = tf.where(predictions < 0.5, 0, 1) #round down or up accordingly since it's a binary classifier
    accuracy = tf.where(tf.equal(predictions,label_batch),1,0) #correct is 1 and incorrect is 0
    all_acc = tf.experimental.numpy.append(all_acc, accuracy)
all_acc = all_acc[1:]  #drop first placeholder element
avg_acc = tf.math.reduce_mean(tf.dtypes.cast(all_acc, tf.float16)) 
print('My Accuracy:', avg_acc.numpy()) 
My Accuracy: 0.974
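
The same manual check can be written more compactly by letting a Keras metric accumulate over the batches instead of appending to a tensor; this sketch should produce the same number as the loop above:

manual_acc = tf.keras.metrics.BinaryAccuracy(threshold=0.5)  # default threshold
for image_batch, label_batch in test_dataset.as_numpy_iterator():
    probs = tf.nn.sigmoid(model.predict_on_batch(image_batch).flatten())  # logits -> [0,1]
    manual_acc.update_state(label_batch, probs)  # thresholds at 0.5 and tracks the running mean
print('My Accuracy:', manual_acc.result().numpy())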

Now, if model.evaluate() generates predictions by applying a sigmoid to the model's logit outputs and thresholding at 0.5 (as the tutorial does), the manually computed accuracy should equal the accuracy reported by model.evaluate(). That is the case for the tutorial: My Accuracy: 0.974 equals the value from model.evaluate(). However, when I tried the same code with a model trained on the same convolutional base as the tutorial, but on different images (not cats and dogs as in the tutorial), my accuracy no longer matched the model.evaluate() accuracy:

current_set = set9 #define set to process. must do all nine, one at a time
all_acc=tf.zeros([], tf.int32) #initialize array to hold all accuracy indicators (single element)
loss, acc = model.evaluate(current_set) #now test the model's performance on the test set
for image_batch, label_batch in current_set.as_numpy_iterator():
    predictions = model.predict_on_batch(image_batch).flatten() #run batch through model and return logits
    predictions = tf.nn.sigmoid(predictions) #apply sigmoid activation function to transform logits to [0,1]
    predictions = tf.where(predictions < 0.5, 0, 1) #round down or up accordingly since it's a binary classifier
    accuracy = tf.where(tf.equal(predictions,label_batch),1,0) #correct is 1 and incorrect is 0
    all_acc = tf.experimental.numpy.append(all_acc, accuracy)
all_acc = all_acc[1:]  #drop first placeholder element
avg_acc = tf.math.reduce_mean(tf.dtypes.cast(all_acc, tf.float16))
print('My Accuracy:', avg_acc.numpy()) 
print('Tf Accuracy:', acc) 
My Accuracy: 0.7183
Tf Accuracy: 0.6240000128746033
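
To narrow down whether the gap comes from where the 0.5 threshold is applied, one minimal check (assuming, as above, that metrics=['accuracy'] resolved to BinaryAccuracy with its default threshold) would be to feed the same metric once with raw logits and once with sigmoid probabilities and compare both numbers against the two results above:

acc_on_logits = tf.keras.metrics.BinaryAccuracy(threshold=0.5)
acc_on_probs  = tf.keras.metrics.BinaryAccuracy(threshold=0.5)
for image_batch, label_batch in current_set.as_numpy_iterator():
    logits = model.predict_on_batch(image_batch).flatten()
    acc_on_logits.update_state(label_batch, logits)                # 0.5 threshold on raw logits
    acc_on_probs.update_state(label_batch, tf.nn.sigmoid(logits))  # 0.5 threshold on probabilities
print('on logits:', acc_on_logits.result().numpy())  # compare with Tf Accuracy above
print('on probs :', acc_on_probs.result().numpy())   # compare with My Accuracy above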

Does anyone know why there is a discrepancy? Does model.evaluate() not apply a sigmoid? Does it use a threshold other than 0.5? Or is it something else I'm not accounting for?
Thanks in advance for the help!
