I have imported the ViT-B32 model and fine-tuned it for an echo image classification task. Now I want to visualize the attention map so that I can see which part of the image the model attends to when it classifies. However, when I try to visualize the attention map after fine-tuning, I get an error. Here is the code:
!pip install --quiet vit-keras

from vit_keras import vit

vit_model = vit.vit_b32(
    image_size = IMAGE_SIZE,
    activation = 'softmax',
    pretrained = True,
    include_top = False,
    pretrained_top = False,
    classes = 3)
When I try to visualize the attention map without any fine-tuning, it works without any error:
import matplotlib.pyplot as plt
from vit_keras import visualize

x = test_gen.next()
image = x[0]

attention_map = visualize.attention_map(model = vit_model, image = image)

# Plot results
fig, (ax1, ax2) = plt.subplots(ncols = 2)
ax1.axis('off')
ax2.axis('off')
ax1.set_title('Original')
ax2.set_title('Attention Map')
_ = ax1.imshow(image)
_ = ax2.imshow(attention_map)
Now, in the code below, I add some classification layers on top of the model and fine-tune it:
import tensorflow as tf
import tensorflow_addons as tfa

model = tf.keras.Sequential([
        vit_model,
        tf.keras.layers.Flatten(),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(11, activation = tfa.activations.gelu),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(3, activation = 'softmax')
    ],
    name = 'vision_transformer')

model.summary()
Here is the output of the cell above:
> Model: "vision_transformer"
> _________________________________________________________________
> Layer (type)                 Output Shape              Param #
> =================================================================
> vit-b32 (Functional)         (None, 768)               87455232
> _________________________________________________________________
> flatten_1 (Flatten)          (None, 768)               0
> _________________________________________________________________
> batch_normalization_2 (Batch (None, 768)               3072
> _________________________________________________________________
> dense_2 (Dense)              (None, 11)                8459
> _________________________________________________________________
> batch_normalization_3 (Batch (None, 11)                44
> _________________________________________________________________
> dense_3 (Dense)              (None, 3)                  36
> =================================================================
> Total params: 87,466,843
> Trainable params: 87,465,285
> Non-trainable params: 1,558
> _________________________________________________________________
I then trained the model on my own medical dataset:
learning_rate = 1e-4
optimizer = tfa.optimizers.RectifiedAdam(learning_rate = learning_rate)

model.compile(optimizer = optimizer,
              loss = tf.keras.losses.CategoricalCrossentropy(label_smoothing = 0.2),
              metrics = ['accuracy'])

STEP_SIZE_TRAIN = train_gen.n // train_gen.batch_size
STEP_SIZE_VALID = valid_gen.n // valid_gen.batch_size

reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor = 'val_accuracy',
                                                 factor = 0.2,
                                                 patience = 2,
                                                 verbose = 1,
                                                 min_delta = 1e-4,
                                                 min_lr = 1e-6,
                                                 mode = 'max')

earlystopping = tf.keras.callbacks.EarlyStopping(monitor = 'val_accuracy',
                                                 min_delta = 1e-4,
                                                 patience = 5,
                                                 mode = 'max',
                                                 restore_best_weights = True,
                                                 verbose = 1)

checkpointer = tf.keras.callbacks.ModelCheckpoint(filepath = './model.hdf5',
                                                  monitor = 'val_accuracy',
                                                  verbose = 1,
                                                  save_best_only = True,
                                                  save_weights_only = True,
                                                  mode = 'max')

callbacks = [earlystopping, reduce_lr, checkpointer]

model.fit(x = train_gen,
          steps_per_epoch = STEP_SIZE_TRAIN,
          validation_data = valid_gen,
          validation_steps = STEP_SIZE_VALID,
          epochs = EPOCHS,
          callbacks = callbacks)

model.save('model.h5', save_weights_only = True)
After training, when I try to visualize the attention map of the model, it raises an error:
from vit_keras import visualize
x = test_gen.next()
image = x[0]
attention_map = visualize.attention_map(model = model, image = image)
# Plot results
fig, (ax1, ax2) = plt.subplots(ncols = 2)
ax1.axis('off')
ax2.axis('off')
ax1.set_title('Original')
ax2.set_title('Attention Map')
_ = ax1.imshow(image)
_ = ax2.imshow(attention_map)
Here is the error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-13-f208f2d2b771> in <module>
4 image = x[0]
5
----> 6 attention_map = visualize.attention_map(model = model, image = image)
7
8 # Plot results
/opt/conda/lib/python3.7/site-packages/vit_keras/visualize.py in attention_map(model, image)
14 """
15 size = model.input_shape[1]
---> 16 grid_size = int(np.sqrt(model.layers[5].output_shape[0][-2] - 1))
17
18 # Prepare the input
TypeError: 'NoneType' object is not subscriptable
Please suggest how to fix the above error and visualize the attention map of the fine-tuned model.
3 Answers
Answer 1 (uxhixvfz):
You can visualize the attention map with the following steps. Since attention_map expects the ViT model itself as its model argument, you have to pass it the first element of the fine-tuned model you defined as a tf.keras.Sequential, i.e. the ViT sub-model, for example:
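A minimal sketch of this, assuming the fine-tuned Sequential model from the question is still in memory as model and image is the same test image as above (the ViT backbone being its first layer, model.layers[0]):

from vit_keras import visualize

# The fine-tuned model is a tf.keras.Sequential whose first element is the
# original ViT model; pass that sub-model to attention_map instead of the
# whole Sequential model.
vit_backbone = model.layers[0]
attention_map = visualize.attention_map(model = vit_backbone, image = image)

The plotting code from the question can then be reused unchanged.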
Answer 2 (z8dt9xmd):

I have a workaround. I had the image path as a string, opened the image with the OpenCV library, and had previously loaded a fine-tuned ViT model. I think you just need to use the get_layer method and select your ViT: since you use it as a whole inside your Sequential model, it works there as a single layer.
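A rough sketch of that idea, assuming the fine-tuned Sequential model from the question is in memory as model and that the ViT sub-model keeps the layer name 'vit-b32' shown in model.summary() above (the image path below is a placeholder):

import cv2
from vit_keras import visualize

# Read the image from a path string with OpenCV and convert BGR -> RGB,
# since OpenCV loads images in BGR channel order.
image = cv2.imread('path/to/echo_image.png')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Pull the ViT backbone out of the Sequential model by its layer name and
# pass it to attention_map, as in the first answer.
vit_backbone = model.get_layer('vit-b32')
attention_map = visualize.attention_map(model = vit_backbone, image = image)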
Answer 3 (2nc8po8w):

I tried to apply this to my model and dataset. However, my attention map always comes out completely black and I cannot fix it. Does anyone know what might be causing this?