pytorch: fine-tune a model's classifier layer with new labels

lsmepo6l · asked on 2023-04-21 in Other

I want to further fine-tune an already fine-tuned BertForSequenceClassification model on a new dataset that contains only one additional label which the model has not seen before.
That way, I want to add one new label to the set of labels the model can currently classify correctly.
Moreover, I don't want the classifier weights to be randomly re-initialized; I want to keep them intact and update them from the dataset examples, while growing the classifier layer's size by 1.
The dataset used for the further fine-tuning could look like this:

sentence,label
intent example 1,new_label
intent example 2,new_label
...
intent example 10,new_label

The current classifier layer of my model looks like this:

Linear(in_features=768, out_features=135, bias=True)

How can I achieve this?
Is this a good approach?

watbbzwu 1#

You can extend the model's weight and bias tensors with new values. Have a look at the commented example below:

#This is the section that loads your model
#I will just use a pretrained model for this example
import torch
from torch import nn
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("jpcorb20/toxic-detector-distilroberta")
model = AutoModelForSequenceClassification.from_pretrained("jpcorb20/toxic-detector-distilroberta")
#we check the output of one sample to compare it later with the extended layer
#to verify that we kept the previous learnt "knowledge"
f = tokenizer.encode_plus("This is an example", return_tensors='pt')
print(model(**f).logits)

#Now we need to find out the name of the linear layer you want to extend
#The layers on top of distilroberta are wrapped inside a classifier section
#The name can differ for your model depending on the architecture;
#if so, inspect model.named_parameters() to find the classification layer
print(model.classifier)

#The output shows us that the classification layer is called `out_proj`
#We can now extend the weights by creating a new tensor that consists of the
#old weights and a randomly initialized tensor for the new label 
model.classifier.out_proj.weight = nn.Parameter(torch.cat((model.classifier.out_proj.weight, torch.randn(1,768)),0))

#We do the same for the bias:
model.classifier.out_proj.bias = nn.Parameter(torch.cat((model.classifier.out_proj.bias, torch.randn(1)),0))

#and be happy when we compare the output with our expectation 
print(model(**f).logits)

Output:

tensor([[-7.3604, -9.4899, -8.4170, -9.7688, -8.4067, -9.3895]],
       grad_fn=<AddmmBackward>)
RobertaClassificationHead(
  (dense): Linear(in_features=768, out_features=768, bias=True)
  (dropout): Dropout(p=0.1, inplace=False)
  (out_proj): Linear(in_features=768, out_features=6, bias=True)
)
tensor([[-7.3604, -9.4899, -8.4170, -9.7688, -8.4067, -9.3895,  2.2124]],
       grad_fn=<AddmmBackward>)
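
One detail the snippet above leaves out: the Linear layer's metadata and the model config still claim 6 labels, which matters as soon as you rely on the model's built-in loss computation or save and reload it with from_pretrained. Below is a minimal sketch of keeping them in sync; the label name "new_label" and the output directory are placeholders:

#forward() only uses .weight/.bias, so the model already returns 7 logits,
#but .out_features and the config still say 6
new_id = model.classifier.out_proj.weight.shape[0] - 1   #id of the added class: 6
model.classifier.out_proj.out_features = new_id + 1      #sync the Linear metadata
model.num_labels = new_id + 1                            #used by the built-in loss to reshape logits
model.config.id2label[new_id] = "new_label"              #placeholder name for the new class
model.config.label2id["new_label"] = new_id
model.config.num_labels = new_id + 1                     #len(id2label) now matches, so the maps are kept
model.save_pretrained("extended-model")                  #hypothetical output directory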

Note that you should fine-tune your model afterwards: the new weights are randomly initialized and will therefore hurt performance until they are trained.
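
A minimal sketch of that follow-up fine-tuning, continuing from the extended model above; the sentences, the label id, the learning rate and the epoch count are placeholders for your actual setup, and the loss is computed explicitly with cross_entropy so it does not depend on the config's label count:

import torch
import torch.nn.functional as F
from torch.optim import AdamW

model.train()
optimizer = AdamW(model.parameters(), lr=2e-5)
texts = ["intent example 1", "intent example 2"]   #rows from the new dataset
labels = torch.tensor([6, 6])                      #id of the new class

batch = tokenizer(texts, padding=True, return_tensors="pt")
for epoch in range(3):
    optimizer.zero_grad()
    logits = model(**batch).logits                 #shape (batch, 7)
    loss = F.cross_entropy(logits, labels)         #explicit single-label loss
    loss.backward()
    optimizer.step()

Fine-tuning on the new examples alone risks degrading the previously learnt labels (catastrophic forgetting), so it is advisable to mix in a sample of the original training data.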
