What is the classification head of Hugging Face's AutoModelForTokenClassification model in PyTorch?

qv7cva1a · asked 2023-04-06

I'm a beginner with Hugging Face and transformers, and I've been trying to figure out what the classification head of AutoModelForTokenClassification actually is. Is it just a BiLSTM-CRF layer, or something else?
More generally, where can I find detailed information about the heads of these AutoModel classes?
I tried looking through the documentation but couldn't find anything.

pwuypxnk #1

AutoModel* is not a PyTorch model implementation itself; it is a factory pattern. That means it returns an instance of a different concrete class depending on the arguments you provide. For example:

from transformers import AutoModelForTokenClassification

m = AutoModelForTokenClassification.from_pretrained("roberta-base")
print(type(m))

Output:

<class 'transformers.models.roberta.modeling_roberta.RobertaForTokenClassification'>
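If the documentation is not enough, you can jump straight to the source file that defines the returned class. A minimal sketch using Python's standard inspect module (only the model object m created above is assumed):

import inspect

# Path to the file where RobertaForTokenClassification is defined
print(inspect.getsourcefile(type(m)))
# The constructor shows exactly how the head (dropout + classifier) is built
print(inspect.getsource(type(m).__init__))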

You can check what the head is in the official documentation for that class, or by inspecting its parameters:

m.parameters

Output:

<bound method Module.parameters of RobertaForTokenClassification(
  (roberta): RobertaModel(
    (embeddings): RobertaEmbeddings(
      (word_embeddings): Embedding(50265, 768, padding_idx=1)
      (position_embeddings): Embedding(514, 768, padding_idx=1)
      (token_type_embeddings): Embedding(1, 768)
      (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
      (dropout): Dropout(p=0.1, inplace=False)
    )
    (encoder): RobertaEncoder(
      (layer): ModuleList(
        (0): RobertaLayer(
          (attention): RobertaAttention(
            (self): RobertaSelfAttention(
              (query): Linear(in_features=768, out_features=768, bias=True)
              (key): Linear(in_features=768, out_features=768, bias=True)
              (value): Linear(in_features=768, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
<... truncated ...>
        (11): RobertaLayer(
          (attention): RobertaAttention(
            (self): RobertaSelfAttention(
              (query): Linear(in_features=768, out_features=768, bias=True)
              (key): Linear(in_features=768, out_features=768, bias=True)
              (value): Linear(in_features=768, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (output): RobertaSelfOutput(
              (dense): Linear(in_features=768, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (intermediate): RobertaIntermediate(
            (dense): Linear(in_features=768, out_features=3072, bias=True)
            (intermediate_act_fn): GELUActivation()
          )
          (output): RobertaOutput(
            (dense): Linear(in_features=3072, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
      )
    )
  )
  (dropout): Dropout(p=0.1, inplace=False)
  (classifier): Linear(in_features=768, out_features=2, bias=True)
)>
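As the printout shows, the head sitting on top of RobertaModel is just a dropout followed by a single linear layer applied to each token's hidden state; there is no BiLSTM or CRF. You can isolate the head directly and set the number of output labels when loading. A minimal sketch (num_labels=9 is only an illustrative value):

from transformers import AutoModelForTokenClassification

# The token-classification head is dropout + one Linear(hidden_size -> num_labels)
m = AutoModelForTokenClassification.from_pretrained("roberta-base", num_labels=9)
print(m.dropout)     # Dropout(p=0.1, inplace=False)
print(m.classifier)  # Linear(in_features=768, out_features=9, bias=True)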
