PyTorch DeBERTa ONNX export does not work with token_type_ids

mutmk8jj · asked 2023-05-17 in Other

I noticed that this PR added support for exporting DeBERTa models to ONNX. I read through the PR carefully and checked everything I could think of, but I cannot get the code below to work.

from transformers import AutoTokenizer, AutoConfig, DebertaTokenizerFast, pipeline, DebertaV2Tokenizer, __version__
from optimum.onnxruntime import ORTModelForTokenClassification, ORTModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("{custom_fine_tuned_NER_DeBERTaV2}")
model = ORTModelForTokenClassification.from_pretrained(
    "{custom_fine_tuned_NER_DeBERTaV2}", export=True, use_auth_token=True
)

pipe = pipeline("ner", model=model, tokenizer=tokenizer)
pipe("MY TEXT GOES HERE")

It fails with the following stack trace:

---------------------------------------------------------------------------
InvalidArgument                           Traceback (most recent call last)
Cell In[25], line 1
----> 1 pipe("I am a skilled engineer. I have worked in JS, CPP, Java, J2ME, and Python. I know Oracle and MySQL")

File ~/skill_extraction/skill_extraction/lib/python3.10/site-packages/transformers/pipelines/token_classification.py:214, in TokenClassificationPipeline.__call__(self, inputs, **kwargs)
    211 if offset_mapping:
    212     kwargs["offset_mapping"] = offset_mapping
--> 214 return super().__call__(inputs, **kwargs)

File ~/skill_extraction/skill_extraction/lib/python3.10/site-packages/transformers/pipelines/base.py:1109, in Pipeline.__call__(self, inputs, num_workers, batch_size, *args, **kwargs)
   1101     return next(
   1102         iter(
   1103             self.get_iterator(
   (...)
   1106         )
   1107     )
   1108 else:
-> 1109     return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)

File ~/skill_extraction/skill_extraction/lib/python3.10/site-packages/transformers/pipelines/base.py:1116, in Pipeline.run_single(self, inputs, preprocess_params, forward_params, postprocess_params)
   1114 def run_single(self, inputs, preprocess_params, forward_params, postprocess_params):
   1115     model_inputs = self.preprocess(inputs, **preprocess_params)
-> 1116     model_outputs = self.forward(model_inputs, **forward_params)
   1117     outputs = self.postprocess(model_outputs, **postprocess_params)
   1118     return outputs

File ~/skill_extraction/skill_extraction/lib/python3.10/site-packages/transformers/pipelines/base.py:1015, in Pipeline.forward(self, model_inputs, **forward_params)
   1013     with inference_context():
   1014         model_inputs = self._ensure_tensor_on_device(model_inputs, device=self.device)
-> 1015         model_outputs = self._forward(model_inputs, **forward_params)
   1016         model_outputs = self._ensure_tensor_on_device(model_outputs, device=torch.device("cpu"))
   1017 else:

File ~/skill_extraction/skill_extraction/lib/python3.10/site-packages/transformers/pipelines/token_classification.py:240, in TokenClassificationPipeline._forward(self, model_inputs)
    238     logits = self.model(model_inputs.data)[0]
    239 else:
--> 240     output = self.model(**model_inputs)
    241     logits = output["logits"] if isinstance(output, dict) else output[0]
    243 return {
    244     "logits": logits,
    245     "special_tokens_mask": special_tokens_mask,
   (...)
    248     **model_inputs,
    249 }

File ~/skill_extraction/skill_extraction/lib/python3.10/site-packages/optimum/modeling_base.py:85, in OptimizedModel.__call__(self, *args, **kwargs)
     84 def __call__(self, *args, **kwargs):
---> 85     return self.forward(*args, **kwargs)

File ~/skill_extraction/skill_extraction/lib/python3.10/site-packages/optimum/onnxruntime/modeling_ort.py:1363, in ORTModelForTokenClassification.forward(self, input_ids, attention_mask, token_type_ids, **kwargs)
   1360     onnx_inputs["token_type_ids"] = token_type_ids
   1362 # run inference
-> 1363 outputs = self.model.run(None, onnx_inputs)
   1364 logits = outputs[self.output_names["logits"]]
   1366 if use_torch:

File ~/skill_extraction/skill_extraction/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:200, in Session.run(self, output_names, input_feed, run_options)
    198     output_names = [output.name for output in self._outputs_meta]
    199 try:
--> 200     return self._sess.run(output_names, input_feed, run_options)
    201 except C.EPFail as err:
    202     if self._enable_fallback:

InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid Feed Input Name:token_type_ids

I don't know what I am doing wrong. The same code works with other models (for example, another BERT-based model that I fine-tuned myself).
I am using tokenizers 0.13.3, transformers 4.27.4, and optimum 1.7.3, on an AMD-based EC2 (AWS) machine.
Please help; this is blocking me from optimizing my model, and I could not find any relevant documentation.
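
For reference, the input names the exported ONNX graph actually declares can be listed with onnxruntime (a minimal sketch; the model path is a placeholder for wherever the exported model.onnx ends up):

import onnxruntime as ort

# Placeholder path: point this at the model.onnx produced by the export
session = ort.InferenceSession("path/to/exported/model.onnx")

# List the graph's declared inputs; if "token_type_ids" is not among them,
# feeding it (as the tokenizer does by default) raises INVALID_ARGUMENT
for inp in session.get_inputs():
    print(inp.name, inp.shape, inp.type)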


owfi6suc · answer #1

I had the same error. The workaround is to drop "token_type_ids" when tokenizing the text and keep only "input_ids" and "attention_mask":

tokenizer.model_input_names = ['input_ids', 'attention_mask']
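
In context, the full workaround looks like this (a minimal sketch reusing the placeholder model name from the question). The mismatch most likely arises because the DeBERTa ONNX export omits the token_type_ids input (DeBERTa configs default to type_vocab_size=0), while the tokenizer still emits it:

from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("{custom_fine_tuned_NER_DeBERTaV2}")
# Stop the tokenizer from producing token_type_ids, since the exported
# ONNX graph declares no such input
tokenizer.model_input_names = ["input_ids", "attention_mask"]

model = ORTModelForTokenClassification.from_pretrained(
    "{custom_fine_tuned_NER_DeBERTaV2}", export=True, use_auth_token=True
)
pipe = pipeline("ner", model=model, tokenizer=tokenizer)
pipe("MY TEXT GOES HERE")

Alternatively, the extra key can be popped from each encoding before calling the model, but setting model_input_names once on the tokenizer keeps the pipeline code unchanged.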
