tokenizers BPE trainer ignores special tokens

inn6fuwd · asked 5 months ago

This is a question about training a custom tokenizer. I want merges in my assembly-code corpus to be able to span across an instruction (i.e. potentially multiple words). To achieve this, I replace every space with a dummy token (e.g. "<space>") and use a pre-tokenizer that splits on newlines. This mostly works, but I ran into a problem when trying to add special tokens.

Here is a minimal example that reproduces the problem:

from tokenizers import Tokenizer, Regex
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Sequence as PretokenizerSequence, Split
from tokenizers.normalizers import Sequence as NormalizerSequence, Replace, BertNormalizer, Strip

corpus_file = "corpus.txt"
special_tokens = [
    "<s>",
    "<pad>",
    "</s>",
    "<unk>"
]
for i in range(20):
    special_tokens.append(f"<disasm_function_{i}>")
    special_tokens.append(f"<disasm_string_{i}>")

tokenizer = Tokenizer(BPE())
tokenizer.add_special_tokens(special_tokens)

tokenizer.normalizer = NormalizerSequence([
    Strip(),
    BertNormalizer(clean_text=True, strip_accents=True, lowercase=True),
    Replace(Regex("\s{2,}"), " "),
    Replace(" ", "<space>")
])
tokenizer.pre_tokenizer = PretokenizerSequence([
    Split("\n", behavior="removed")
])

trainer = BpeTrainer(
    special_tokens=special_tokens, vocab_size=10000, min_frequency=2,
)
tokenizer.train(files=[corpus_file], trainer=trainer)

tokenizer.save("example_tokenizer.json")


An example segment of the corpus I am using for training looks something like this:

lea rsi,<code_addr_1> <string_literal><disasm_string_0></string_literal>
mov edi, eax
call ::<function_name><disasm_function_1></function_name>
mov rax, qword ptr <local_var_0>
mov rdi, rax
call ::<function_name><disasm_function_2></function_name>
mov rax, qword ptr <local_var_0>
mov rax, qword ptr [rax]<unk_0>
mov rdi, rax
call ::<function_name><disasm_function_3></function_name>


So the aim is to ensure that e.g. <disasm_function_1> is always a single token. This works at test time (these special tokens are always tokenized as single tokens), but it is clearly not being respected during BPE training. If I examine the tokens/merges that come out, many of them contain the special tokens inside them. For example, from the resulting JSON file:

"</return_val><calling_conv>stdcall</calling_conv><func_name><disasm_function_0></func_name>(": 370,
"popr1": 371,
"call::<function_name><disasm_function_2></function_name>": 372,


You can see that these learned tokens contain the special tokens embedded within them.
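
As a side note, one way to quantify the leakage is to scan the saved vocabulary for entries that embed one of the special tokens. This is a minimal sketch, assuming the tokenizer was saved to example_tokenizer.json as above and that the special_tokens list from the script is still in scope:

from tokenizers import Tokenizer

tok = Tokenizer.from_file("example_tokenizer.json")
vocab = tok.get_vocab()  # maps token string -> id

# learned tokens that contain a special token but are not special tokens themselves
leaky = [t for t in vocab
         if t not in special_tokens and any(s in t for s in special_tokens)]
print(f"{len(leaky)} learned tokens embed a special token, e.g. {leaky[:3]}")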

Is this expected behavior? My assumption was that the BPE trainer would prevent this from happening (it is given the list of special tokens, so why else would that argument exist?), and I would rather not fill the vocabulary with merges that will never be valid.

Is there a way to prevent this (or is there some setting I have not configured correctly)?

Edit:
My current workaround is:

tokenizer.pre_tokenizer = PretokenizerSequence([
    Split("\n", behavior="removed")
] + [Split(tok, behavior="isolated") for tok in special_tokens])

This seems to work, but it is probably not the best way to do it.
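
As a sanity check for the test-time behavior, the following sketch (assuming the workaround tokenizer was trained and saved as example_tokenizer.json) encodes one corpus line and prints the resulting tokens; <disasm_function_1> should come out as a single token:

from tokenizers import Tokenizer

tok = Tokenizer.from_file("example_tokenizer.json")
enc = tok.encode("call ::<function_name><disasm_function_1></function_name>")
print(enc.tokens)  # <disasm_function_1> appears as exactly one token

The extra Split(tok, behavior="isolated") pre-tokenizers isolate each special token into its own pre-token before training, and since BPE merges never cross pre-token boundaries, the trainer can no longer learn merges that span them.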


omtl5h9j1#

Hey! You were adding the tokens before initializing the normalizer. This works for me:

from tokenizers import Tokenizer, Regex
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Sequence as PretokenizerSequence, Split
from tokenizers.normalizers import Sequence as NormalizerSequence, Replace, BertNormalizer, Strip

corpus_file = "corpus.txt"
special_tokens = [
    "<s>",
    "<pad>",
    "</s>",
    "<unk>"
]
for i in range(20):
    special_tokens.append(f"<disasm_function_{i}>")
    special_tokens.append(f"<disasm_string_{i}>")

tokenizer = Tokenizer(BPE())
- tokenizer.add_special_tokens(special_tokens)

tokenizer.normalizer = NormalizerSequence([
    Strip(),
    BertNormalizer(clean_text=True, strip_accents=True, lowercase=True),
    Replace(Regex("\s{2,}"), " "),
    Replace(" ", "<space>")
])
tokenizer.pre_tokenizer = PretokenizerSequence([
    Split("\n", behavior="removed")
])
+ tokenizer.add_special_tokens(special_tokens)
trainer = BpeTrainer(
    special_tokens=special_tokens, vocab_size=10000, min_frequency=2,
)
tokenizer.train(files=[corpus_file], trainer=trainer)

tokenizer.save("example_tokenizer.json")

vd8tlhqk2#

So I tried this, and it still gives exactly the same result for me. It works at test time (just like the previous version did), but during training it still learns merges that cross the special tokens.


nwlqm0z13#

You are right, sorry. Here is a PR with a fix; not sure why we never had that.
