Hi,
I am trying to generate output similar to the interesting examples you provided, but I am still running into problems.
I am on Ubuntu 18.04, using the default 'unilmv1-large-cased.bin' from the GitHub link, with the following command:
python3 decode_seq2seq.py --bert_model bert-large-cased --model_recover_path storage/unilmv1-large-cased.bin --new_segment_ids --mode l2r --input_file test.txt --max_seq_length 256 --max_tgt_length 128 --batch_size 16 --forbid_duplicate_ngrams --temperature 1.0 --length_penalty 0 --min_len 500 --top_k 40 --output_file test_unilm_mode-l2r-topk-min_length500.txt
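(For context, here is a minimal sketch of what the `--top_k 40 --temperature 1.0` flags are generally expected to do at each decoding step: sample one token from the temperature-scaled distribution restricted to the k most likely tokens. This is a generic illustration, not the actual decode_seq2seq.py code; the function name `top_k_sample` and the toy logits are my own.)

```python
import numpy as np

def top_k_sample(logits, k=40, temperature=1.0, rng=None):
    """Sample one token id from the top-k of a logit vector.

    Generic sketch of top-k sampling with temperature; not taken
    from decode_seq2seq.py.
    """
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64) / temperature
    top = np.argsort(logits)[-k:]              # indices of the k largest logits
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()                       # softmax over the top-k only
    return int(rng.choice(top, p=probs))

# toy "vocabulary" of 10 tokens with increasing logits
logits = np.linspace(-2.0, 2.0, 10)
token = top_k_sample(logits, k=3)
# token is always one of the 3 highest-logit ids: 7, 8, or 9
```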
test.txt contains a single line from the paper:
Winston sat back. A sense of complete helplessness had descended upon him.
Result:
“ We ’ re going to be in the world ! ” He began to recount what he had learned . “ The world is a wreck , but here you stand in it , there are so many people who will not know it . And you are a damn coward , not to mention you are a little frightened , you are still scavenged . You are just a little unattracworthy that I ’ m going to take care of you . ” Worried that you were misunderstand , he shook his head . “ I mean , I have never taught you to learn the hard way , but if I
I have followed your instructions, but I cannot find the option for "...sampling from the distribution in the masked-LM fashion."
I also cannot find an option to increase the output length to match your examples; --min_len 500 has no effect.
I suspect I am missing an option in the terminal command above?
I would greatly appreciate it if you could provide the missing option.
Thanks!
1 Answer
Also interested in the above.