Python NotImplementedError: the lemmatize parameter is no longer supported

0yg35tkg  posted on 2023-02-21  in Python

I'm running my own code for a GPT-2-style model, but it fails with the error below. How do I fix this NotImplementedError in Python?

corpus = WikiCorpus(file_path, lemmatize=False, lower=False, tokenizer_func=tokenizer_func)
  File "C:\Rayi\python\text-generate\text-gene\lib\site-packages\gensim\corpora\wikicorpus.py", line 619, in __init__
    raise NotImplementedError(
NotImplementedError: The lemmatize parameter is no longer supported. If you need to lemmatize, use e.g. <https://github.com/clips/pattern>. Perform lemmatization as part of your tokenization function and pass it as the tokenizer_func parameter to this initializer.
import tensorflow as tf
from gensim.corpora import WikiCorpus
import os
import argparse

# lang = 'bn'

def store(corpus, lang):
    base_path = os.getcwd()
    store_path = os.path.join(base_path, '{}_corpus'.format(lang))
    if not os.path.exists(store_path):
        os.mkdir(store_path)
    file_idx=1
    for text in corpus.get_texts():
        current_file_path = os.path.join(store_path, 'article_{}.txt'.format(file_idx))
        with open(current_file_path, 'w', encoding='utf-8') as file:
            file.write(' '.join(text))
        #endwith
        file_idx += 1
    #endfor

def tokenizer_func(text: str, token_min_len: int, token_max_len: int, lower: bool) -> list:
    return [token for token in text.split() if token_min_len <= len(token) <= token_max_len]

def run(lang):
    origin='https://dumps.wikimedia.org/{}wiki/latest/{}wiki-latest-pages-articles.xml.bz2'.format(lang,lang)
    fname='{}wiki-latest-pages-articles.xml.bz2'.format(lang)
    file_path = tf.keras.utils.get_file(origin=origin, fname=fname, untar=False, extract=False)
    # passing the lemmatize argument is what raises the NotImplementedError shown above
    corpus = WikiCorpus(file_path, lemmatize=True, lower=False, tokenizer_func=tokenizer_func)
    store(corpus, lang)

if __name__ == '__main__':
    ARGS_PARSER = argparse.ArgumentParser()
    ARGS_PARSER.add_argument(
        '--lang',
        default='en',
        type=str,
        help='language code to download from wikipedia corpus'
    )
    ARGS = ARGS_PARSER.parse_args()
    run(**vars(ARGS))

7vux5j2d  #1

I couldn't find any real documentation for this function, only some example pages.
I simply call:

corpus = WikiCorpus(file_path, tokenizer_func=tokenizer_func)

If you still want to lemmatize, call a lemmatization function inside your tokenizer_func, as the error message suggests.
Now wait roughly 8 hours for the processing :D
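
For example, here is a minimal sketch of such a lemmatizing tokenizer, assuming NLTK is installed and its WordNet data has been downloaded; the name lemmatizing_tokenizer and the choice of WordNetLemmatizer are only illustrations, not part of the original code:

import nltk
from gensim.corpora import WikiCorpus
from nltk.stem import WordNetLemmatizer

nltk.download('wordnet')  # one-time download of the WordNet data (assumption: NLTK is available)
lemmatizer = WordNetLemmatizer()

def lemmatizing_tokenizer(text: str, token_min_len: int, token_max_len: int, lower: bool) -> list:
    # same length filter as the original tokenizer_func, plus optional lowercasing
    tokens = text.lower().split() if lower else text.split()
    # lemmatize each surviving token before handing it back to WikiCorpus
    return [
        lemmatizer.lemmatize(token)
        for token in tokens
        if token_min_len <= len(token) <= token_max_len
    ]

corpus = WikiCorpus(file_path, lower=False, tokenizer_func=lemmatizing_tokenizer)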
