Paddle: paddlenlp predict error when deploying with Flask

g2ieeal7 · posted 2021-11-30
  1. Configuration:
    CPU inference.
    macOS, Docker deployment, Python Flask service.
    Paddle version: 2.1.1. PaddleNLP version: 2.0.0. Python: 3.6.
    Model reference: https://aistudio.baidu.com/aistudio/projectdetail/2579580?forkThirdPart=1
  2. Error message:
    outputs, lens, decodes = model.predict(test_data=test_loader)
    # ValueError: underlying buffer has been detached
  3. Full error log:
    [08/Nov/2021 11:09:12] "GET /predict?img_path=/PaddleOCR/data/ HTTP/1.1" 500 -
    Traceback (most recent call last):
    File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 2091, incall
    return self.wsgi_app(environ, start_response)
    File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 2076, in wsgi_app
    response = self.handle_exception(e)
    File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 2073, in wsgi_app
    response = self.full_dispatch_request()
    File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1518, in full_dispatch_request
    rv = self.handle_user_exception(e)
    File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1516, in full_dispatch_request
    rv = self.dispatch_request()
    File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1502, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
    File "/PaddleOCR/ocr_predict.py", line 363, in predict
    outputs, lens, decodes = model.predict(test_data=test_loader)
    File "/usr/local/lib/python3.6/dist-packages/paddle/hapi/model.py", line 1926, in predict
    cbks.on_begin('predict', logs)
    File "/usr/local/lib/python3.6/dist-packages/paddle/hapi/callbacks.py", line 103, in on_begin
    self._call(name, logs)
    File "/usr/local/lib/python3.6/dist-packages/paddle/hapi/callbacks.py", line 94, in _call
    func(*args)
    File "/usr/local/lib/python3.6/dist-packages/paddle/hapi/callbacks.py", line 484, in on_predict_begin
    num=self.test_steps, verbose=self.verbose)
    File "/usr/local/lib/python3.6/dist-packages/paddle/hapi/progressbar.py", line 53, in
    init**
    self.file.isatty()) or 'ipykernel' in sys.modules or
    ValueError: underlying buffer has been detached
    172.17.0.1 - - [08/Nov/2021 11:09:12] "GET /predict?debugger=yes&cmd=resource&f=style.css HTTP/1.1" 200 -
    172.17.0.1 - - [08/Nov/2021 11:09:12] "GET /predict?debugger=yes&cmd=resource&f=debugger.js HTTP/1.1" 200 -
    172.17.0.1 - - [08/Nov/2021 11:09:12] "GET /predict?debugger=yes&cmd=resource&f=console.png HTTP/1.1" 200 -
    172.17.0.1 - - [08/Nov/2021 11:09:12] "GET /predict?debugger=yes&cmd=resource&f=ubuntu.ttf HTTP/1.1" 200 -
  4. What I have checked so far:
    Local inference runs without errors. The full pipeline combines an OCR model and an NER model; the OCR part deployed under Flask works fine, and the data and data types fed into the NER predict are correct.
    Relevant parts of the code:

# (2) Initialize the NER model, vocabularies and label data

label_vocab = load_dict(tag_path)
word_vocab = load_dict(word_path)
network = BiGRUWithCRF2(300, 300, len(word_vocab), len(label_vocab))
print('Vocabulary, labels and model loaded')


# (3) Load the NER model parameters

layer_state_dict = paddle.load(ner_param)
opt_state_dict = paddle.load(ner_pdopt)
print('NER model parameters loaded')

# (4) Copy the parameters into the model
network.set_state_dict(layer_state_dict)
print('NER model parameters set')

# (5) Initialize the NER model
model = paddle.Model(network)
optimizer = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())
optimizer.set_state_dict(opt_state_dict)
print('Everything the NER model needs is loaded')

crf_loss = LinearChainCrfLoss(network.crf)
chunk_evaluator = ChunkEvaluator(label_list=label_vocab.keys(), suffix=True)
model.prepare(optimizer, crf_loss, chunk_evaluator)
print('NER model initialization complete')

test_ds = load_dataset(str_temp)
print(0)
print(test_ds[:2])
test_ds.map(convert_example)
print(test_ds)
print(1)

# Pad token ids with the OOV index, stack the sequence lengths
batchify_fn = lambda samples, fn=Tuple(
    Pad(axis=0, pad_val=word_vocab.get('OOV')),  # token_ids
    Stack()                                      # seq_len
): fn(samples)

test_loader = paddle.io.DataLoader(
    dataset=test_ds,
    batch_size=1,
    drop_last=True,
    return_list=True,
    collate_fn=batchify_fn)

print(test_loader)
print(2)
print(str_temp)

# 5. Feed the data into the model and run prediction (this is the failing call)
outputs, lens, decodes = model.predict(test_data=test_loader)

  For details see: https://aistudio.baidu.com/aistudio/projectdetail/2579580?forkThirdPart=1
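A minimal workaround sketch for the error above, assuming (as the traceback suggests) that Paddle's progress bar calls isatty() on a sys.stdout whose underlying buffer has been detached by the serving environment. Depending on the Paddle version, the progress bar may bind its output stream as early as module import, so the safest place for the repair is the very top of the Flask entry script, before any paddle/paddlenlp import. This is not a fix confirmed in this thread.

import sys

# Workaround sketch (an assumption, not the thread's confirmed fix):
# make sure stdout is usable before paddle gets hold of it.
try:
    sys.stdout.isatty()   # raises ValueError exactly as in the traceback above
except ValueError:
    # Rebind stdout to a fresh, line-buffered text stream on file descriptor 1.
    sys.stdout = open(1, mode="w", buffering=1, closefd=False, encoding="utf-8")

If the stream only breaks later (for example per request), the same check-and-rebind can also run right before the model.predict(...) call.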

rsl1atfo1#

Hi! We've received your issue; please be patient while we arrange for a technician to respond as soon as possible. Please double-check that you have provided a clear problem description, reproduction code, environment and version info, and the error message. You can also look for answers in the official API docs, the FAQ, historical issues, and the AI community. Have a nice day!


ui7jx7zq3#

Could you reply tonight? This is fairly urgent for me.


t9eec4r04#

Please try upgrading to paddlenlp 2.1; 2.0 may have a bug.


vdgimpew5#

If I upgrade paddlenlp to 2.1, do I need to retrain the model I trained before?


ugmeyewa6#

We recommend retraining it, thanks.


q5lcpyga7#

Hi, after updating paddlenlp to 2.1.0 and retraining the model, local inference works, but once deployed to Docker the same error comes back. I now suspect paddlenlp needs some companion package to be installed, yet apart from Paddle the official site lists nothing, and the error only ever appears after deploying to Docker.
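A quick way to rule out the missing-package theory (a generic check, not something from this thread): run the interpreter inside the container and confirm it sees the same versions the model was trained with.

# Run inside the Docker container.
import paddle
import paddlenlp

print('paddle:', paddle.__version__)        # expecting 2.1.1 per this thread
print('paddlenlp:', paddlenlp.__version__)  # expecting 2.1.0 after the upgrade
paddle.utils.run_check()                    # checks the PaddlePaddle installation itself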


vngu2lb88#

Is there any successful example of deploying a paddlenlp NER model with Flask?
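For reference, the shape such a deployment usually takes (a sketch only, built on the helpers from the original post such as load_dataset, convert_example and batchify_fn; the /ner route and the text parameter are made up for illustration): create the heavy objects once at worker start-up and build only the DataLoader per request.

from flask import Flask, request, jsonify

app = Flask(__name__)
# model, word_vocab, batchify_fn, ... are created once at import time,
# following the initialization code earlier in this thread.

@app.route('/ner')
def ner_predict():
    text = request.args.get('text', '')
    test_ds = load_dataset(text)              # asker's helper
    test_ds.map(convert_example)              # asker's helper
    loader = paddle.io.DataLoader(dataset=test_ds, batch_size=1,
                                  return_list=True, collate_fn=batchify_fn)
    outputs, lens, decodes = model.predict(test_data=loader)
    return jsonify({'decodes': str(decodes)})  # serialization kept deliberately simple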


2uluyalo9#

Hi, what is your operating system environment, and how do you start the Flask service?


oyjwcjzk10#

Configuration:
CPU inference.
macOS, Docker deployment, Python Flask service.
Paddle version: 2.1.1. PaddleNLP version: 2.1.0. Python: 3.6.
I deploy to Docker from my Mac and then start Flask inside it. The service contains both OCR and NER models: OCR uses Baidu's released model directly, and NER is written with GRU+CRF. Inference on the server runs fine end to end; the error only occurs at the NER predict step after starting Docker, as described in the log above.
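One thing visible in the log above: the 200 responses for "?__debugger__=yes&cmd=resource&..." mean werkzeug's debug middleware is active, i.e. the app is running in debug mode. Whether or not that is the cause here, serving without the debugger/reloader is an easy variable to eliminate. A sketch, assuming the app is started with app.run inside the container; host and port are placeholders:

if __name__ == '__main__':
    # use_reloader / use_debugger are forwarded to werkzeug's run_simple()
    app.run(host='0.0.0.0', port=5000, debug=False,
            use_reloader=False, use_debugger=False)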


bvjxkvbb11#

This problem is solved now. One more question: why is it sometimes especially slow when I redeploy paddlenlp==2.1.0 with Docker? I'm already using the Tsinghua mirror; is there any other way to speed it up?


mf98qq9412#

You may still need to clarify the conditions for reproducing this.


umuewwlo13#

@MaDFolking By "especially slow", do you mean much slower than a single-machine (non-Docker) deployment?
