Error when running Paddle model inference

cl25kdpy · posted on 2021-11-30 in Java

To get your question resolved quickly, before opening an issue please check whether a similar question already exists via: [search issue keywords] [filter with labels] [official documentation]

If you find no similar question, please provide the following details when opening the issue so it can be resolved quickly:

  • Title: describe your problem concisely and precisely, e.g. "Where is the API documentation for the latest inference library?"
  • Version and environment information:

   1) PaddlePaddle version: please provide your PaddlePaddle version number (e.g. 1.1) or CommitID
   2) CPU: if inference runs on CPU, please provide the CPU model and which math library (MKL/OpenBlas/MKLDNN/etc.) is used
   3) GPU: if inference runs on GPU, please provide the GPU model and the CUDA and CUDNN version numbers
   4) System environment: please describe the OS type and version (e.g. Mac OS 10.14) and the Python version
Note: you can obtain the information above by running summary_env.py.
  • Inference information:
   1) C++ inference: please provide the version of the inference library package and its version.txt file
   2) The full CMake command, including include paths
   3) API information (if APIs are called, please provide them)
   4) Source of the inference library: downloaded from the official site / special environment (e.g. built with BCLOUD)

  • Reproduction information: for an error, please give the environment and steps to reproduce it
  • Problem description: please describe your problem in detail and paste the error message, logs, and the key code snippets
Thank you for contributing to PaddlePaddle.

zaq34kh6 1#

1) PaddlePaddle==1.7.2, Python==3.6.5

2) Using the ultra-lightweight Chinese OCR models (MobileNetV3 detection and CRNN recognition networks).
3) During inference, a single call produces correct results, but when the model is called concurrently from multiple threads for online inference, the following error is raised (a minimal sketch of the call pattern is included after the traceback).
Error Message Summary:

Error: Alloc 256257408 error!
[Hint: Expected posix_memalign(&p, alignment, size) == 0, but received posix_memalign(&p, alignment, size):12 != 0:0.] at (/paddle/paddle/fluid/memory/detail/system_allocator.cc:59)
[operator < conv2d > error]
[2020-08-04 19:21:03 +0800] [62103] [WARNING] ######## Image download took 0.13369202613830566 seconds ########
[2020-08-04 19:21:04 +0800] [62103] [ERROR] Exception on /api/v1.0/predict [POST]
Traceback (most recent call last):
File "/opt/apps/ocr_venv/lib/python3.6/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/opt/apps/ocr_venv/lib/python3.6/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/opt/apps/ocr_venv/lib/python3.6/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/opt/apps/ocr_venv/lib/python3.6/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/opt/apps/ocr_venv/lib/python3.6/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/opt/apps/ocr_venv/lib/python3.6/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/opt/apps/risk-model-ocr/app.py", line 103, in predict
dt_boxes, rec_res = text_sys(image)
File "/opt/apps/risk-model-ocr/tools/predict_system.py", line 72, incall
dt_boxes, elapse = self.text_detector(img)
File "/opt/apps/risk-model-ocr/tools/predict_det.py", line 109, incall
self.predictor.zero_copy_run()
paddle.fluid.core_avx.EnforceNotMet:

C++ Call Stacks (More useful to developers):

0 std::string paddle::platform::GetTraceBackString<std::string const&>(std::string const&, char const*, int)
1 paddle::memory::detail::AlignedMalloc(unsigned long)
2 paddle::memory::detail::CPUAllocator::Alloc(unsigned long*, unsigned long)
3 paddle::memory::detail::BuddyAllocator::RefillPool(unsigned long)
4 paddle::memory::detail::BuddyAllocator::Alloc(unsigned long)
5 void* paddle::memory::legacy::Alloc<paddle::platform::CPUPlace>(paddle::platform::CPUPlace const&, unsigned long)
6 paddle::memory::allocation::NaiveBestFitAllocator::AllocateImpl(unsigned long)
7 paddle::memory::allocation::AllocatorFacade::Alloc(paddle::platform::Place const&, unsigned long)
8 paddle::memory::Alloc(paddle::platform::Place const&, unsigned long)
9 paddle::memory::Alloc(paddle::platform::DeviceContext const&, unsigned long)
10 paddle::framework::Tensor paddle::framework::ExecutionContext::AllocateTmpTensor<float, paddle::platform::CPUDeviceContext>(paddle::framework::DDim const&, paddle::platform::CPUDeviceContext const&) const
11 paddle::operators::GemmConvKernel<paddle::platform::CPUDeviceContext, float>::Compute(paddle::framework::ExecutionContext const&) const
12 std::_Function_handler<void (paddle::framework::ExecutionContext const&), paddle::framework::OpKernelRegistrarFunctor<paddle::platform::CPUPlace, false, 0ul, paddle::operators::GemmConvKernel<paddle::platform::CPUDeviceContext, float>, paddle::operators::GemmConvKernel<paddle::platform::CPUDeviceContext, double> >::operator()(char const*, char const*, int) const::{lambda(paddle::framework::ExecutionContext const&)#1}>::_M_invoke(std::_Any_data const&, paddle::framework::ExecutionContext const&)
13 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&, paddle::framework::RuntimeContext*) const
14 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&) const
15 paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, paddle::platform::Place const&)
16 paddle::framework::NaiveExecutor::Run()
17 paddle::AnalysisPredictor::ZeroCopyRun()
18 gevent_callback_io
19 ev_invoke_pending
20 ev_run
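
For reference, here is a minimal sketch of the concurrent call pattern described above. The model paths, input shape, and worker count are hypothetical placeholders, and the sketch assumes the Paddle 1.7 zero-copy Python API that appears in the traceback; it is not the actual service code.

# Minimal sketch of the concurrent call pattern (hypothetical paths, shape and
# worker count), assuming the Paddle 1.7 zero-copy API shown in the traceback.
from concurrent.futures import ThreadPoolExecutor

import numpy as np
from paddle.fluid.core import AnalysisConfig, create_paddle_predictor

config = AnalysisConfig("det_model/model", "det_model/params")  # hypothetical paths
config.disable_gpu()
config.switch_use_feed_fetch_ops(False)
predictor = create_paddle_predictor(config)  # single predictor shared by all workers

def infer(_):
    img = np.random.rand(1, 3, 960, 960).astype("float32")  # hypothetical det input
    tensor = predictor.get_input_tensor(predictor.get_input_names()[0])
    tensor.copy_from_cpu(img)
    predictor.zero_copy_run()  # concurrent calls on the one shared predictor
    out = predictor.get_output_tensor(predictor.get_output_names()[0])
    return out.copy_to_cpu().shape

# Mimic many simultaneous /api/v1.0/predict requests.
with ThreadPoolExecutor(max_workers=8) as pool:
    print(list(pool.map(infer, range(32))))

The point of the sketch is the single shared predictor plus parallel zero_copy_run() calls, which matches the failing pattern in the report.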

3j86kqsm 2#

Hi, thanks for the feedback. This looks like a CPU memory allocation problem. Could you provide a minimal reproducible environment so we can investigate on our side?
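
For context, posix_memalign returning 12 corresponds to ENOMEM on Linux, i.e. the process could not obtain another ~256 MB while several requests were inside the conv2d kernel at once. A hedged workaround sketch (the helper names and model paths are hypothetical, and it assumes the same Paddle 1.7 zero-copy API as above) is to stop sharing one predictor instance across worker threads:

# Hedged workaround sketch: build one predictor per worker thread instead of
# sharing a single instance. Paths and helper names are hypothetical.
import threading

from paddle.fluid.core import AnalysisConfig, create_paddle_predictor

_tls = threading.local()

def _build_config():
    config = AnalysisConfig("det_model/model", "det_model/params")  # hypothetical paths
    config.disable_gpu()
    config.switch_use_feed_fetch_ops(False)
    return config

def get_predictor():
    # Lazily create one predictor per thread so concurrent requests never call
    # zero_copy_run() on the same instance at the same time.
    if not hasattr(_tls, "predictor"):
        _tls.predictor = create_paddle_predictor(_build_config())
    return _tls.predictor

Each per-thread predictor holds its own copy of the weights, so with many workers the trade-off is higher total memory; a simpler but slower alternative is to guard the shared predictor's zero_copy_run() call with a threading.Lock.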

vcudknz3 3#

You could also pull the latest code and test again.
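
If you do upgrade, a small sketch to confirm which Paddle build the service environment actually imports before repeating the concurrent test (install_check ships with the 1.x releases):

# Print the version the OCR service's virtualenv resolves to and run the
# built-in sanity check before re-running the concurrent test.
import paddle
import paddle.fluid as fluid

print("paddle version:", paddle.__version__)
fluid.install_check.run_check()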
