Error building the Paddle inference library, and error when enabling TensorRT with the official prebuilt inference library

clj7thdc · posted 2022-11-19 in Other
  • Version / environment info:
       1) PaddlePaddle version: 2.1
       2) CPU: 8700K
       3) GPU: 3060, CUDA 11.0, cuDNN 8.1.0.77, TensorRT 7.2.3.4
       4) System environment: Win10, Python 3.8.8, CMake 3.20.1, VS2019
    - Inference info
       1) C++ inference: inference library package version info, from its version.txt file:
    GIT COMMIT ID: 1e62c23
    WITH_MKL: ON
    WITH_MKLDNN: ON
    WITH_GPU: ON
    WITH_ROCM: OFF
    CUDA version: 11.0
    CUDNN version: v8.0
    CXX compiler version: 19.16.27045.0
    WITH_TENSORRT: ON
    TensorRT version: v7

 4) Inference library source: downloaded from the official site

After building the PPOCR C++ GPU environment, running inference fails.

The error is as follows:

Error log from the self-compiled Paddle 2.1 inference library build:
Found Paddle host system: win32, version:
Found Paddle host system's CPU: 12 cores
Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.17763.
The CXX compiler identification is MSVC 19.29.30136.0
The C compiler identification is MSVC 19.29.30136.0
Detecting CXX compiler ABI info
Detecting CXX compiler ABI info - done
Check for working CXX compiler: D:/VS2019/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped
Detecting CXX compile features
Detecting CXX compile features - done
Detecting C compiler ABI info
Detecting C compiler ABI info - done
Check for working C compiler: D:/VS2019/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe - skipped
Detecting C compile features
Detecting C compile features - done
Looking for pthread.h
Looking for pthread.h - not found
Found Threads: TRUE
The CUDA compiler identification is NVIDIA 11.0.194
Detecting CUDA compiler ABI info
Detecting CUDA compiler ABI info - done
Check for working CUDA compiler: C:/PPOCR/CUDA/V110/bin/nvcc.exe - skipped
Detecting CUDA compile features
Detecting CUDA compile features - done
CUDA compiler: C:/PPOCR/CUDA/V110/bin/nvcc.exe, version: NVIDIA 11.0.194
CXX compiler: D:/VS2019/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe, version: MSVC 19.29.30136.0
C compiler: D:/VS2019/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/cl.exe, version: MSVC 19.29.30136.0
AR tools: D:/VS2019/VC/Tools/MSVC/14.29.30133/bin/Hostx64/x64/lib.exe
Use static C runtime time, refer to https://docs.microsoft.com/en-us/cpp/c-runtime-library/crt-library-features?view=vs-2019
Found Git: D:/MyWork/Git/cmd/git.exe (found version "2.20.1.windows.1")
Performing Test MMX_FOUND
Performing Test MMX_FOUND - Failed
Performing Test SSE2_FOUND
Performing Test SSE2_FOUND - Success
Performing Test SSE3_FOUND
Performing Test SSE3_FOUND - Success
Performing Test AVX_FOUND
Performing Test AVX_FOUND - Success
Performing Test AVX2_FOUND
Performing Test AVX2_FOUND - Success
Performing Test AVX512F_FOUND
Performing Test AVX512F_FOUND - Failed
CMake Warning at CMakeLists.txt:237 (MESSAGE):
Disable NCCL when compiling for Windows. Force WITH_NCCL=OFF.

CMake Warning at CMakeLists.txt:263 (MESSAGE):
If the environment is multi-card, the WITH_NCCL option needs to be turned
on, otherwise only a single card can be used.

CUDA detected: 11.0.194
WARNING: This is just a warning for publishing release.
You are building GPU version without supporting different architectures.
So the wheel package may fail on other GPU architectures.
You can add -DCUDA_ARCH_NAME=All in cmake command
to get a full wheel package to resolve this warning.
While, this version will still work on local GPU architecture.
Automatic GPU detection failed. Building for all known architectures.
NVCC_FLAGS_EXTRA: -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_80,code=sm_80
Current cuDNN header is C:/PPOCR/CUDA/V110/include/cudnn_version.h Current cuDNN version is v8.1.0.
CMake Warning at CMakeLists.txt:292 (MESSAGE):
Disable RCCL when compiling without ROCM. Force WITH_RCCL=OFF.

Disable XBYAK in Windows and MacOS
Looking for C++ include shlwapi.h
Looking for C++ include shlwapi.h - found
BOOST_VERSION: 1.41.0, BOOST_URL: http://paddlepaddledeps.bj.bcebos.com/boost_1_41_0.tar.gz
warp-ctc library: E:/Paddle/build/third_party/install/warpctc/bin/warpctc.dll
MKLML_VER: mklml_win_2019.0.5.20190502, MKLML_URL: https://paddlepaddledeps.bj.bcebos.com/mklml_win_2019.0.5.20190502.zip
Found cblas and lapack in MKLML (include: E:/Paddle/build/third_party/install/mklml/include, library: E:/Paddle/build/third_party/install/mklml/lib/mklml.lib)
Set E:/Paddle/build/third_party/install/mkldnn/lib to runtime path
MKLDNN library: E:/Paddle/build/third_party/install/mkldnn/bin/mkldnn.lib
Protobuf protoc executable: E:/Paddle/build/third_party/install/protobuf/bin/protoc.exe
Protobuf-lite library: E:/Paddle/build/third_party/install/protobuf/lib/libprotobuf-lite.lib
Protobuf library: E:/Paddle/build/third_party/install/protobuf/lib/libprotobuf.lib
Protoc library: E:/Paddle/build/third_party/install/protobuf/lib/libprotoc.lib
Protobuf version: 3.1.0
Found PythonInterp: C:/Anaconda3/python.exe (found suitable version "3.8.8", minimum required is "3.6")
Found PythonLibs: C:/Anaconda3/libs/python38.lib (found suitable version "3.8.8", minimum required is "3.6")
Found PY_pip: C:\Anaconda3\lib\site-packages\pip\__init__.py
Found PY_numpy: C:\Anaconda3\lib\site-packages\numpy\__init__.py
Found PY_wheel: C:\Anaconda3\lib\site-packages\wheel\__init__.py
Found PY_google.protobuf: C:\Anaconda3\lib\site-packages\google\protobuf\__init__.py
CMake Deprecation Warning at cmake/FindNumPy.cmake:6 (cmake_minimum_required):
Compatibility with CMake < 2.8.12 will be removed from a future version of
CMake.

Update the VERSION argument value or use a ... suffix to tell
CMake that the project does not need compatibility with older versions.
Call Stack (most recent call first):
cmake/external/python.cmake:72 (FIND_PACKAGE)
cmake/third_party.cmake:230 (include)
CMakeLists.txt:307 (include)

Found NumPy: C:/Anaconda3/Lib/site-packages/numpy/core/include
Download dependence[cudaerror] from http://paddlepaddledeps.bj.bcebos.com/cudaErrorMessage.tar.gz
Looking for UINT64_MAX
Looking for UINT64_MAX - found
Looking for sys/types.h
Looking for sys/types.h - found
Looking for stdint.h
Looking for stdint.h - found
Looking for stddef.h
Looking for stddef.h - found
Check size of pthread_spinlock_t
Check size of pthread_spinlock_t - failed
Check size of pthread_barrier_t
Check size of pthread_barrier_t - failed
Paddle version is 2.1.3
Found CUDA: C:/PPOCR/CUDA/V110 (found version "11.0")
Cannot find CUPTI, GPU Profiling is incorrect.
Enable Intel OpenMP with E:/Paddle/build/third_party/install/mklml/lib/libiomp5md.lib
CMake Warning at CMakeLists.txt:371 (message):
On inference mode, will take place some specific optimization. Turn on the
ON_INFER flag when building inference_lib only.

commit: 06d47ff
branch: HEAD
WITH_DLNNE:
MESSAGE: This is just a message for publishing release.
You are building AVX version without NOAVX core.
So the wheel package may fail on NOAVX machine.
You can add -DNOAVX_CORE_FILE=/path/to/your/core_noavx.* in cmake command
to get a full wheel package to resolve this warning.
While, this version will still work on local machine.
Configuring done

hiz5n14c 1#

Hi! We've received your issue; please be patient while we arrange technicians to answer it as soon as possible. Please double-check that you have provided a clear problem description, reproduction code, environment & version info, and error messages. You can also look for an answer in the official API docs, the FAQ, historical GitHub issues, and the AI community. Have a nice day!

goucqfw6 2#

TensorRT dynamic shape is not enabled. See https://paddleinference.paddlepaddle.org.cn/api_reference/python_api_doc/Config/GPUConfig.html#tensorrt, or use the official OCR example: https://github.com/PaddlePaddle/PaddleOCR/tree/release/2.3/deploy/cpp_infer.
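A minimal sketch of the configuration this answer refers to, using the Paddle Inference C++ API. The model file names, memory sizes, shape values, and the input tensor name "x" are illustrative assumptions; the real input name and shape ranges depend on the exported model:

```cpp
#include <map>
#include <string>
#include <vector>
#include "paddle_inference_api.h"  // Paddle Inference C++ API

// Build a GPU config that enables the TensorRT subgraph engine
// and registers dynamic-shape info for variable-sized inputs.
paddle_infer::Config MakeTrtConfig(const std::string& model_dir) {
  paddle_infer::Config config;
  config.SetModel(model_dir + "/inference.pdmodel",
                  model_dir + "/inference.pdiparams");
  config.EnableUseGpu(500 /*initial GPU memory, MB*/, 0 /*device id*/);
  // Arguments: workspace size, max batch size, min subgraph size,
  // precision, use_static, use_calib_mode.
  config.EnableTensorRtEngine(1 << 30, 1, 3,
                              paddle_infer::Config::Precision::kFloat32,
                              false, false);
  // For inputs whose shape varies at runtime (typical for OCR detection),
  // TensorRT needs min/max/optimal shapes per dynamic input.
  std::map<std::string, std::vector<int>> min_shape{{"x", {1, 3, 64, 64}}};
  std::map<std::string, std::vector<int>> max_shape{{"x", {1, 3, 1280, 1280}}};
  std::map<std::string, std::vector<int>> opt_shape{{"x", {1, 3, 640, 640}}};
  config.SetTRTDynamicShapeInfo(min_shape, max_shape, opt_shape);
  return config;
}
```

Without SetTRTDynamicShapeInfo, TensorRT builds the engine for a fixed input shape, so feeding images of other sizes fails at runtime.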

e37o9pze 3#

TensorRT dynamic shape is not enabled. See https://paddleinference.paddlepaddle.org.cn/api_reference/python_api_doc/Config/GPUConfig.html#tensorrt, or use the official OCR example: https://github.com/PaddlePaddle/PaddleOCR/tree/release/2.3/deploy/cpp_infer.
Thanks, I'll try it!
Dynamic shape was in fact enabled. In the end I modified the code in ocr_det.cpp to add if (this->use_tensorrt_) config.SwitchIrOptim(false); — only after turning off this optimization does it run normally.
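In context, the workaround described above amounts to a fragment like the following inside the detector's model-loading code in PaddleOCR's deploy/cpp_infer/src/ocr_det.cpp (the surrounding setup is omitted; only the SwitchIrOptim line is the actual change reported here):

```cpp
// Sketch: inside the detector's LoadModel, after the TensorRT
// engine has been enabled on the paddle_infer::Config.
if (this->use_tensorrt_) {
  // Skip IR graph optimization passes when the TensorRT subgraph
  // engine is on; per this thread, this avoids the runtime error.
  config.SwitchIrOptim(false);
}
```

SwitchIrOptim(false) disables Paddle's IR optimization passes wholesale, so it can cost performance on subgraphs that TensorRT does not take over; it is a workaround rather than a root-cause fix.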
