Question about the Paddle.inference APIs

jgwigjjp · posted 2 months ago in: Other

Please ask your question

Regarding the paddle.inference.Predictor.get_input_names() API, which returns the names of the model's input tensors: where exactly do these names come from?
I am using PP-Human v2 and want to replace its object-detection model with YOLOv8 (the official YOLOv8 model, exported and converted to Paddle format). The names returned by get_input_names() do not match those of the model shipped with PP-Human v2, so the rest of the pipeline cannot run. How should this be fixed?
Here is the error raised when the YOLOv8 model is called:

```
Traceback (most recent call last):
  File "deploy/pipeline/pipeline.py", line 1108, in <module>
    main()
  File "deploy/pipeline/pipeline.py", line 1095, in main
    pipeline.run_multithreads()
  File "deploy/pipeline/pipeline.py", line 172, in run_multithreads
    self.predictor.run(self.input)
  File "deploy/pipeline/pipeline.py", line 490, in run
    self.predict_video(input, thread_idx=thread_idx)
  File "deploy/pipeline/pipeline.py", line 674, in predict_video
    reuse_det_result=reuse_det_result)
  File "/workdir/Paddle/PaddleDetection-2.5_test/deploy/pptracking/python/mot_sde_infer.py", line 478, in predict_image
    inputs = self.preprocess(batch_image_list)
  File "/workdir/Paddle/PaddleDetection-2.5_test/deploy/pptracking/python/det_infer.py", line 144, in preprocess
    input_tensor.copy_from_cpu(inputs[input_names[i]])
KeyError: 'x0'
```
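The KeyError itself is easy to reproduce outside Paddle: the preprocess loop in det_infer.py builds a feed dict keyed by the names PaddleDetection's pipeline produces, then looks tensors up by whatever names the predictor reports. A minimal sketch of that mismatch (the names "image", "scale_factor", and "x0" here just mirror this thread, they are not a fixed API):

```python
import numpy as np

def feed_predictor(inputs, input_names):
    """Mimics det_infer.py's preprocess loop: fetch each tensor the
    predictor asks for by name; a name mismatch raises KeyError."""
    return [inputs[name] for name in input_names]

# Names produced by PaddleDetection's preprocessing.
inputs = {
    "image": np.zeros((1, 3, 640, 640), dtype=np.float32),
    "scale_factor": np.ones((1, 2), dtype=np.float32),
}

# A PaddleDetection-exported detector reports matching names, so this works:
feed_predictor(inputs, ["image", "scale_factor"])

# The YOLOv8-exported model reports "x0" instead, so the lookup fails:
try:
    feed_predictor(inputs, ["x0"])
except KeyError as e:
    print("missing input:", e)  # missing input: 'x0'
```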

ibrsph3r #1

What do you mean by "the pipeline cannot run"? Where exactly does it fail?

8nuwlpux #2

> What do you mean by "the pipeline cannot run"? Where exactly does it fail?

Solved: the inference model has to be converted with Paddle's own tools. The inputs/outputs of the model exported by the official YOLOv8 tooling are not right.
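For reference, one plausible conversion route matching the answer above is to export to ONNX first and then convert with Paddle's own converter, X2Paddle. All filenames below are placeholders, and the exact CLI flags assume the current ultralytics and X2Paddle releases:

```shell
# 1. Hypothetical: export YOLOv8 to ONNX with ultralytics, instead of
#    exporting straight to Paddle format.
yolo export model=yolov8n.pt format=onnx opset=12

# 2. Convert the ONNX model with Paddle's converter (X2Paddle), which
#    produces a Paddle inference model (model.pdmodel / model.pdiparams).
pip install x2paddle
x2paddle --framework=onnx --model=yolov8n.onnx --save_dir=pd_model
```

After conversion, check the resulting input names again with get_input_names() before wiring the model into the PP-Human pipeline.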

5anewei6 #3

Calling a YOLOv8 model inside PP-Human v2 now works, but with run_mode=trt_fp16 the following error appears:
```
E0605 03:15:22.077699 48 helper.h:111] elementwise (Output: tmp_711311): elementwise inputs must have same dimensions or follow broadcast rules (input dimensions were [8400,2] and [2,1]).
[the same elementwise message is repeated six more times]
E0605 03:15:22.077869 48 helper.h:111] Parameter check failed at: ../builder/Layers.h::setAxis::381, condition: axis >= 0 && axis < Dims::MAX_DIMS
E0605 03:15:22.079092 48 helper.h:111] Could not compute dimensions for tmp_711311, because the network is not valid.
E0605 03:15:22.079151 48 helper.h:111] Network validation failed.
```
```
Traceback (most recent call last):
  File "deploy/pipeline/pipeline.py", line 1108, in <module>
    main()
  File "deploy/pipeline/pipeline.py", line 1093, in main
    pipeline = Pipeline(FLAGS, cfg)
  File "deploy/pipeline/pipeline.py", line 89, in __init__
    self.predictor = PipePredictor(args, cfg, self.is_video)
  File "deploy/pipeline/pipeline.py", line 467, in __init__
    region_polygon=self.region_polygon)
  File "/workdir/Paddle/PaddleDetection-2.5_test/deploy/pptracking/python/mot_sde_infer.py", line 119, in __init__
    threshold=threshold, )
  File "/workdir/Paddle/PaddleDetection-2.5_test/deploy/pptracking/python/det_infer.py", line 113, in __init__
    enable_mkldnn=enable_mkldnn)
  File "/workdir/Paddle/PaddleDetection-2.5_test/deploy/pptracking/python/det_infer.py", line 486, in load_predictor
    predictor = create_predictor(config)
SystemError:
```

```
C++ Traceback (most recent call last):

0  paddle_infer::Predictor::Predictor(paddle::AnalysisConfig const&)
1  std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
2  paddle::AnalysisPredictor::Init(std::shared_ptr<paddle::framework::Scope> const&, std::shared_ptr<paddle::framework::ProgramDesc> const&)
3  paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr<paddle::framework::ProgramDesc> const&)
4  paddle::AnalysisPredictor::OptimizeInferenceProgram()
5  paddle::inference::analysis::Analyzer::RunAnalysis(paddle::inference::analysis::Argument*)
6  paddle::inference::analysis::IrAnalysisPass::RunImpl(paddle::inference::analysis::Argument*)
7  paddle::inference::analysis::IRPassManager::Apply(std::unique_ptr<paddle::framework::ir::Graph, std::default_delete<paddle::framework::ir::Graph> >)
8  paddle::framework::ir::Pass::Apply(paddle::framework::ir::Graph*) const
9  paddle::inference::analysis::TensorRtSubgraphPass::ApplyImpl(paddle::framework::ir::Graph*) const
10 paddle::inference::analysis::TensorRtSubgraphPass::CreateTensorRTOp(paddle::framework::ir::Node*, paddle::framework::ir::Graph*, std::vector<std::string, std::allocator<std::string> > const&, std::vector<std::string, std::allocator<std::string> >) const
11 paddle::inference::tensorrt::OpConverter::ConvertBlockToTRTEngine(paddle::framework::BlockDesc*, paddle::framework::Scope const&, std::vector<std::string, std::allocator<std::string> > const&, std::unordered_set<std::string, std::hash<std::string>, std::equal_to<std::string>, std::allocator<std::string> > const&, std::vector<std::string, std::allocator<std::string> > const&, paddle::inference::tensorrt::TensorRTEngine*)
12 paddle::inference::tensorrt::TensorRTEngine::FreezeNetwork()
13 phi::enforce::EnforceNotMet::EnforceNotMet(phi::ErrorSummary const&, char const*, int)
14 phi::enforce::GetCurrentTraceBackString[abi:cxx11]

Error Message Summary:

FatalError: Build TensorRT cuda engine failed! Please recheck you configurations related to paddle-TensorRT.
  [Hint: infer_engine_ should not be null.] (at /home/paddle/data/xly/workspace/23278/Paddle/paddle/fluid/inference/tensorrt/engine.cc:296)
```

Inference with plain Paddle works fine; only Paddle+TRT produces this size error. Is this caused by an unsupported operator, or is it something that can be fixed?
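The shapes in the TensorRT log really do violate broadcasting rules, which can be checked independently of Paddle with NumPy (TRT's elementwise layer follows the same alignment logic): aligning [8400, 2] against [2, 1] from the trailing dimension gives 2 vs 1 (broadcastable) but then 8400 vs 2 (not broadcastable), so somewhere in the exported graph the two operands are laid out incompatibly.

```python
import numpy as np

a = np.zeros((8400, 2))  # shape from the log
b = np.zeros((2, 1))     # shape from the log

# Trailing dims: 2 vs 1 -> OK (1 broadcasts); next: 8400 vs 2 -> mismatch.
try:
    a + b
except ValueError as e:
    print(e)

# Reshaping the second operand to (1, 2) makes the pair broadcastable:
print((a + b.reshape(1, 2)).shape)  # (8400, 2)
```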

jum4pzuy #4

Check the shape: is the model exported with a fixed shape or a dynamic shape?

vmpqdwk3 #5

> Check the shape: is the model exported with a fixed shape or a dynamic shape?

As far as I can tell, dynamic shape is not enabled anywhere in the code, and inspecting both models with Netron shows no real difference between the input/output shapes of PP-YOLOE and YOLOv8.
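If dynamic shape does turn out to be the issue, Paddle Inference lets you declare explicit shape ranges when the TensorRT engine is enabled. A sketch, not a verified fix for this thread: the model paths are placeholders and the input name "x0" is just the name reported earlier in this thread.

```python
import paddle.inference as paddle_infer

# Placeholder model files.
config = paddle_infer.Config("model.pdmodel", "model.pdiparams")
config.enable_use_gpu(200, 0)
config.enable_tensorrt_engine(
    workspace_size=1 << 30,
    max_batch_size=1,
    min_subgraph_size=3,
    precision_mode=paddle_infer.PrecisionType.Half,  # trt_fp16
    use_static=False,
    use_calib_mode=False)

# Declare min/max/optimal shapes per input so TRT builds a dynamic profile.
config.set_trt_dynamic_shape_info(
    {"x0": [1, 3, 320, 320]},    # min shape
    {"x0": [1, 3, 1280, 1280]},  # max shape
    {"x0": [1, 3, 640, 640]})    # optimal shape

predictor = paddle_infer.create_predictor(config)
```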

inb24sb2 #6

> Check the shape: is the model exported with a fixed shape or a dynamic shape?

Why does model initialization take five or six minutes in paddle+TRT FP16 mode?

pkln4tw6 #7

> > Check the shape: is the model exported with a fixed shape or a dynamic shape?
>
> Why does model initialization take five or six minutes in paddle+TRT FP16 mode?

Same problem here; the engine build takes far too long. Do you know of a way to save the compiled engine so it can be loaded directly next time?
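Paddle Inference can cache the built engine so later startups skip the rebuild: passing use_static=True to enable_tensorrt_engine serializes the TRT engine, and set_optim_cache_dir chooses where optimization artifacts are written. A sketch with placeholder paths, not a tested configuration for this exact model:

```python
import paddle.inference as paddle_infer

# Placeholder model files.
config = paddle_infer.Config("model.pdmodel", "model.pdiparams")
config.enable_use_gpu(200, 0)
config.enable_tensorrt_engine(
    workspace_size=1 << 30,
    max_batch_size=1,
    min_subgraph_size=3,
    precision_mode=paddle_infer.PrecisionType.Half,
    use_static=True,   # serialize the built TRT engine for reuse
    use_calib_mode=False)
config.set_optim_cache_dir("./trt_cache")  # reloaded on the next startup

predictor = paddle_infer.create_predictor(config)
```

The first run still pays the full build time; subsequent runs load the serialized engine from the cache directory.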

5anewei6 #8

> > Why does model initialization take five or six minutes in paddle+TRT FP16 mode?
>
> Same problem here; the engine build takes far too long. Do you know of a way to save the compiled engine so it can be loaded directly next time?

I gave up and switched everything over to native TensorRT models instead.
