ncnn Error: find_blob_index_by_name input.1 failed, even though input/output names are set properly.

iyr7buue · asked 4 months ago

Hi,

I converted ONNX models to ncnn, but inference fails with

find_blob_index_by_name input.1 failed

even though I set the input/output names properly.

Models are at:

https://drive.google.com/drive/folders/1a5S_3mS_zm6vJa0-chyOF1kuUJD9KZJU?usp=sharing

Or does ncnn not support this type of ONNX model?

Best

nwsw7zdq #1

The onnx2ncnn tool sometimes doesn't work correctly (likely due to a mistake in how it was compiled on our side). Consider using the website convertmodel.com from @daquexian; it saved my day. Also check whether net.load_param or net.load_bin returns -1. If it returns -1, your ncnn model is corrupted.
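As a quick way to spot a broken conversion before loading anything into ncnn: the text `.param` file that onnx2ncnn emits starts with the magic number 7767517 on its first line, so a corrupted or mis-converted file can often be caught with a plain-text check. A minimal sketch (the helper name is mine, not part of the ncnn API):

```python
# Sanity-check an ncnn text .param file without loading it into ncnn.
# The text .param format begins with the magic number 7767517 on its own
# line; anything else usually means a failed or truncated conversion.
# (Hypothetical helper for illustration, not part of the ncnn API.)

def looks_like_ncnn_param(text: str) -> bool:
    """Return True if the text starts with the ncnn .param magic number."""
    if not text.strip():
        return False
    first_line = text.strip().splitlines()[0].strip()
    return first_line == "7767517"

good = "7767517\n1 1\nInput image 0 1 image\n"
bad = "ir_version: 7\ngraph { ... }\n"  # e.g. an ONNX text dump by mistake

print(looks_like_ncnn_param(good))  # True
print(looks_like_ncnn_param(bad))   # False
```

If this check passes but net.load_param still returns -1, the file is more likely truncated further down or mismatched with its .bin file.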

7jmck4yq #2

Is there any accuracy loss when converting from ONNX to the ncnn format, and what is the speed difference? Have you measured it?


vs3odd8k #3

Forgot to mention: not all models will have the input layer named "input.1". In my case with PicoDet it was named "image". Checking the model in Netron can help identify the input and output layers.
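Besides Netron, the input blob names can be read straight from the text `.param` file, since each layer line has the shape `LayerType LayerName num_inputs num_outputs <input blobs> <output blobs> [k=v params]`, and the blobs produced by "Input" layers are the network inputs. A minimal sketch (the sample param text is made up for illustration; a real file comes from onnx2ncnn):

```python
# List the input blob names of an ncnn model by parsing its text .param file.
# Line 1 is the magic number, line 2 is "layer_count blob_count", and every
# remaining line describes one layer. The output blobs of "Input" layers are
# the names that the extractor expects when feeding data in.

def input_blob_names(param_text: str) -> list[str]:
    lines = [l for l in param_text.strip().splitlines() if l.strip()]
    names = []
    for line in lines[2:]:  # skip magic number and "layer_count blob_count"
        tokens = line.split()
        layer_type = tokens[0]
        n_in, n_out = int(tokens[2]), int(tokens[3])
        if layer_type == "Input":
            names.extend(tokens[4 + n_in : 4 + n_in + n_out])
    return names

# Made-up two-layer example in the .param text format:
sample = """7767517
2 2
Input            image    0 1 image
InnerProduct     fc       1 1 image out0 0=10
"""
print(input_blob_names(sample))  # ['image']
```

This is handy when the error message names a blob (like "input.1") that simply does not exist in the converted model.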

I have not benchmarked much. Recently I was converting my Paddle model to ONNX and then to ncnn, and I ran the following on an x86_64 computer:

Raw Paddle library inference
class_id:0, confidence:0.8428, left_top:[273.07,250.13],right_bottom:[488.80,448.23]

Paddle-to-ONNX-to-ncnn converted model, inference in ncnn Python
class_id:0, confidence:0.8428, left_top:[273.07,250.13],right_bottom:[488.80,448.23]

You can see that there is no change in the output of the model after conversion. However, if you performed 16-bit or 8-bit quantization during the conversion process, you can expect slightly different outputs.

In terms of speed:

Raw Paddle inference, single core
preprocess_time(ms): 32.40, inference_time(ms): 216.30, postprocess_time(ms): 5.70

ncnn inference, single core
preprocess_time(ms): 31.20, inference_time(ms): 157.6, postprocess_time(ms): 5.60

Roughly a 27% speed improvement. Of course, bigger improvements should be visible on ARMv8; I will update when I test on a Raspberry Pi 4.
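For reference, the ~27% figure follows directly from the two inference times quoted above:

```python
# Relative speed improvement of ncnn over raw Paddle inference,
# using the single-core inference times reported above.
paddle_ms = 216.30
ncnn_ms = 157.6

improvement_pct = (paddle_ms - ncnn_ms) / paddle_ms * 100
print(f"{improvement_pct:.1f}%")  # 27.1%
```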

wvyml7n5 #4

If you also add an ONNX benchmark, it would show the full picture. ncnn looks better in terms of speed and memory consumption.
