onnx2ncnn conversion output gives an error when running inference.

Error solved: add a tf.keras.layers.Permute((3, 1, 2)) layer to convert the input from NHWC to NCHW. This fixes the segmentation fault, because onnx2ncnn is aimed at models converted from PyTorch, which uses NCHW.
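
For reference, here is a minimal Keras sketch of where such a Permute layer could sit. The input shape and everything after the Permute are illustrative assumptions, not the actual model from this issue.

import tensorflow as tf

# Sketch only: an MNIST-style model whose first layer permutes the
# incoming tensor as described above. The layers after the Permute are
# illustrative placeholders.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    # Permute uses 1-based indices and excludes the batch dimension:
    # (H, W, C) -> (C, H, W)
    tf.keras.layers.Permute((3, 1, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])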

I haven't solved the integer-output problem yet: the predictions are correct, but the output values are integers rather than probabilities.

error log

context

I trained a convolutional model on MNIST, saved it as ONNX, and simplified it.
onnx2ncnn reported no errors, but when I run inference through c_api.h it crashes with "segmentation fault".
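
For reference, a sketch of that export path using the tf2onnx and onnx-simplifier Python APIs; the file names, input name, and input shape are placeholder assumptions, not the exact ones from this issue.

import tensorflow as tf
import tf2onnx
import onnx
from onnxsim import simplify

# Assumes `model` is the trained Keras model; names and shapes are placeholders.
spec = (tf.TensorSpec((1, 28, 28, 1), tf.float32, name="input_layer_input"),)
model_proto, _ = tf2onnx.convert.from_keras(
    model, input_signature=spec, opset=12, output_path="model.onnx")

# Simplify the exported graph before handing it to onnx2ncnn.
model_simp, ok = simplify(onnx.load("model.onnx"))
assert ok, "onnx-simplifier could not validate the simplified model"
onnx.save(model_simp, "model_sim.onnx")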

It does not happen with the master branch; it happens with tags 20211208, 20211122 and 20201218, which I was able to try.

When I try the master branch, the result is correct but comes out as integers rather than output probabilities, even though I use a softmax activation at the end of the network.

how to reproduce

  1. Ubuntu 20.04 (VirtualBox)
  2. Python 3.7, TensorFlow 2.7, tf2onnx, opset 12
  3. ncnn 20211208

more


5lhxktic1#

I don't know about the segmentation fault bug; I would need to see how you converted the model. But I can help with the output bug. You can inspect the ncnn output data with the pretty_print function, or check your ncnn-to-OpenCV data type conversion: if you use the wrong conversion you will get UINT8 data.
Use a float conversion instead, i.e. convert a 1-channel ncnn::Mat to a cv::Mat of type CV_32FC1.
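
To illustrate that data-type point outside of OpenCV, here is a toy Python/NumPy sketch (not the ncnn or OpenCV API) showing how the same 32-bit float bytes look when read back with the wrong element type:

import numpy as np

# Toy illustration: ncnn output blobs hold 32-bit floats, so reading the
# raw bytes back as uint8 produces meaningless small integers instead of
# probabilities.
raw = np.float32(0.73).tobytes()                 # pretend these bytes came from the output blob
as_float = np.frombuffer(raw, dtype=np.float32)  # -> [0.73]
as_uint8 = np.frombuffer(raw, dtype=np.uint8)    # -> four unrelated integers
print(as_float, as_uint8)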


bz4sfanl2#

#include <stdio.h>
#include <stdlib.h>

#include "lib/image_lib.h"
#include "lib/model_lib.h"

int main()
{
  /* Variables */
  unsigned int i;
  unsigned int error;
  
  ncnn_net_t model_instance;
  ncnn_extractor_t ex;

  ncnn_mat_t image_in; 
  ncnn_mat_t out;
  unsigned int out_w;
  float* out_data;
  
  /* Read Image */
  image_raw image;
  error=imagelib_load_raw("data/image.png", &image);
  if(error) 
  {
    return error;
  }
  
  image_in = modellib_convertimage(&image);
  imagelib_free_raw(&image);
  
  /* Load NCNN model */

  model_instance = modellib_load("model/model_output");
  ex = ncnn_extractor_create(model_instance);

  /* Run NCNN model */
  ncnn_extractor_input(ex, "input_layer_input", image_in);
  printf("%d\n", 1); /* debug marker: input bound */
  ncnn_extractor_extract(ex, "output_layer", &out);
  printf("%d\n", 1); /* debug marker: extraction done */
  out_w = ncnn_mat_get_w(out);
  out_data = (float*)ncnn_mat_get_data(out);

  for (i = 0; i < out_w; i++)
  {
    printf("%f \n",out_data[i]);
  }

  /* Destroy residue */
  ncnn_mat_destroy(image_in);
  ncnn_mat_destroy(out);
  ncnn_extractor_destroy(ex);
  ncnn_net_destroy(model_instance);

  return 0;
}

Some of the libraries are just thin wrapper functions, like:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include "model_lib.h"

/*
ncnn_net_t modellib_load(char* model_path):
Load model from model_path, and return ncnn_net_t
*/

ncnn_net_t modellib_load(char* model_path)
{
  /* Variables */
  char* param_file;
  char* model_file;
  unsigned int N;
  
  ncnn_net_t model_instance;
  ncnn_option_t opt;
  
  /* Model and parameter file paths */
  N = strlen(model_path);

  param_file = (char*)malloc(N + 6 + 1); /* room for ".param" and the terminator */
  strcpy(param_file, model_path);
  strcat(param_file, ".param");

  model_file = (char*)malloc(N + 4 + 1); /* room for ".bin" and the terminator */
  strcpy(model_file, model_path);
  strcat(model_file, ".bin");
  
  /* Load Model */
  model_instance = ncnn_net_create();

  opt = ncnn_option_create();
  ncnn_option_set_use_vulkan_compute(opt, 0);
  ncnn_net_set_option(model_instance, opt);

  ncnn_net_load_param(model_instance, param_file);
  ncnn_net_load_model(model_instance, model_file);
  
  /* Destroy Residue */
  ncnn_option_destroy(opt);
  free(param_file);
  free(model_file);
  
  return model_instance;
}

/*
ncnn_mat_t modellib_convertimage(image_raw* image_in):
Convert image from image_raw to ncnn_mat_t
*/

ncnn_mat_t modellib_convertimage(image_raw* image_in)
{
  ncnn_mat_t image_out = ncnn_mat_from_pixels(image_in->image, NCNN_MAT_PIXEL_RGB, image_in->width, image_in->height, image_in->width*3);

  /* ncnn_mat_substract_mean_normalize() multiplies by norm_vals, so use
     1/255.f (not 255.f or 256.f) to map 0-255 pixel values into 0-1. */
  const float mean_vals[3] = {0.f, 0.f, 0.f};
  const float norm_vals[3] = {1 / 255.f, 1 / 255.f, 1 / 255.f};
  ncnn_mat_substract_mean_normalize(image_out, mean_vals, norm_vals);
  
  return image_out;
}

Image reading etc. works properly, because the same code works when I build against the master branch. I am using lodepng to read the image.


az31mfrm3#

What is the ncnn_mat_get_data function? How do you convert the output data?


kzipqqlq5#

Can you print the output data with the pretty_print function right after the ncnn_extractor_extract(ex, "output_layer", &out) line?


bxjv4tth6#

I'm using C, not C++; that function reads the properties of a C++ ncnn::Mat object, so I can't use it through the C API.


lvjbypge7#

Then I can only suggest that you follow the ncnn wiki and use C++.


5cg8jx4n8#

The "segmentation fault" error is solved: add a tf.keras.layers.Permute((3, 1, 2)) layer to convert the input from NHWC to NCHW. This fixes the segmentation fault, because onnx2ncnn is aimed at models converted from PyTorch, which uses NCHW.


jfewjypa9#

How do I do this conversion to a Mat correctly in Python?
