Ids were not generated up to the last blob in the .id.h file produced by ncnn2mem

uqdfh47h  posted on 2022-10-22  in: Other

I used onnx2ncnn to create the .bin and .param files.
Then I used ncnn2mem to generate the .id.h file.
I expected ids to be generated for all of the output blobs of my model,
but BLOB_474 is the last blob in the .id.h file;
the ids were not generated up to the final blobs.
What can cause this problem?
I need to extract BLOB_466 and BLOB_479.

I used the following command.
$ ./ncnn2mem ./onnx/mymodel.param ./onnx/mymodel.bin mymodel.id.h mymodel.mem.h

"mymodel.id.h"

...
const int BLOB_466 = 247;
const int LAYER_468 = 217;
const int BLOB_468 = 248;
const int LAYER_470 = 218;
const int BLOB_470 = 249;
const int LAYER_472 = 219;
const int BLOB_472 = 250;
const int LAYER_474 = 220;
const int BLOB_474 = 251;
} // namespace fd_03_param_id
#endif // NCNN_INCLUDE_GUARD_mymodel_id_h
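For reference, this is roughly how I plan to consume the generated headers from C++ (a minimal sketch; the mem.h array identifiers, the input blob id, and the input shape are assumptions on my side, while the namespace and blob ids are copied from the header above):

#include "net.h"
#include "mymodel.id.h"    // ids generated by ncnn2mem
#include "mymodel.mem.h"   // embedded param/model data generated by ncnn2mem

int main()
{
    ncnn::Net net;
    // the array identifiers below are assumptions; check the generated
    // mymodel.mem.h for the exact names ncnn2mem derived from the file name
    net.load_param(mymodel_param_bin);
    net.load_model(mymodel_bin);

    ncnn::Mat in(224, 224, 3);   // placeholder input, shape is an assumption
    ncnn::Mat boxes, scores;

    ncnn::Extractor ex = net.create_extractor();
    ex.input(fd_03_param_id::BLOB_input, in);     // input blob id is an assumption
    ex.extract(fd_03_param_id::BLOB_466, boxes);  // present in the generated header
    // ex.extract(fd_03_param_id::BLOB_479, scores);  // BLOB_479 is missing from the header, so this line cannot be written
    return 0;
}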

log of exporting "mymodel.onnx"
%454 : Tensor = onnx::Constant[value= 1 -1 4 [ CPULongType{3} ]]()
%455 : Float(1, 3136, 4) = onnx::Reshape(%300, %454) # x.py:148:0
%456 : Tensor = onnx::Constant[value= 1 -1 4 [ CPULongType{3} ]]()
%457 : Float(1, 784, 4) = onnx::Reshape(%346, %456) # x.py:148:0
%458 : Tensor = onnx::Constant[value= 1 -1 4 [ CPULongType{3} ]]()
%459 : Float(1, 196, 4) = onnx::Reshape(%406, %458) # x.py:148:0
%460 : Tensor = onnx::Constant[value= 1 -1 4 [ CPULongType{3} ]]()
%461 : Float(1, 49, 4) = onnx::Reshape(%437, %460) # x.py:148:0
%462 : Tensor = onnx::Constant[value= 1 -1 4 [ CPULongType{3} ]]()
%463 : Float(1, 16, 4) = onnx::Reshape(%445, %462) # x.py:148:0
%464 : Tensor = onnx::Constant[value= 1 -1 4 [ CPULongType{3} ]]()
%465 : Float(1, 4, 4) = onnx::Reshape(%453, %464) # x.py:148:0
%466 : Float(1, 4185, 4) = onnx::Concat[axis=1](%455, %457, %459, %461, %463, %465) # x.py:148:0
%467 : Tensor = onnx::Constant[value= 1 -1 2 [ CPULongType{3} ]]()
%468 : Float(1, 3136, 2) = onnx::Reshape(%299, %467) # x.py:149:0
%469 : Tensor = onnx::Constant[value= 1 -1 2 [ CPULongType{3} ]]()
%470 : Float(1, 784, 2) = onnx::Reshape(%345, %469) # x.py:149:0
%471 : Tensor = onnx::Constant[value= 1 -1 2 [ CPULongType{3} ]]()
%472 : Float(1, 196, 2) = onnx::Reshape(%405, %471) # x.py:149:0
%473 : Tensor = onnx::Constant[value= 1 -1 2 [ CPULongType{3} ]]()
%474 : Float(1, 49, 2) = onnx::Reshape(%436, %473) # x.py:149:0
%475 : Tensor = onnx::Constant[value= 1 -1 2 [ CPULongType{3} ]]()
%476 : Float(1, 16, 2) = onnx::Reshape(%444, %475) # x.py:149:0
%477 : Tensor = onnx::Constant[value= 1 -1 2 [ CPULongType{3} ]]()
%478 : Float(1, 4, 2) = onnx::Reshape(%452, %477) # x.py:149:0
%479 : Float(1, 4185, 2) = onnx::Concat[axis=1](%468, %470, %472, %474, %476, %478) # x.py:149:0
return (%466, %479)

ncnn version: 20200727


vi4fp9gy1#

I wrote a simple example that also fails to generate the output id.
Is this a bug?
I wrote the model in PyTorch, exported it to ONNX, and converted it with onnx2ncnn.

python code

import torch
import torch.nn as nn

from contextlib import redirect_stdout

class CatModel(nn.Module):
    def __init__(self):
        super(CatModel, self).__init__()

    def forward(self, x):
        # reshape to (1, -1, 224); the constant shape becomes an
        # onnx::Constant feeding onnx::Reshape in the exported graph
        x_r = x.view(1, -1, 224).contiguous()
        x_r_e = x_r * 0.5
        # concatenate along axis 1; this is the single graph output
        out = torch.cat([x_r, x_r_e], 1)
        return out

if __name__ == "__main__":
    input_sz_h = input_sz_w = 224
    onnx_file = "cat.onnx"
    net = CatModel()
    dummy = torch.randn(1, 3, input_sz_h, input_sz_w)
    onnx_log = "log.cat_onnx"
    # write the verbose export trace to log.cat_onnx
    with open(onnx_log, 'w') as f, redirect_stdout(f):
        torch.onnx.export(net, dummy, onnx_file, verbose=True,
                          input_names=["input"], output_names=["output"],
                          do_constant_folding=True)

onnx log

graph(%input : Float(1, 3, 224, 224)):
  %1 : Tensor = onnx::Constant[value=   1   -1  224 [ CPULongType{3} ]]()
  %2 : Float(1, 672, 224) = onnx::Reshape(%input, %1) # cat_onnx.py:13:0
  %3 : Float() = onnx::Constant[value={0.5}]()
  %4 : Float(1, 672, 224) = onnx::Mul(%2, %3)
  %output : Float(1, 1344, 224) = onnx::Concat[axis=1](%2, %4) # cat_onnx.py:15:0
  return (%output)

cat.param file

cat.id.h file


8wtpewkr2#

There is a duplicated MemoryData layer in your ncnn .param file:

MemoryData       3                        0 1 3 0=1
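To confirm which names are duplicated, you can scan the .param file for repeated layer and output blob names. A rough sketch, assuming the usual textual .param layout (magic line, counts line, then one layer per line with type, name, input count, output count, input blobs, output blobs, parameters):

#include <fstream>
#include <iostream>
#include <map>
#include <sstream>
#include <string>

int main(int argc, char** argv)
{
    std::ifstream in(argc > 1 ? argv[1] : "mymodel.param");
    std::string line;
    std::getline(in, line);  // magic number line (7767517)
    std::getline(in, line);  // layer count / blob count line
    std::map<std::string, int> layer_names, blob_names;
    while (std::getline(in, line))
    {
        std::istringstream iss(line);
        std::string type, name;
        int nin = 0, nout = 0;
        if (!(iss >> type >> name >> nin >> nout))
            continue;
        layer_names[name]++;
        std::string blob;
        for (int i = 0; i < nin; i++) iss >> blob;   // skip input blobs
        for (int i = 0; i < nout; i++)
        {
            iss >> blob;                             // collect output blob names
            blob_names[blob]++;
        }
    }
    for (const auto& p : layer_names)
        if (p.second > 1) std::cout << "duplicated layer name: " << p.first << " x" << p.second << "\n";
    for (const auto& p : blob_names)
        if (p.second > 1) std::cout << "duplicated blob name: " << p.first << " x" << p.second << "\n";
    return 0;
}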

Follow the guide here:
https://github.com/Tencent/ncnn/wiki/use-ncnn-with-pytorch-or-onnx
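Once the duplicated layer is resolved, you can also load the plain .param/.bin files and extract the outputs by their blob name strings, which does not depend on the id header at all. A minimal sketch along the lines of that guide; the blob names here ("input", "466", "479") are assumptions, so check them against your .param file:

#include "net.h"

int main()
{
    ncnn::Net net;
    net.load_param("mymodel.param");
    net.load_model("mymodel.bin");

    ncnn::Mat in(224, 224, 3);   // placeholder input, shape is an assumption

    ncnn::Extractor ex = net.create_extractor();
    ex.input("input", in);       // blob names are assumptions; check the .param file
    ncnn::Mat boxes, scores;
    ex.extract("466", boxes);
    ex.extract("479", scores);
    return 0;
}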
