Hi,
I have been informed that our oneDNN LSTM kernel is not being used by the ch_ppocr_mobile_v1.1_rec inference model. After some investigation I found that fc_lstm_fuse_pass was temporarily disabled in #27377. So my question is: when can we expect that pass to be re-enabled?
Best Regards,
Jakub
Hi! We've received your issue; please be patient while waiting for a response. We will arrange for technicians to answer your questions as soon as possible. Please make sure that you have posted enough information to demonstrate your request. You may also check out the API docs, FAQ, GitHub Issues, and the AI community to find an answer. Have a nice day!
Hi, fc_lstm_fuse_pass has accuracy problems on CPU. If the computation result of fusion_lstm under oneDNN is correct, you can add fc_lstm_fuse_pass to the list of passes enabled for mkldnn:
https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/inference/api/paddle_pass_builder.cc#L221
I will try to fix the accuracy of fusion_lstm on CPU, but it may take some time.
@jakpiase @lidanqing-intel
We found that the following unit test fails when fc_lstm_fuse_pass is enabled. So, once the unit test problem is fixed, we can enable the pass.