eval model:: 3% 10/300 [00:08<04:12, 1.15it/s]
--------------------------------------
C++ Traceback (most recent call last):
--------------------------------------
0 paddle::imperative::Tracer::TraceOp(std::string const&, paddle::imperative::NameVarBaseMap const&, paddle::imperative::NameVarBaseMap const&, paddle::framework::AttributeMap, std::map<std::string, std::string, std::less<std::string >, std::allocator<std::pair<std::string const, std::string > > > const&)
1 paddle::imperative::Tracer::TraceOp(std::string const&, paddle::imperative::NameVarBaseMap const&, paddle::imperative::NameVarBaseMap const&, paddle::framework::AttributeMap, paddle::platform::Place const&, bool, std::map<std::string, std::string, std::less<std::string >, std::allocator<std::pair<std::string const, std::string > > > const&)
2 paddle::imperative::PreparedOp::Run(paddle::imperative::NameVarBaseMap const&, paddle::imperative::NameVarBaseMap const&, paddle::framework::AttributeMap const&, paddle::framework::AttributeMap const&)
3 std::_Function_handler<void (paddle::framework::ExecutionContext const&), paddle::framework::OpKernelRegistrarFunctor<paddle::platform::CUDAPlace, false, 0ul, paddle::operators::CUDNNConvOpKernel<float>, paddle::operators::CUDNNConvOpKernel<double>, paddle::operators::CUDNNConvOpKernel<paddle::platform::float16> >::operator()(char const*, char const*, int) const::{lambda(paddle::framework::ExecutionContext const&)#1}>::_M_invoke(std::_Any_data const&, paddle::framework::ExecutionContext const&)
4 paddle::operators::CUDNNConvOpKernel<float>::Compute(paddle::framework::ExecutionContext const&) const
5 paddle::framework::Tensor::mutable_data(paddle::platform::Place const&, paddle::framework::proto::VarType_Type, unsigned long)
6 paddle::memory::AllocShared(paddle::platform::Place const&, unsigned long)
7 paddle::memory::allocation::AllocatorFacade::AllocShared(paddle::platform::Place const&, unsigned long)
8 paddle::memory::allocation::AllocatorFacade::Alloc(paddle::platform::Place const&, unsigned long)
9 paddle::memory::allocation::RetryAllocator::AllocateImpl(unsigned long)
10 paddle::memory::allocation::AutoGrowthBestFitAllocator::FreeIdleChunks()
----------------------
Error Message Summary:
----------------------
FatalError: `Segmentation fault` is detected by the operating system.
[TimeInfo:***Aborted at 1636257571 (unix time) try "date -d @1636257571" if you are using GNU date***]
[SignalInfo:***SIGSEGV (@0x28) received by PID 960 (TID 0x7f26d386d780) from PID 40***]
I don't know where the problem is. I searched for a lot of solutions above, but none of them worked. Can you help me take a look?
26 answers
k4aesqcs16#
So, how did you get this log? I need the full log, but you only pasted the tail of the log file.
pkbketx917#
@GuoxiaWang
I'm so sorry, but I pasted this code into Google Colab and it didn't display anything.
Thank you so much.
rbpvctlc18#
@dang-nh194423
Yes, but please enable GLOG_v and the C++ call stack.
ugmeyewa19#
@GuoxiaWang
Is this the log?
eval.log
r7knjye220#
@dang-nh194423
That log only shows the C++ runtime output.
Can you attach the full log file?
798qvoo821#
@GuoxiaWang
I pasted this code and tried to run it again, but it still displays the output below and I don't know what happened.
Can you help me?
Thank you.
6mzjoqzu22#
Those are environment variables you export in a Linux terminal.
You can also set them from Python.
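For example, a minimal sketch of setting those flags from Python (e.g. in a Colab cell) instead of exporting them in a terminal; note that such environment variables are typically only picked up if they are set before `paddle` is imported:

```python
import os

# Equivalent of `export GLOG_v=3` and `export FLAGS_call_stack_level=2`
# in a Linux shell. Set these BEFORE importing paddle, otherwise the
# library may have already read its flags and will ignore the change.
os.environ["GLOG_v"] = "3"
os.environ["FLAGS_call_stack_level"] = "2"
```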
vjhs03f723#
@GuoxiaWang
Excuse me,
Can you explain more clearly? I'm using Google Colab to train the pretrained PPOCR model.
Is this the code? I tried pasting it into my notebook, but it didn't display anything!
Thank you.
krugob8w24#
@dang-nh194423
Please use the VLOG to get more info:
export GLOG_v=3
export FLAGS_call_stack_level=2
And you can put your code below.
js81xvg625#
@GuoxiaWang Yes, I'm here
polhcujo26#
@dang-nh194423
Please use the VLOG to get more info:
And you can put your code below.