Training code modified from the ppdet framework fails with Error: /paddle/paddle/phi/kernels/gpu/bce_loss_kernel.cu:42 Assertion (x >= static_cast<T>(0)) && (x <= one) failed. Input is expected to be within the interval [0, 1], but recieved nan.

to94eoyn · posted 2022-11-13 in Other
Follow (0) | Answers (4) | Views (759)

Describe the Bug

Error: /paddle/paddle/phi/kernels/gpu/bce_loss_kernel.cu:42 Assertion (x >= static_cast<T>(0)) && (x <= one) failed. Input is expected to be within the interval [0, 1], but recieved nan.

File "/root/paddlejob/workspace/env_run/train/ppdet/modeling/losses/yolo_loss.py", line 194, in forward
    self.scale_x_y)
File "/root/paddlejob/workspace/env_run/train/ppdet/modeling/losses/yolo_loss.py", line 177, in yolov3_loss
    loss_obj = self.obj_loss(box, gt_box, obj, tobj, anchor, downsample)
File "/root/paddlejob/workspace/env_run/train/ppdet/modeling/losses/yolo_loss.py", line 73, in obj_loss
    pbox = decode_yolo(pbox, anchor, downsample)
File "/root/paddlejob/workspace/env_run/train/ppdet/modeling/bbox_utils.py", line 272, in decode_yolo
    anchor = paddle.to_tensor(anchor)
File "/usr/local/lib/python3.7/dist-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/wrapped_decorator.py", line 25, in impl
    return wrapped_func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/paddle/fluid/framework.py", line 434, in impl
    return func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/paddle/tensor/creation.py", line 189, in to_tensor
    stop_gradient=stop_gradient)
OSError: (External) CUDA error(719), unspecified launch failure.
[Hint: 'cudaErrorLaunchFailure'. An exception occurred on the device while executing a kernel. Common causes include dereferencing an invalid device pointer and accessing out of bounds shared memory. Less common cases can be system specific - more information about these cases can be found in the system specific user guide. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.] (at /paddle/paddle/phi/backends/gpu/cuda/cuda_info.cc:258)

terminate called after throwing an instance of 'phi::enforce::EnforceNotMet'
what(): (External) CUDA error(719), unspecified launch failure.
[Hint: 'cudaErrorLaunchFailure'. An exception occurred on the device while executing a kernel. Common causes include dereferencing an invalid device pointer and accessing out of bounds shared memory. Less common cases can be system specific - more information about these cases can be found in the system specific user guide. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.] (at /paddle/paddle/fluid/memory/allocation/cuda_device_context_allocator.h:99)

C++ Traceback (most recent call last):

0 paddle::memory::allocation::CUDADeviceContextAllocatorPool::~CUDADeviceContextAllocatorPool()
1 std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release()

Error Message Summary:

FatalError: Process abort signal is detected by the operating system.
[TimeInfo: *** Aborted at 1657895978 (unix time) try "date -d @1657895978" if you are using GNU date ***]
[SignalInfo: *** SIGABRT (@0x11332) received by PID 70450 (TID 0x7f9ffc534740) from PID 70450 ***]

Additional Supplementary Information

Training configuration:
GPU: 2x A100
lr: 0.0004
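
For context (this note and the snippet are illustrative, not from the report): the assertion fires inside Paddle's bce_loss kernel because the objectness probabilities reaching binary cross entropy contain NaN, which falls outside the required [0, 1] interval. A minimal sketch that trips the same check with plain Paddle:

import paddle
import paddle.nn.functional as F

# NaN stands in for the output of a diverged network; bce_loss requires
# every input value to lie in [0, 1].
pred = paddle.to_tensor([0.5, float("nan")])
label = paddle.to_tensor([1.0, 0.0])

# Raises: "Input is expected to be within the interval [0, 1], but recieved nan."
loss = F.binary_cross_entropy(pred, label)

The later cudaErrorLaunchFailure (719) and SIGABRT are downstream symptoms: once the device-side assert fires, the CUDA context is poisoned and every subsequent call, including the paddle.to_tensor in decode_yolo, fails with the same error.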

z9zf31ra 1#

Hi! We've received your issue; please be patient while we get it answered. We will arrange technicians to respond to your question as soon as possible. Please double-check that you have provided a clear problem description, reproduction code, environment & version, and error messages. You may also check out the API docs, FAQ, historical Github Issues, and the AI community for an answer. Have a nice day!

edqdpe6u 2#

What did you change in that model?

4uqofj5v 3#

It is based on YOLOv3 with a ResNet34 backbone. The changes are roughly: 1. replaced the anchors in yolohead with anchors obtained by clustering boxes from a custom dataset (a sketch of that clustering follows below); 2. added an extra head that predicts additional attributes.
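
To make step 1 concrete (a minimal sketch of the usual technique, not the poster's actual script; function names here are mine): custom anchors are commonly produced by k-means over ground-truth (w, h) box sizes with an IoU-based assignment, e.g. in NumPy:

import numpy as np

def wh_iou(boxes, anchors):
    # IoU between (N, 2) box sizes and (K, 2) anchor sizes, both as (w, h)
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0]) *
             np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = (boxes[:, None, 0] * boxes[:, None, 1] +
             anchors[None, :, 0] * anchors[None, :, 1] - inter)
    return inter / union

def cluster_anchors(boxes, k=9, iters=100, seed=0):
    # Standard IoU k-means: assign each box to its best anchor, then move
    # each anchor to the per-dimension median of its assigned boxes.
    boxes = np.asarray(boxes, dtype=np.float64)
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(wh_iou(boxes, anchors), axis=1)
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = np.median(boxes[assign == j], axis=0)
    return anchors[np.argsort(anchors.prod(axis=1))]  # sorted small to large

# Example: 9 anchors from random (w, h) pairs
rng = np.random.default_rng(1)
print(cluster_anchors(rng.uniform(10, 300, size=(500, 2)), k=9))

Whatever produces them, the new anchors still have to be partitioned across the anchor_masks so each scale's head gets anchors matched to its downsample ratio; those are exactly the anchor and downsample arguments that decode_yolo consumes in the traceback above.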

a7qyws3x 4#

This run has diverged (hence the NaN). Try lowering the lr a bit and training again; a sketch of that, plus gradient clipping, follows below.
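
For concreteness (my sketch, not part of the answer above): besides lowering the base lr from the reported 0.0004, adding global-norm gradient clipping is a common guard against this kind of divergence. In plain Paddle API terms, roughly:

import paddle

model = paddle.nn.Linear(4, 2)  # stand-in for the actual detector

# Clip the global gradient norm so one bad batch cannot blow the weights up;
# the clip_norm value is illustrative, not tuned for this model.
clip = paddle.nn.ClipGradByGlobalNorm(clip_norm=35.0)

opt = paddle.optimizer.Momentum(
    learning_rate=2e-4,   # e.g. halved from the reported 4e-4
    momentum=0.9,
    parameters=model.parameters(),
    grad_clip=clip,
)

In a ppdet YAML config the same knobs usually live under LearningRate.base_lr and the OptimizerBuilder section (recent PaddleDetection versions accept a clip_grad_by_norm field there), but check the version you forked from.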
