7 answers
6ovsh4lw1#
Hi! We've received your issue and will arrange for a technician to answer it as soon as possible; please be patient. Please double-check that you have provided a clear problem description, reproduction code, environment & version info, and error messages. You can also look for an answer in the official API docs, the FAQ, past GitHub issues, and the AI community. Have a nice day!
8oomwypt2#
During validation, the code above will still build the backward graph, so the GPU memory is not released. To free that memory during validation, wrap the loop in a no_grad operation so that no backward graph is created. The API is documented here:
https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/no_grad_cn.html#no-grad
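A minimal sketch of what that looks like, assuming a typical eval loop; `model` and `val_loader` here are hypothetical placeholders, not names from the original code:

```python
import paddle

# Wrapping validation in paddle.no_grad() prevents the backward graph
# from being recorded, so activation memory is not retained.
@paddle.no_grad()
def validate(model, val_loader):
    model.eval()
    correct, total = 0, 0
    for images, labels in val_loader:
        logits = model(images)                  # forward only, no grad graph
        preds = paddle.argmax(logits, axis=1)
        correct += int((preds == labels.flatten()).sum())
        total += labels.shape[0]
    return correct / total
```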
a1o7rhls3#
Oh, I see. That actually answers another question of mine.
What I really wanted to ask is: if I wrap the training loop in a function, will the GPU memory be released automatically once the function returns and the variables inside go out of scope? Is there some way to write the training function so that each call frees the GPU memory when it finishes, and the next epoch's call allocates it again?
snvhrwxg4#
The GPU memory here mainly comes from three sources:
Parameters are global variables, so they are released only when the whole program exits; forward variables (activations) are released once the backward pass has finished; backward variables (gradients) are released only when the next iteration's forward pass begins.
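A rough probe of those three lifetimes, assuming a GPU build of Paddle 2.1+ where `paddle.device.cuda.memory_allocated()` is available; the layer and batch sizes are arbitrary:

```python
import paddle

# Report the GPU memory currently held by tensors at each phase.
def log_mem(tag):
    mb = paddle.device.cuda.memory_allocated() / 1024 / 1024
    print(f"{tag}: {mb:.1f} MiB")

model = paddle.nn.Linear(4096, 4096)   # parameters: live until process exit
opt = paddle.optimizer.SGD(learning_rate=0.01, parameters=model.parameters())
x = paddle.randn([256, 4096])

log_mem("after parameter creation")
loss = model(x).mean()                 # forward activations allocated
log_mem("after forward")
loss.backward()                        # activations freed as backward consumes
                                       # them; gradients now held
log_mem("after backward")
opt.step()
opt.clear_grad()                       # gradients released here (otherwise at
                                       # the start of the next forward pass)
log_mem("after clear_grad")
```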
u2nhd7ah5#
Got it, thank you!
bvpmtnay6#
Hi, I've just started using paddlepaddle and ran into the same problem: after training finishes, the GPU memory is still occupied and never released. What causes this, and how can it be fixed?
zqry0prt7#
It's possible that the task exited abnormally, so the GPU memory was never released.
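One way to check for that, sketched below under the assumption of an NVIDIA GPU with the driver tools installed: list the processes still holding GPU memory, and kill any PID left over from a dead run to reclaim it.

```python
import subprocess

# Query nvidia-smi for compute processes that still hold GPU memory.
# A PID belonging to a crashed training run can then be terminated.
out = subprocess.run(
    ["nvidia-smi", "--query-compute-apps=pid,process_name,used_memory",
     "--format=csv"],
    capture_output=True, text=True, check=True,
)
print(out.stdout)
```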