Paddle: Difference in GPU memory usage between master/slave nodes

b1payxdu  posted on 2022-11-13 in: Other
Follow (0) | Answers (3) | Views (126)

Please ask your question

Hi,

First of all, thank you for all your work.

  1. I have a small question regarding multi-GPU training. I see that the GPU memory usage on the master node is much lower than on the slave nodes. Is this normal behaviour? There seems to be almost 1 GB more free memory on the master node compared to the slave nodes (a per-rank measurement sketch follows after this list).
  2. My training sometimes gets killed with Docker exit code 137. There is no fleetrun log output indicating that an error happened in the code; the program just gets killed. Do you have any idea what this might mean?
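
For reference, a minimal sketch of how per-rank GPU memory could be logged to quantify the difference (assuming Paddle 2.2+, where paddle.device.cuda.memory_allocated/memory_reserved are available; the model and training code are omitted):

import paddle
import paddle.distributed as dist

# Initialize the distributed environment so each process knows its rank.
dist.init_parallel_env()
rank = dist.get_rank()

# ... build the model and run a few training steps here ...

# Bytes backing live tensors vs. bytes held by the allocator cache.
allocated = paddle.device.cuda.memory_allocated()
reserved = paddle.device.cuda.memory_reserved()
print(f"rank {rank}: allocated={allocated / 1024**2:.1f} MiB, "
      f"reserved={reserved / 1024**2:.1f} MiB")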

kind regards


wz3gfoph1#

Hi! We've received your issue; please be patient while we respond. We will arrange for technicians to answer your questions as soon as possible. Please make sure that you have posted enough information to demonstrate your request. You may also check the official API documentation, the FAQ, historical GitHub issues, and the AI community to find an answer. Have a nice day!


q43xntqr2#

Thanks for reporting this case.

  1. Is your data length variable? If your data length is not variable, you can try paddle.device.cuda.empty_cache() to release all currently unoccupied cached memory and then check what is visible in nvidia-smi (see the sketch after this list).
  2. You can set GLOG_v=3 in your launch script to locate the error.
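
For illustration, a rough sketch of where such a call could sit in a training loop (assuming Paddle 2.x; data_loader, model and optimizer are placeholder names):

import paddle

for step, batch in enumerate(data_loader):
    loss = model(batch)          # forward pass returning a scalar loss (placeholder)
    loss.backward()
    optimizer.step()
    optimizer.clear_grad()

    if step % 100 == 0:
        # Release cached GPU memory that no live tensor is using back to the
        # device, so nvidia-smi reflects the actual working set.
        paddle.device.cuda.empty_cache()

Note that GLOG_v=3 is read as an environment variable, so it is typically exported before the launch command (for example in the shell script that invokes fleetrun).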

z9smfwbn3#

Hi, thank you for your response. I experimented a little and enabled GLOG_v=3. The last output I get from the GLOG logging is:

NULLNULLNULLNULLNULL (repeated)

What could this possibly mean?
