System information
-PaddlePaddle version: 1.3
-CPU: i7-6700
-GPU: NVIDIA 1080 Ti, CUDA 9.2
-OS Platform: Ubuntu 16.04
-Python version: 3.5
When I run the Pyramid Box model (from widerface_eval.py) on a 1080 Ti GPU, it produces an out-of-memory error even on a single image. I lowered the numbers in the 'get_shrink' function to make it fit in GPU memory (i.e. resizing to a smaller input). Is there another way, or an environment variable, to control the total GPU memory consumed by fluid? I found someone in another thread who ran this network on an 8GB GPU without problems, so I assume this is possible. I'm looking for something similar to TensorFlow's per_process_gpu_memory_fraction, which can run a TF process on part of the GPU. Suppose I want to use 80% of the GPU memory: how can I do that in fluid?
3 Answers
hi3rlvi2 #1
@AmitRozner You can remove this code: https://github.com/PaddlePaddle/models/blob/4dc42a621ec5b2f9c369dc8f6b6e9da18bf932e6/fluid/PaddleCV/face_detection/widerface_eval.py#L47-L53
Or use this code: [code snippet attached in the original thread, not preserved here]
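The linked lines compute a dynamic shrink ratio from the input size. A hedged sketch of that idea (the function name `bounded_shrink` and the pixel budget are illustrative, not the repo's actual code):

```python
import math

# Illustrative sketch (not the repo's actual get_shrink): cap the resized
# image area at a fixed pixel budget so activation memory stays bounded.
def bounded_shrink(height, width, max_pixels=1500 * 1000):
    # ratio that would bring height*width down to exactly max_pixels
    ratio = math.sqrt(max_pixels / float(height * width))
    # never upscale; only shrink images that exceed the budget
    return min(ratio, 1.0)
```

Lowering `max_pixels` trades detection of small faces for a smaller peak memory footprint.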
s4n0splo #2
First, try setting
export FLAGS_fraction_of_gpu_memory_to_use=
to a smaller value, but note this won't change the total GPU memory actually used. Can you find out exactly how much memory this model needs? And can you try reducing the test set of images?
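The flag above is an environment variable read when the process starts, so it must be exported before launching the script (the value 0.5 below is only an example; tune it for your GPU):

```shell
# Cap fluid's initial GPU memory pre-allocation to a fraction of the card
# (example value; this limits the pre-allocated pool, not total usage)
export FLAGS_fraction_of_gpu_memory_to_use=0.5
echo "FLAGS_fraction_of_gpu_memory_to_use=$FLAGS_fraction_of_gpu_memory_to_use"
```

Then run widerface_eval.py from the same shell so the variable is inherited.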
4ngedf3f #3
@yeyupiaoling I already tried this code, but it uses more than 10GB of memory on my GPU. Based on your experience with an 8GB GPU, there must be a way to lower the GPU consumption.
@Xreki I tried the
FLAGS_fraction_of_gpu_memory_to_use
flag, but it seems to work only in some cases and does not control the total GPU memory used. If I lower it a little it seems to work, but if I lower it below some value it is completely ignored and the process takes more than 90% of the GPU. Could you explain how it works? How can I find out how much memory the model needs? I think it is input-dependent.
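On the input-dependence point: peak memory scales roughly with input area, since each convolutional feature map is proportional to H×W. A rough back-of-the-envelope helper (the function and constants are illustrative, not a Paddle API):

```python
def input_tensor_mb(height, width, channels=3, dtype_bytes=4, batch=1):
    # Size of one float32 NCHW input batch in MiB. Feature maps throughout
    # the network scale with the same H*W factor, which is why shrinking
    # the input image reduces peak memory.
    return batch * channels * height * width * dtype_bytes / (1024.0 ** 2)
```

Halving both image dimensions cuts this estimate (and, roughly, activation memory) by 4x, which matches the effect of lowering the values in 'get_shrink'.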