How do I make a PyTorch script use a specific GPU unit?

w3nuxt5m · posted 2023-10-20 in Other

I have a Python training script that uses a CUDA GPU to train a model (the Kohya Trainer script, available here). It fails with an out-of-memory error:

OutOfMemoryError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 1; 23.65 
GiB total capacity; 144.75 MiB already allocated; 2.81 MiB free; 146.00 MiB 
reserved in total by PyTorch) If reserved memory is >> allocated memory try 
setting max_split_size_mb to avoid fragmentation.  See documentation for Memory 
Management and PYTORCH_CUDA_ALLOC_CONF
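
(As an aside, the allocator hint in the last sentence of the message can be tried as below. This is only a sketch: the value 128 is an arbitrary example, the variable must be set before CUDA is initialized, and it will not help in my case anyway, since GPU 1 is simply full.)

import os
# Allocator hint suggested by the error message; must be set before
# torch initializes CUDA. 128 (MiB) is an arbitrary example value.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
import torch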

After some investigation, I found that the script was using GPU unit 1 rather than unit 0. Unit 1 is currently under heavy load with little free GPU memory, while unit 0 still has plenty of headroom. How do I make the script use GPU unit 0?
Even after changing:

text_encoder.to("cuda")

to:

text_encoder.to("cuda:0")

the script still uses GPU unit 1, as shown in the error message.
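
For context, a bare "cuda" device resolves to PyTorch's current device, which can itself be changed. A minimal sketch of the two standard ways to pin a process to one GPU:

import os
# Option 1: hide all other GPUs from the process. This must happen
# before torch initializes CUDA (safest: before `import torch`).
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch

# Option 2: make ordinal 0 the default target of a bare "cuda".
torch.cuda.set_device(0)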
Output of nvidia-smi:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.60.11    Driver Version: 525.60.11    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:81:00.0 Off |                  Off |
| 66%   75C    P2   437W / 450W |   5712MiB / 24564MiB |    100%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce ...  Off  | 00000000:C1:00.0 Off |                  Off |
| 32%   57C    P2   377W / 450W |  23408MiB / 24564MiB |    100%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1947      G   /usr/lib/xorg/Xorg                  4MiB |
|    0   N/A  N/A     30654      C   python                           5704MiB |
|    1   N/A  N/A      1947      G   /usr/lib/xorg/Xorg                  4MiB |
|    1   N/A  N/A     14891      C   python                          23400MiB |
+-----------------------------------------------------------------------------+

Update 1

The same notebook can see both GPU units:

import torch
for i in range(torch.cuda.device_count()):
    print(torch.cuda.get_device_properties(i))

Its output:

_CudaDeviceProperties(name='NVIDIA GeForce RTX 4090', major=8, minor=9, total_memory=24217MB, multi_processor_count=128)
_CudaDeviceProperties(name='NVIDIA GeForce RTX 4090', major=8, minor=9, total_memory=24217MB, multi_processor_count=128)
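
A quick way to cross-check which ordinal a bare "cuda" resolves to, and how much memory each visible device actually has free (a sketch; torch.cuda.mem_get_info requires a reasonably recent PyTorch):

import torch

print(torch.cuda.current_device())  # ordinal a bare "cuda" resolves to
for i in range(torch.cuda.device_count()):
    free, total = torch.cuda.mem_get_info(i)
    print(i, torch.cuda.get_device_name(i), f"{free / 2**20:.0f} MiB free")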

Update 2

Setting CUDA_VISIBLE_DEVICES=0 leads to this error:

RuntimeError: CUDA error: invalid device ordinal
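
That error usually means something in the script (or its launcher config) still references device index 1 after the remaining GPU has been renumbered to ordinal 0; a minimal sketch of the failure mode:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # only one device stays visible
import torch

print(torch.cuda.device_count())   # 1 -- the visible GPU is now ordinal 0
torch.zeros(1, device="cuda:1")    # RuntimeError: invalid device ordinal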

eanckbw9 1#

I hope this helps: when assigning a specific GPU to a model, I have found that the GPU index in the nvidia-smi output does not match the CUDA index. My Tesla P40 shows up at index 0 in the nvidia-smi output, but is referenced as "cuda:2" in PyTorch code, or as CUDA_VISIBLE_DEVICES=2.
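
A likely cause of that mismatch: by default the CUDA runtime orders devices fastest-first, while nvidia-smi lists them in PCI bus order. Forcing PCI bus order makes the two indexings agree; a minimal sketch (both variables must be set before CUDA is initialized):

import os
# Enumerate GPUs in PCI bus order so CUDA indices match nvidia-smi.
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # nvidia-smi's GPU 0
import torch

CUDA_VISIBLE_DEVICES also accepts the GPU UUIDs printed by nvidia-smi -L, which sidesteps index ordering entirely.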
