PyTorch AssertionError: Torch not compiled with CUDA enabled

pftdvrlh · posted 2023-05-17

I'm trying to run the code from this repo. I disabled CUDA by changing lines 39/40 in main.py from

parser.add_argument('--type', default='torch.cuda.FloatTensor', help='type of tensor - e.g torch.cuda.HalfTensor')

to

parser.add_argument('--type', default='torch.FloatTensor', help='type of tensor - e.g torch.HalfTensor')

Nevertheless, running the code raises the following exception:

Traceback (most recent call last):
  File "main.py", line 190, in <module>
    main()
  File "main.py", line 178, in main
    model, train_data, training=True, optimizer=optimizer)
  File "main.py", line 135, in forward
    for i, (imgs, (captions, lengths)) in enumerate(data):
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 201, in __next__
    return self._process_next_batch(batch)
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 221, in _process_next_batch
    raise batch.exc_type(batch.exc_msg)
AssertionError: Traceback (most recent call last):
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 62, in _pin_memory_loop
    batch = pin_memory_batch(batch)
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 123, in pin_memory_batch
    return [pin_memory_batch(sample) for sample in batch]
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 123, in <listcomp>
    return [pin_memory_batch(sample) for sample in batch]
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 117, in pin_memory_batch
    return batch.pin_memory()
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/tensor.py", line 82, in pin_memory
    return type(self)().set_(storage.pin_memory()).view_as(self)
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/storage.py", line 83, in pin_memory
    allocator = torch.cuda._host_allocator()
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/cuda/__init__.py", line 220, in _host_allocator
    _lazy_init()
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/cuda/__init__.py", line 84, in _lazy_init
    _check_driver()
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/cuda/__init__.py", line 51, in _check_driver
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

I've spent some time going through the issues on the PyTorch GitHub, to no avail. Any help would be appreciated.


quhf5bfb #1

Removing .cuda() worked for me on macOS.
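
In practice that means selecting the device at runtime instead of hard-coding .cuda(); a minimal sketch:

import torch

# Fall back to CPU when the build has no CUDA support or no GPU is present
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(4, 4).to(device)  # .to(device) replaces an unconditional .cuda()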


pqwbnv8z #2

If you look at the data.py file, you can see the following function:

def get_iterator(data, batch_size=32, max_length=30, shuffle=True, num_workers=4, pin_memory=True):
    cap, vocab = data
    return torch.utils.data.DataLoader(
        cap,
        batch_size=batch_size, shuffle=shuffle,
        collate_fn=create_batches(vocab, max_length),
        num_workers=num_workers, pin_memory=pin_memory)

This function is called twice in main.py, to get iterators for the train and dev data. If you look at the DataLoader class in PyTorch, it has a parameter:

pin_memory (bool, optional) – If True, the data loader will copy tensors into CUDA pinned memory before returning them.

which defaults to True in the get_iterator function; that is where your error comes from. You can simply pass pin_memory=False when calling get_iterator, like this:

train_data = get_iterator(get_coco_data(vocab, train=True),
                          batch_size=args.batch_size,
                          ...,
                          ...,
                          ...,
                          pin_memory=False)
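
Alternatively, rather than hard-coding False, you could tie pin_memory to whether CUDA is actually usable; a minimal sketch, assuming the repo's get_iterator, get_coco_data, vocab, and args from the snippet above:

import torch

# Pinned (page-locked) host memory is only useful when batches are copied to a
# CUDA device, so enable it only when a CUDA build and a GPU are actually present
use_pin_memory = torch.cuda.is_available()

train_data = get_iterator(get_coco_data(vocab, train=True),
                          batch_size=args.batch_size,
                          pin_memory=use_pin_memory)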

kqlmhetl #3

In my case, I hadn't installed a CUDA-enabled PyTorch build in my Anaconda environment. Note that you need a CUDA-capable GPU for this to work.
Follow this link to install PyTorch for the specific CUDA version you have: https://pytorch.org/get-started/locally/
I installed this version: conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
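
After installing, you can quickly verify that the build you got actually has CUDA support:

import torch

print(torch.__version__)          # installed PyTorch version
print(torch.version.cuda)         # CUDA version it was built against; None for CPU-only builds
print(torch.cuda.is_available())  # True only with a CUDA build plus a working driver/GPU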


pxiryf3j #4

So I was using a Mac and tried to create a neural network with CUDA, like so:

import torch.nn as nn

net = nn.Sequential(
    nn.Linear(28*28, 100),
    nn.ReLU(),
    nn.Linear(100, 100),
    nn.ReLU(),
    nn.Linear(100, 10),
    nn.LogSoftmax(dim=1)

My mistake was trying to create the network with .cuda() when my Mac has no CUDA support. So if anyone is facing the same issue, just remove .cuda() and your code should work.
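
For reference, here is a guarded version of the same network, a sketch that only moves to the GPU when one is available:

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

net = nn.Sequential(
    nn.Linear(28*28, 100),
    nn.ReLU(),
    nn.Linear(100, 100),
    nn.ReLU(),
    nn.Linear(100, 10),
    nn.LogSoftmax(dim=1)
).to(device)  # effectively a no-op on machines without CUDA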
Edit:
You can't do GPU computations without CUDA, and unfortunately CUDA can't be installed for people with Intel integrated graphics, since it is only compatible with NVIDIA GPUs.
If you have an NVIDIA graphics card, CUDA is probably already installed on your system; if not, you can install it.
You can buy an external GPU compatible with your computer, but that alone costs around $300, not to mention connectivity issues.
Otherwise, you can use Google Colaboratory or Kaggle kernels (free), AWS or GCP (free credits), or PaperSpace (paid).


sz81bmfz #5

When I ran into this issue while using Detectron2, adding the following line fixed it:

cfg.MODEL.DEVICE = "cpu"
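
For context, the line goes on the config object before the model is built; a minimal sketch of a typical Detectron2 setup (the commented config/weights lines are placeholders for whatever your project uses):

from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
# cfg.merge_from_file(...)   # your model's yaml config
# cfg.MODEL.WEIGHTS = "..."  # your trained weights
cfg.MODEL.DEVICE = "cpu"     # force CPU so torch.cuda is never initialized

predictor = DefaultPredictor(cfg)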

kkih6yb8 #6

  • Activate the correct environment, the one where torch is actually installed.
