Bug description
When we try to convert a fine-tuned gpt-j model to GPTQ, the following error occurs:
TypeError: forward() missing 1 required positional argument: 'hidden_states'
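For context, a minimal sketch of the kind of conversion script involved, assuming optimum's GPTQQuantizer API; the quantization parameters below are placeholder assumptions, not the values from the actual convert-to-gptq.py:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from optimum.gptq import GPTQQuantizer

model_path = "./gpt-cmd"        # fine-tuned GPT-J checkpoint (path taken from the command below)
output_path = "./gpt-cmd-gptq"  # destination for the quantized model

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16)

# 4-bit GPTQ quantization; "c4" as calibration dataset and model_seqlen=2048 are assumed values
quantizer = GPTQQuantizer(bits=4, dataset="c4", model_seqlen=2048)
quantized_model = quantizer.quantize_model(model, tokenizer)

quantizer.save(quantized_model, output_path)
tokenizer.save_pretrained(output_path)

The quantize_model call here corresponds to line 33 of convert-to-gptq.py in the traceback below.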
Hardware details
A40
Software versions
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
NVIDIA-SMI 535.161.08    Driver Version: 535.161.08    CUDA Version: 12.2
Any suggestions?
3 Answers
brccelvz1#
Please try the following steps: use the
git clone
command to install AutoGPTQ from source.
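For reference, a typical from-source install looks like the following; the repository URL was omitted in the answer above, so the AutoGPTQ GitHub URL below is an assumption:

git clone https://github.com/AutoGPTQ/AutoGPTQ   # assumed repository URL
cd AutoGPTQ
pip install .   # builds the CUDA extension if the CUDA toolkit is available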
lskq00tm2#
nope:
python convert-to-gptq.py -m ./gpt-cmd -o ./gpt-cmd-gptq
CUDA extension not installed.
CUDA extension not installed.
/home/silvacarl/.local/lib/python3.8/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
Quantizing transformer.h blocks : 0%| | 0/28 [00:00<?, ?it/s]
Traceback (most recent call last):
File "convert-to-gptq.py", line 33, in
quantized_model = quantizer.quantize_model(model, tokenizer)
File "/home/silvacarl/.local/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/silvacarl/.local/lib/python3.8/site-packages/optimum/gptq/quantizer.py", line 505, in quantize_model
block(*layer_inputs[j], **layer_input_kwargs[j])
File "/home/silvacarl/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/silvacarl/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
TypeError: forward() missing 1 required positional argument: 'hidden_states'
Will try to build the Docker image.
irtuqstp3#
The Dockerfile seems to be outdated? Building with it gives the following error:
This is a Python package installation failure: pip fails while generating metadata when trying to install the AutoGPTQ package. It may be caused by a network issue, a permissions issue, or a problem with the package itself. You can try the following approaches to resolve it: