vllm [Usage]: Now that bitsandbytes is supported in v0.5.0, can I use vllm.LLM(quantization="bitsandbytes", ...)?

nkkqxpd9 posted 6 months ago in Other

Your current environment

The output of `python collect_env.py`
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.3
Libc version: glibc-2.35

Python version: 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L20
Nvidia driver version: 550.54.14
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      52 bits physical, 57 bits virtual
Byte Order:                         Little Endian
CPU(s):                             180
On-line CPU(s) list:                0-179
Vendor ID:                          GenuineIntel
Model name:                         Intel(R) Xeon(R) Platinum 8457C
CPU family:                         6
Model:                              143
Thread(s) per core:                 2
Core(s) per socket:                 45
Socket(s):                          2
Stepping:                           8
BogoMIPS:                           5200.00
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities
Hypervisor vendor:                  KVM
Virtualization type:                full
L1d cache:                          4.2 MiB (90 instances)
L1i cache:                          2.8 MiB (90 instances)
L2 cache:                           180 MiB (90 instances)
L3 cache:                           195 MiB (2 instances)
NUMA node(s):                       2
NUMA node0 CPU(s):                  0-89
NUMA node1 CPU(s):                  90-179
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Unknown: No mitigations
Vulnerability Retbleed:             Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Mitigation; TSX disabled

Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.3
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] sentence-transformers==2.7.0
[pip3] torch==2.3.0
[pip3] torchvision==0.16.2+cu121
[pip3] transformers==4.40.0
[pip3] triton==2.3.0
[pip3] vllm-nccl-cu12==2.18.1.0.4.0
[conda] numpy                     1.26.3                   pypi_0    pypi
[conda] nvidia-nccl-cu12          2.20.5                   pypi_0    pypi
[conda] sentence-transformers     2.7.0                    pypi_0    pypi
[conda] torch                     2.1.2+cu121              pypi_0    pypi
[conda] torchvision               0.16.2+cu121             pypi_0    pypi
[conda] transformers              4.40.0                   pypi_0    pypi
[conda] triton                    2.3.0                    pypi_0    pypi
[conda] vllm-nccl-cu12            2.18.1.0.4.0             pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.4.2
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    NIC0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      SYS     90-179  1               N/A
NIC0    SYS      X 

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0

How would you like to use vllm

I want to run inference of a Mixtral QLoRA model with bitsandbytes. I don't know how to integrate this with vLLM.
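
Something like the following is what I have in mind. This is only a sketch against the v0.5.0 API as I understand it; the base model name and the adapter path are placeholders, not my actual setup:

from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# Placeholder base model and adapter path -- replace with the real ones.
llm = LLM(
    model="huggyllama/llama-7b",
    quantization="bitsandbytes",   # in-flight bitsandbytes quantization (v0.5.0+)
    load_format="bitsandbytes",    # the bitsandbytes weight loader must be selected as well
    enable_lora=True,              # so a QLoRA adapter can be attached per request
)

outputs = llm.generate(
    ["Hello, my name is"],
    SamplingParams(temperature=0.0, max_tokens=64),
    lora_request=LoRARequest("qlora-adapter", 1, "/path/to/qlora-adapter"),
)
print(outputs[0].outputs[0].text)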

alen0pnh1#

But actually, looking at vllm/entrypoints/llm.py:

from contextlib import contextmanager
from typing import ClassVar, List, Optional, Sequence, Union, cast, overload

from tqdm import tqdm
from transformers import PreTrainedTokenizer, PreTrainedTokenizerFast

from vllm.engine.arg_utils import EngineArgs
from vllm.engine.llm_engine import LLMEngine
from vllm.inputs import (PromptInputs, PromptStrictInputs, TextPrompt,
                         TextTokensPrompt, TokensPrompt,
                         parse_and_batch_prompt)
from vllm.logger import init_logger
from vllm.lora.request import LoRARequest
from vllm.outputs import EmbeddingRequestOutput, RequestOutput
from vllm.pooling_params import PoolingParams
from vllm.sampling_params import SamplingParams
from vllm.transformers_utils.tokenizer import get_cached_tokenizer
from vllm.usage.usage_lib import UsageContext
from vllm.utils import Counter, deprecate_kwargs

logger = init_logger(__name__)

class LLM:
    """An LLM for generating texts from given prompts and sampling parameters.

    This class includes a tokenizer, a language model (possibly distributed
    across multiple GPUs), and GPU memory space allocated for intermediate
    states (aka KV cache). Given a batch of prompts and sampling parameters,
    this class generates texts from the model, using an intelligent batching
    mechanism and efficient memory management.

    Args:
        model: The name or path of a HuggingFace Transformers model.
        tokenizer: The name or path of a HuggingFace Transformers tokenizer.
        tokenizer_mode: The tokenizer mode. "auto" will use the fast tokenizer
            if available, and "slow" will always use the slow tokenizer.
        skip_tokenizer_init: If true, skip initialization of tokenizer and
            detokenizer. Expect valid prompt_token_ids and None for prompt
            from the input.
        trust_remote_code: Trust remote code (e.g., from HuggingFace) when
            downloading the model and tokenizer.
        tensor_parallel_size: The number of GPUs to use for distributed
            execution with tensor parallelism.
        dtype: The data type for the model weights and activations. Currently,
            we support `float32`, `float16`, and `bfloat16`. If `auto`, we use
            the `torch_dtype` attribute specified in the model config file.
            However, if the `torch_dtype` in the config is `float32`, we will
            use `float16` instead.
        quantization: The method used to quantize the model weights. Currently,
            we support "awq", "gptq", "squeezellm", and "fp8" (experimental).
            If None, we first check the `quantization_config` attribute in the
            model config file. If that is None, we assume the model weights are
            not quantized and use `dtype` to determine the data type of
            the weights.

I don't see bitsandbytes listed as a supported quantization method there.
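
The docstring may simply be lagging behind the code. A quick way to check which quantization methods the installed vLLM actually registers is to print its quantization registry; this is a sketch assuming the QUANTIZATION_METHODS mapping exposed by recent vLLM releases:

# Prints the quantization method names this vLLM build knows about;
# "bitsandbytes" should appear here on v0.5.0 and later.
from vllm.model_executor.layers.quantization import QUANTIZATION_METHODS

print(sorted(QUANTIZATION_METHODS))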

os8fio9y2#

See: https://github.com/vllm-project/vllm/blob/v0.5.0/examples/lora_with_quantization_inference.py#L82

tktrz96b3#

Please note: https://github.com/vllm-project/vllm/blob/v0.5.0/examples/lora_with_quantization_inference.py#L82
At the moment, vLLM's bitsandbytes support only covers Llama models.
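
For reference, the linked example boils down to roughly the following. This is a sketch along the lines of the v0.5.0 API rather than a verbatim copy of that file; the base model and adapter path are placeholders, and per the note above only Llama-family base models are expected to work:

from vllm import EngineArgs, LLMEngine, SamplingParams
from vllm.lora.request import LoRARequest

# Placeholder Llama-family base model and QLoRA adapter path.
engine_args = EngineArgs(
    model="huggyllama/llama-7b",
    quantization="bitsandbytes",
    load_format="bitsandbytes",
    enable_lora=True,
    max_lora_rank=64,
)
engine = LLMEngine.from_engine_args(engine_args)

engine.add_request(
    "request-0",
    "Write a haiku about GPUs.",
    SamplingParams(temperature=0.0, max_tokens=64),
    lora_request=LoRARequest("qlora-adapter", 1, "/path/to/qlora-adapter"),
)
# Drain the engine and print finished outputs.
while engine.has_unfinished_requests():
    for request_output in engine.step():
        if request_output.finished:
            print(request_output.outputs[0].text)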

gg58donl5#

Does it support Llama 3?

q5iwbnjs6#

Does it support Llama 3?
I'm not sure; I'm trying to get this to work with Mixtral, but it doesn't seem to work.

yx2lnoni7#

Does it support Llama 3?
I'm not sure; I'm trying to get this to work with Mixtral, but it doesn't seem to work.
At the moment, Mixtral is not supported with bitsandbytes, but Llama 3 should work.

sr4lhrrt8#

When I try to load 'meta-llama/Meta-Llama-3-70B-Instruct', I get the following error:

File ~/.local/lib/python3.10/site-packages/vllm/model_executor/models/llama.py:435, in LlamaForCausalLM.load_weights(self, weights)
    433     else:
    434         name = remapped_kv_scale_name
--> 435 param = params_dict[name]
    436 weight_loader = getattr(param, "weight_loader",
    437                         default_weight_loader)
    438 weight_loader(param, loaded_weight)

KeyError: 'model.layers.46.mlp.down_proj.weight'

mi7gmzs69#

When loading Llama3-8B-Instruct, I get garbage output: #5569
