[Bug]: I can't use vllm lora on two GPUs, but it works on a single GPU

anauzrmj · posted 4 months ago · in: Other

When you set tensor_parallel_size=2, the program fails because the local variable lora_b_k in the multi-GPU (tensor-parallel) slicing path is used before it has been assigned. This can happen when lora_b_k is never initialized, for example when the LoRA adapter does not contain weights for the k projection.

To work around this, you can add a guard to the slice_lora_b function in lora/layers.py to make sure lora_b_k has been assigned before it is used. For example:

def slice_lora_b(self, lora_b):
    # lora_b packs the LoRA B matrices for the q, k and v projections
    lora_b_q, lora_b_k, lora_b_v = lora_b
    if lora_b_k is None:
        # initialize the k slice explicitly instead of leaving it unassigned
        lora_b_k = self.init_lora_b_k()
    return [lora_b_q, lora_b_k, lora_b_v]

At the same time, you need to call self.init_lora_b_k() from the set_active_loras function in lora/worker_manager.py to initialize lora_b_k:

def set_active_loras(self, lora_requests, lora_mapping):
    # ... other code ...
    for lora in loras:
        self.init_lora_b_k()
        # ... other code ...

This way, every call to set_active_loras ensures that lora_b_k has been properly initialized.
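Note that init_lora_b_k is not an existing vLLM method; the snippets above assume it. A minimal sketch of how such a helper could behave, assuming a missing k slice can be replaced by an all-zero LoRA B block shaped like the v slice (k_proj and v_proj have the same per-rank output size), and taking the v slice as an argument so its shape and dtype can be mirrored, unlike the no-argument call shown above:

import torch

def init_lora_b_k(self, lora_b_v):
    # Hypothetical helper: an all-zero LoRA B block adds nothing to k_proj,
    # which matches an adapter that was trained without k_proj weights.
    # Shape and dtype are copied from the v slice, since k_proj and v_proj
    # share the same output dimension on each tensor-parallel rank.
    return torch.zeros_like(lora_b_v)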

# fragment from the post: extract the generated text after the prompt and print it
assert len(req_sample_output_strs) == 1
response = req_sample_output_strs[0][len(prompt_str):]
print(response)
print(f'len:{len(response)}')

import os

os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

from vllm import LLM

# the second assignment of each path overrides the first
model_path = r"/home/e5/xxwriter/Models/llm/Qwen1.5-7B-Chat"
model_path = r"/home/e5/xxwriter/Models/llm/Qwen-7B-Chat"

lora_path = r'/home/e5/xxwriter/Models/lora/qwen1-5-others-checkpoint-1400'
lora_path = r'/home/e5/xxwriter/Models/lora/qwen-7b-xixi/qwen-7b-kynow-title-to-artical-@@@-20240426/checkpoint-500'

llm = LLM(model_path,
          enable_lora=True,
          # max_model_len=8192,
          trust_remote_code=True,
          # gpu_memory_utilization=0.8,
          tensor_parallel_size=2,  # number of GPUs
          dtype="float16",
          )
while True:
    prompt = input('write the request:')
    get_lora_reqonse(prompt, llm)
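The loop above calls get_lora_reqonse, which is only partially shown in the post (the fragment that prints the response). A minimal sketch of what the full helper might look like, assuming vLLM's standard LoRARequest API and that it is defined before the loop; the adapter name, integer id and sampling settings are assumptions, not values from the original code:

from vllm import SamplingParams
from vllm.lora.request import LoRARequest

def get_lora_reqonse(prompt_str, llm):
    # assumed sampling settings; the real values are not shown in the post
    sampling_params = SamplingParams(temperature=0.7, max_tokens=512)
    outputs = llm.generate(
        [prompt_str],
        sampling_params,
        # route the request through the adapter; the name and integer id are
        # arbitrary labels chosen here, only lora_path comes from the post
        lora_request=LoRARequest("qwen_lora", 1, lora_path),
    )
    # mirror the fragment above: prepend the prompt, then strip it off again
    req_sample_output_strs = [prompt_str + o.outputs[0].text for o in outputs]
    assert len(req_sample_output_strs) == 1
    response = req_sample_output_strs[0][len(prompt_str):]
    print(response)
    print(f'len:{len(response)}')
    return response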
