llama_index error

pgpifvop · posted 6 months ago · in: Other
Follow (0) | Answers (8) | Views (172)

Question Validation

  • I have searched both the documentation and discord for an answer.

Question

I want to use the following:
LLM: Llama 2 7B chat
Embed Model: sentence-transformers/all-mpnet-base-v2
Can anybody provide a fix?
This is the error:
ValueError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/llama_index/core/embeddings/utils.py in resolve_embed_model(embed_model, callback_manager)
58 embed_model = OpenAIEmbedding()
---> 59 validate_openai_api_key(embed_model.api_key)
60 except ImportError:
5 frames
ValueError: No API key found for OpenAI.
Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.
API keys can be found or created at https://platform.openai.com/account/api-keys
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/llama_index/core/embeddings/utils.py in resolve_embed_model(embed_model, callback_manager)
64 )
65 except ValueError as e:
---> 66 raise ValueError(
67 "\n******\n"
68 "Could not load OpenAI embedding model. "
ValueError:
Could not load OpenAI embedding model. If you intended to use OpenAI, please check your OPENAI_API_KEY.
Original error:
No API key found for OpenAI.
Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.
API keys can be found or created at https://platform.openai.com/account/api-keys
Consider using embed_model='local'.
Visit our documentation for more embedding options: https://docs.llamaindex.ai/en/stable/module_guides/models/embeddings.html#modules


bvpmtnay1#

This is where I hit the error:

from llama_index.core import SummaryIndex, VectorStoreIndex
summary_index = SummaryIndex(nodes)
vector_index = VectorStoreIndex(nodes)

even though I had already set the llm and the embed model beforehand.


68bkxrlz2#

To resolve the missing-OpenAI-API-key error when using Llama 2 7B chat as the LLM and sentence-transformers/all-mpnet-base-v2 as the embedding model, you need to make sure your code does not fall back to the default OpenAI embeddings. Here is how to set the embedding model explicitly:

  1. Make sure OpenAI embeddings are not used by default:
    Ensure your code explicitly sets the embedding model to sentence-transformers/all-mpnet-base-v2.

  2. Set the embedding model correctly:
    You need to initialize the embedding model and pass it to VectorStoreIndex and SummaryIndex.
    Here is an example of how to do this:


# Explicitly set the embedding model and make sure the global service context is used

from llama_index import ServiceContext, set_global_service_context
from sentence_transformers import SentenceTransformer

embed_model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2')
# Create a ServiceContext with your embedding model
service_context = ServiceContext(embed_model=embed_model)
# Set the global service context to ensure it is used throughout your code
set_global_service_context(service_context)

By explicitly setting the embedding model and making sure the global service context is used, you should avoid the fallback to OpenAI embeddings and the associated API-key error.

  • To continue the conversation with Dosu, mention @dosu.

olmpazwi3#

After running the code provided above, I got the following error:
TypeError Traceback (most recent call last)
in <cell line: 7>()
5
6 # Create a ServiceContext with your embedding model
----> 7 service_context = ServiceContext(embed_model=embed_model)
8
9 # Set the global service context to ensure it is used throughout your code
1 frames
/usr/local/lib/python3.10/dist-packages/pydantic/v1/dataclasses.py in handle_extra_init(self, *args, **kwargs)
291 def handle_extra_init(self: 'Dataclass', *args: Any, **kwargs: Any) -> None:
292 if config.extra == Extra.ignore:
--> 293 init(self, *args, **{k: v for k, v in kwargs.items() if k in self.dataclass_fields})
294
295 elif config.extra == Extra.allow:
TypeError: ServiceContext.init() missing 5 required positional arguments: 'llm_predictor', 'prompt_helper', 'transformations', 'llama_logger', and 'callback_manager'


dz6r00yl4#

The error indicates that the ServiceContext class requires additional positional arguments. To resolve this, use the from_defaults method, which creates a ServiceContext with default values for the other parameters.
Here is the updated code:

from llama_index import ServiceContext, VectorStoreIndex, SummaryIndex
from sentence_transformers import SentenceTransformer

# Initialize your embedding model
embed_model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2')

# Create a ServiceContext with your embedding model using from_defaults
service_context = ServiceContext.from_defaults(embed_model=embed_model)

# Set the global service context to ensure it is used throughout your code
from llama_index import set_global_service_context
set_global_service_context(service_context)

# Now create your indices
summary_index = SummaryIndex(nodes, service_context=service_context)
vector_index = VectorStoreIndex(nodes, service_context=service_context)

This should resolve the TypeError by initializing the ServiceContext via the from_defaults method, which supplies defaults for the other required parameters.

  • To continue the conversation with Dosu, mention @dosu.

vptzau2j5#

I want to use the Huggingface LLM Llama 2 7B chat. Will this code support it?


wr98u20j6#

Yes, the provided code will support using the Huggingface LLM Llama 2 7B chat. You can specify the Llama 2 7B chat model name from Huggingface and use it in your setup. Here is how to do it:

from llama_index import ServiceContext, VectorStoreIndex, SummaryIndex
from sentence_transformers import SentenceTransformer
from llama_index.llms.openvino import OpenVINOLLM

# Initialize your embedding model
embed_model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2')

# Create a ServiceContext with your embedding model using from_defaults
service_context = ServiceContext.from_defaults(embed_model=embed_model)

# Set the global service context to ensure it is used throughout your code
from llama_index import set_global_service_context
set_global_service_context(service_context)

# Initialize the Llama 2 7B chat model
ov_config = {
    "PERFORMANCE_HINT": "LATENCY",
    "NUM_STREAMS": "1",
    "CACHE_DIR": "",
}

llm = OpenVINOLLM(
    model_name="meta-llama/Llama-2-7b-chat-hf",
    tokenizer_name="meta-llama/Llama-2-7b-chat-hf",
    context_window=3900,
    max_new_tokens=256,
    model_kwargs={"ov_config": ov_config},
    generate_kwargs={"temperature": 0.7, "top_k": 50, "top_p": 0.95},
    device_map="auto",
)

# Now create your indices
summary_index = SummaryIndex(nodes, service_context=service_context)
vector_index = VectorStoreIndex(nodes, service_context=service_context)

# Example query
response = llm.complete("What is the meaning of life?")
print(str(response))

This setup ensures the Llama 2 7B chat model is correctly loaded and configured from Huggingface for use with the LlamaIndex framework.

  • To continue the conversation with Dosu, mention @dosu.

sdnqo3pr7#

Still not resolved.
It still shows the OpenAI error when using the Hugging Face LLM.


iezvtpos8#

To make sure the Huggingface LLM and the specified embedding model are actually used, rather than the default OpenAI embeddings, you need to set both the embedding model and the LLM explicitly in the ServiceContext. Here is how:

  1. Make sure OpenAI embeddings are not used by default:
    Ensure your code explicitly sets the embedding model to sentence-transformers/all-mpnet-base-v2.

  2. Set the embedding model and the LLM correctly:
    You need to initialize both the embedding model and the LLM and pass them to the ServiceContext. Here is an example:


# Initialize the embedding model and the LLM (using the OpenVINOLLM instance defined above)

from llama_index import ServiceContext, set_global_service_context
from sentence_transformers import SentenceTransformer

embed_model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2')

# Pass the embedding model and the LLM to the ServiceContext
service_context = ServiceContext.from_defaults(embed_model=embed_model, llm=llm)
set_global_service_context(service_context)

By explicitly setting both the embedding model and the LLM, and making sure the global service context is used, you should avoid the fallback to OpenAI embeddings and the associated API-key error 1(https://github.com/run-llama/llama_index/blob/main/llama-index-legacy/llama_index/legacy/embeddings/utils.py) 2(https://github.com/run-llama/llama_index/blob/main/llama-index-legacy/tests/embeddings/test_utils.py).

  • To continue the conversation with Dosu, mention @dosu.
