I have tooling that automatically updates my containers, and I run the latest ollama image. Since the most recent image update I cannot run any models. I can pull models such as llama3.1, but when I try to run them they never start. I can see my GPU memory jump to about 4000 MB and then stop; normally, when these models run, it sits closer to 6000 MB. After downgrading to 0.3.3, everything works perfectly again.
Docker
Nvidia
Intel
Latest version: 0.3.4
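If the auto-updater keeps pulling :latest, pinning the image tag keeps the container on a known-good build until the regression is fixed. A minimal sketch, assuming the auto-updater is Watchtower (the question does not name the tool) and the standard ollama/ollama image and volume layout:

  # Run a pinned tag instead of :latest, and label the container so Watchtower skips it
  docker run -d --gpus=all \
    -v ollama:/root/.ollama \
    -p 11434:11434 \
    --label com.centurylinklabs.watchtower.enable=false \
    --name ollama \
    ollama/ollama:0.3.3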
4 Answers

vybvopom1#
Server logs would help with debugging.
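For anyone else hitting this: the server log can be pulled straight from the container, and setting OLLAMA_DEBUG=1 makes it more verbose. The container name ollama here is an assumption based on the standard Docker setup:

  docker logs -f ollama
  # For more detail, recreate the container with debug logging enabled:
  docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
    -e OLLAMA_DEBUG=1 --name ollama ollama/ollama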
8yparm6h2#
I removed the latest container and brought up 0.3.3. I'll switch back to the latest version when I have some free time.
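The downgrade described above amounts to something like the following; the volume name and port are assumptions based on the standard Docker instructions, and reusing the same volume keeps already-pulled models:

  docker stop ollama && docker rm ollama
  docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
    --name ollama ollama/ollama:0.3.3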
9bfwbjaz3#
GPU memory gets stuck around 4264 MiB and the model never seems to finish loading. Going back to 0.3.3, everything works fine. Here is the log:
WARNING: /proc/sys/kernel/numa_balancing is enabled, this has been observed to impair performance
INFO [main] build info | build=1 commit="1e6f655" tid="139849002516480" timestamp=1723530846
INFO [main] system info | n_threads=14 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 " tid="139849002516480" timestamp=1723530846 total_threads=14
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="13" port="35375" tid="139849002516480" timestamp=1723530846
time=2024-08-13T02:34:06.516-04:00 level=info source=server.go:627 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 29 key-value pairs and 292 tensors from /root/.ollama/models/blobs/sha256-8eeb52dfb3bb9aefdf9d1ef24b3bdbcfbe82238798c4b918278320b6fcef18fe (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0: general.architecture str = llama
llama_model_loader: - kv   1: general.type str = model
llama_model_loader: - kv   2: general.name str = Meta Llama 3.1 8B Instruct
llama_model_loader: - kv   3: general.finetune str = Instruct
llama_model_loader: - kv   4: general.basename str = Meta-Llama-3.1
llama_model_loader: - kv   5: general.size_label str = 8B
llama_model_loader: - kv   6: general.license str = llama3.1
llama_model_loader: - kv   7: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv   8: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", ...
llama_model_loader: - kv   9: llama.block_count u32 = 32
llama_model_loader: - kv  10: llama.context_length u32 = 131072
llama_model_loader: - kv  11: llama.embedding_length u32 = 4096
...
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 4.33 GiB (4.64 BPW)
llm_load_print_meta: general.name = Meta Llama 3.1 8B Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce GTX 1070, compute capability 6.1, VMM: yes
llm_load_tensors: ggml ctx size = 0.27 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 281.81 MiB
llm_load_tensors: CUDA0 buffer size = 4156.00 MiB
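One way to confirm where loading stalls is to watch GPU memory while the model loads, and compare the two versions side by side. A sketch, assuming nvidia-smi is available on the host:

  watch -n 1 nvidia-smi --query-gpu=memory.used,memory.total --format=csv
  # On 0.3.4 the reading plateaus near 4264 MiB; on 0.3.3 it climbs to ~5962 MiB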
a64a0gku4#
With 0.3.3, my GPU memory usage jumps to 5962 MiB for the same model.
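The numbers are consistent with only the weights being loaded: the log shows a 4156.00 MiB CUDA0 buffer plus small overhead, which matches the ~4264 MiB plateau, while 5962 MiB − 4264 MiB ≈ 1.7 GiB is roughly what the KV cache and compute buffers for the context window would add on 0.3.3. That suggests 0.3.4 stops after the tensor offload, before allocating the KV cache (an inference from the log, not confirmed).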