ollama qwen2:72b-instruct-q4_K_M produces garbage output

lx0bsm1f asked 4 months ago, in: Other

What is the issue?

qwen2:72b-instruct-q4_K_M produces garbage output:

>>> hello.
#:<,H=&*1(E.E*>G*:^C

Other models with other quantizations work fine.
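
For what it's worth, the same behaviour can also be checked outside the interactive REPL through the HTTP API. A minimal sketch (assuming the default endpoint 127.0.0.1:11434 from the server log below; the prompt is just an example):

# Same request over the HTTP API; prints only the model's "response" field.
curl -s http://127.0.0.1:11434/api/generate \
  -d '{"model": "qwen2:72b-instruct-q4_K_M", "prompt": "hello.", "stream": false}' \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)["response"])'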

Ollama server output:

$ ./ollama serve 
2024/07/08 10:07:57 routes.go:1064: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:/home/test/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-07-08T10:07:57.658+02:00 level=INFO source=images.go:730 msg="total blobs: 30"
time=2024-07-08T10:07:57.661+02:00 level=INFO source=images.go:737 msg="total unused blobs removed: 0"
time=2024-07-08T10:07:57.661+02:00 level=INFO source=routes.go:1111 msg="Listening on 127.0.0.1:11434 (version 0.1.48)"
time=2024-07-08T10:07:57.662+02:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama2929279884/runners
time=2024-07-08T10:08:00.530+02:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60101]"
time=2024-07-08T10:08:00.679+02:00 level=INFO source=types.go:98 msg="inference compute" id=GPU-7b971568-6cd2-b804-d8eb-902eb8689068 library=cuda compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1080 Ti" total="10.9 GiB" available="10.4 GiB"
[GIN] 2024/07/08 - 10:08:22 | 200 |      81.665µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/07/08 - 10:08:22 | 200 |   37.454138ms |       127.0.0.1 | POST     "/api/show"
time=2024-07-08T10:08:22.768+02:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=14 layers.split="" memory.available="[10.4 GiB]" memory.required.full="49.6 GiB" memory.required.partial="10.2 GiB" memory.required.kv="640.0 MiB" memory.required.allocations="[10.2 GiB]" memory.weights.total="43.2 GiB" memory.weights.repeating="42.2 GiB" memory.weights.nonrepeating="974.6 MiB" memory.graph.full="313.0 MiB" memory.graph.partial="1.3 GiB"
time=2024-07-08T10:08:22.769+02:00 level=INFO source=server.go:368 msg="starting llama server" cmd="/tmp/ollama2929279884/runners/cuda_v11/ollama_llama_server --model /home/test/.ollama/models/blobs/sha256-59e062dadfebe1e1b7dae3aa2ed6f60190c03e9738451e6963d74a5aa6a464a9 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 14 --no-mmap --parallel 1 --port 45747"
time=2024-07-08T10:08:22.769+02:00 level=INFO source=sched.go:382 msg="loaded runners" count=1
time=2024-07-08T10:08:22.769+02:00 level=INFO source=server.go:556 msg="waiting for llama runner to start responding"
time=2024-07-08T10:08:22.770+02:00 level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="7c26775" tid="139967113396224" timestamp=1720426102
INFO [main] system info | n_threads=8 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="139967113396224" timestamp=1720426102 total_threads=16
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="15" port="45747" tid="139967113396224" timestamp=1720426102
llama_model_loader: loaded meta data with 21 key-value pairs and 963 tensors from /home/test/.ollama/models/blobs/sha256-59e062dadfebe1e1b7dae3aa2ed6f60190c03e9738451e6963d74a5aa6a464a9 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.name str              = Qwen2-72B-Instruct
llama_model_loader: - kv   2:                          qwen2.block_count u32              = 80
llama_model_loader: - kv   3:                       qwen2.context_length u32              = 32768
llama_model_loader: - kv   4:                     qwen2.embedding_length u32              = 8192
llama_model_loader: - kv   5:                  qwen2.feed_forward_length u32              = 29568
llama_model_loader: - kv   6:                 qwen2.attention.head_count u32              = 64
llama_model_loader: - kv   7:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv   9:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  10:                          general.file_type u32              = 15
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  12:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  15:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  17:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  19:                    tokenizer.chat_template str              = {% for message in messages %}{% if lo...
llama_model_loader: - kv  20:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  401 tensors
llama_model_loader: - type q5_0:   40 tensors
llama_model_loader: - type q8_0:   40 tensors
llama_model_loader: - type q4_K:  401 tensors
llama_model_loader: - type q5_K:   40 tensors
llama_model_loader: - type q6_K:   41 tensors
time=2024-07-08T10:08:23.021+02:00 level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 421
llm_load_vocab: token to piece cache size = 0.9352 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = qwen2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 152064
llm_load_print_meta: n_merges         = 151387
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 8192
llm_load_print_meta: n_head           = 64
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 80
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 8
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 29568
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 70B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 72.71 B
llm_load_print_meta: model size       = 44.15 GiB (5.22 BPW) 
llm_load_print_meta: general.name     = Qwen2-72B-Instruct
llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
llm_load_print_meta: LF token         = 148848 'ÄĬ'
llm_load_print_meta: EOT token        = 151645 '<|im_end|>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   yes
ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1, VMM: yes
llm_load_tensors: ggml ctx size =    0.92 MiB
time=2024-07-08T10:08:24.478+02:00 level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server not responding"
time=2024-07-08T10:08:31.046+02:00 level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server loading model"
llm_load_tensors: offloading 14 repeating layers to GPU
llm_load_tensors: offloaded 14/81 layers to GPU
llm_load_tensors:  CUDA_Host buffer size = 37150.14 MiB
llm_load_tensors:      CUDA0 buffer size =  8063.30 MiB
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:  CUDA_Host KV buffer size =   528.00 MiB
llama_kv_cache_init:      CUDA0 KV buffer size =   112.00 MiB
llama_new_context_with_model: KV self size  =  640.00 MiB, K (f16):  320.00 MiB, V (f16):  320.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.61 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =  1287.53 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    20.01 MiB
llama_new_context_with_model: graph nodes  = 2806
llama_new_context_with_model: graph splits = 928
INFO [main] model loaded | tid="139967113396224" timestamp=1720426189
time=2024-07-08T10:09:49.209+02:00 level=INFO source=server.go:599 msg="llama runner started in 86.44 seconds"
[GIN] 2024/07/08 - 10:09:49 | 200 |         1m26s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/07/08 - 10:11:28 | 200 | 47.974392692s |       127.0.0.1 | POST     "/api/chat"
[GIN] 2024/07/08 - 10:12:52 | 200 | 20.454650534s |       127.0.0.1 | POST     "/api/chat"

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.1.48

ui7jx7zq #1

PS: Interesting. I tried to gather more context by running the ollama server in debug mode, and on the subsequent runs I could not reproduce the garbage output. There seems to be some flakiness involved.
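
One way to probe that instability might be to fire the identical request several times in a row and compare the replies. A rough sketch (assumes the server above is still listening on the default port; the fixed seed and temperature values are only illustrative):

# Re-run the identical request 5 times; with a fixed seed and temperature 0 the
# replies should normally match, so any run that comes back as garbage stands out.
for i in 1 2 3 4 5; do
  curl -s http://127.0.0.1:11434/api/generate \
    -d '{"model": "qwen2:72b-instruct-q4_K_M", "prompt": "hello.", "stream": false, "options": {"seed": 0, "temperature": 0}}' \
    | python3 -c 'import json, sys; print(json.load(sys.stdin)["response"])'
done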

q3qa4bjr #2

PPS: It seems the prompt needs to be a bit longer to make the issue more likely to occur. Here is the debug output:

$ ./ollama run qwen2:72b-instruct-q4_K_M
>>> Antibiotics are a type of medication used to treat bacterial infections. They work by either killing the bacteria or preventing them from reproducing, allowing the body’s immune system to fight off the infection. Antibiotics are usua
... lly taken orally in the form of pills, capsules, or liquid solutions, or sometimes administered intravenously. They are not effective against viral infections, and using them inappropriately can lead to antibiotic resistance.
... 
... Explain the above in one sentence:
;9:7%;A=<:5-.BDE<7:2D^C
$ OLLAMA_DEBUG=1 ./ollama serve 
2024/07/08 10:36:53 routes.go:1064: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:/home/test/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-07-08T10:36:53.249+02:00 level=INFO source=images.go:730 msg="total blobs: 30"
time=2024-07-08T10:36:53.250+02:00 level=INFO source=images.go:737 msg="total unused blobs removed: 0"
time=2024-07-08T10:36:53.251+02:00 level=INFO source=routes.go:1111 msg="Listening on 127.0.0.1:11434 (version 0.1.48)"
time=2024-07-08T10:36:53.252+02:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama4022880261/runners
time=2024-07-08T10:36:53.253+02:00 level=DEBUG source=payload.go:180 msg=extracting variant=cpu file=build/linux/x86_64/cpu/bin/ollama_llama_server.gz
time=2024-07-08T10:36:53.253+02:00 level=DEBUG source=payload.go:180 msg=extracting variant=cpu_avx file=build/linux/x86_64/cpu_avx/bin/ollama_llama_server.gz
time=2024-07-08T10:36:53.253+02:00 level=DEBUG source=payload.go:180 msg=extracting variant=cpu_avx2 file=build/linux/x86_64/cpu_avx2/bin/ollama_llama_server.gz
time=2024-07-08T10:36:53.253+02:00 level=DEBUG source=payload.go:180 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcublas.so.11.gz
time=2024-07-08T10:36:53.253+02:00 level=DEBUG source=payload.go:180 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcublasLt.so.11.gz
time=2024-07-08T10:36:53.253+02:00 level=DEBUG source=payload.go:180 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/libcudart.so.11.0.gz
time=2024-07-08T10:36:53.253+02:00 level=DEBUG source=payload.go:180 msg=extracting variant=cuda_v11 file=build/linux/x86_64/cuda_v11/bin/ollama_llama_server.gz
time=2024-07-08T10:36:53.253+02:00 level=DEBUG source=payload.go:180 msg=extracting variant=rocm_v60101 file=build/linux/x86_64/rocm_v60101/bin/deps.txt.gz
time=2024-07-08T10:36:53.253+02:00 level=DEBUG source=payload.go:180 msg=extracting variant=rocm_v60101 file=build/linux/x86_64/rocm_v60101/bin/ollama_llama_server.gz
time=2024-07-08T10:36:56.052+02:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4022880261/runners/cpu/ollama_llama_server
time=2024-07-08T10:36:56.052+02:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4022880261/runners/cpu_avx/ollama_llama_server
time=2024-07-08T10:36:56.052+02:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4022880261/runners/cpu_avx2/ollama_llama_server
time=2024-07-08T10:36:56.052+02:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4022880261/runners/cuda_v11/ollama_llama_server
time=2024-07-08T10:36:56.052+02:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4022880261/runners/rocm_v60101/ollama_llama_server
time=2024-07-08T10:36:56.052+02:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cuda_v11 rocm_v60101 cpu cpu_avx cpu_avx2]"
time=2024-07-08T10:36:56.052+02:00 level=DEBUG source=payload.go:45 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-07-08T10:36:56.052+02:00 level=DEBUG source=sched.go:94 msg="starting llm scheduler"
time=2024-07-08T10:36:56.052+02:00 level=DEBUG source=gpu.go:205 msg="Detecting GPUs"
time=2024-07-08T10:36:56.052+02:00 level=DEBUG source=gpu.go:91 msg="searching for GPU discovery libraries for NVIDIA"
time=2024-07-08T10:36:56.052+02:00 level=DEBUG source=gpu.go:435 msg="Searching for GPU library" name=libcuda.so*
time=2024-07-08T10:36:56.052+02:00 level=DEBUG source=gpu.go:454 msg="gpu library search" globs="[/home/test/libcuda.so** /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2024-07-08T10:36:56.059+02:00 level=DEBUG source=gpu.go:488 msg="discovered GPU libraries" paths="[/usr/lib/i386-linux-gnu/libcuda.so.535.183.01 /usr/lib/x86_64-linux-gnu/libcuda.so.535.183.01]"
library /usr/lib/i386-linux-gnu/libcuda.so.535.183.01 load err: /usr/lib/i386-linux-gnu/libcuda.so.535.183.01: wrong ELF class: ELFCLASS32
time=2024-07-08T10:36:56.059+02:00 level=DEBUG source=gpu.go:517 msg="Unable to load nvcuda" library=/usr/lib/i386-linux-gnu/libcuda.so.535.183.01 error="Unable to load /usr/lib/i386-linux-gnu/libcuda.so.535.183.01 library to query for Nvidia GPUs: /usr/lib/i386-linux-gnu/libcuda.so.535.183.01: wrong ELF class: ELFCLASS32"
CUDA driver version: 12.2
time=2024-07-08T10:36:56.105+02:00 level=DEBUG source=gpu.go:124 msg="detected GPUs" count=1 library=/usr/lib/x86_64-linux-gnu/libcuda.so.535.183.01
[GPU-7b971568-6cd2-b804-d8eb-902eb8689068] CUDA totalMem 11169 mb
[GPU-7b971568-6cd2-b804-d8eb-902eb8689068] CUDA freeMem 10426 mb
[GPU-7b971568-6cd2-b804-d8eb-902eb8689068] Compute Capability 6.1
time=2024-07-08T10:36:56.232+02:00 level=DEBUG source=amd_linux.go:356 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing nvcuda library
time=2024-07-08T10:36:56.232+02:00 level=INFO source=types.go:98 msg="inference compute" id=GPU-7b971568-6cd2-b804-d8eb-902eb8689068 library=cuda compute=6.1 driver=12.2 name="NVIDIA GeForce GTX 1080 Ti" total="10.9 GiB" available="10.2 GiB"
[GIN] 2024/07/08 - 10:37:11 | 200 |      69.083µs |       127.0.0.1 | HEAD     "/"
[GIN] 2024/07/08 - 10:37:11 | 200 |   39.050062ms |       127.0.0.1 | POST     "/api/show"
time=2024-07-08T10:37:11.271+02:00 level=DEBUG source=gpu.go:333 msg="updating system memory data" before.total="46.8 GiB" before.free="41.1 GiB" now.total="46.8 GiB" now.free="41.1 GiB"
CUDA driver version: 12.2
time=2024-07-08T10:37:11.383+02:00 level=DEBUG source=gpu.go:374 msg="updating cuda memory data" gpu=GPU-7b971568-6cd2-b804-d8eb-902eb8689068 name="NVIDIA GeForce GTX 1080 Ti" before.total="10.9 GiB" before.free="10.2 GiB" now.total="10.9 GiB" now.free="10.2 GiB" now.used="742.6 MiB"
releasing nvcuda library
time=2024-07-08T10:37:11.414+02:00 level=DEBUG source=sched.go:169 msg="loading first model" model=/home/test/.ollama/models/blobs/sha256-59e062dadfebe1e1b7dae3aa2ed6f60190c03e9738451e6963d74a5aa6a464a9
time=2024-07-08T10:37:11.414+02:00 level=DEBUG source=memory.go:101 msg=evaluating library=cuda gpu_count=1 available="[10.2 GiB]"
time=2024-07-08T10:37:11.416+02:00 level=DEBUG source=memory.go:101 msg=evaluating library=cuda gpu_count=1 available="[10.2 GiB]"
time=2024-07-08T10:37:11.420+02:00 level=DEBUG source=server.go:98 msg="system memory" total="46.8 GiB" free=44105908224
time=2024-07-08T10:37:11.420+02:00 level=DEBUG source=memory.go:101 msg=evaluating library=cuda gpu_count=1 available="[10.2 GiB]"
time=2024-07-08T10:37:11.421+02:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=13 layers.split="" memory.available="[10.2 GiB]" memory.required.full="49.7 GiB" memory.required.partial="9.7 GiB" memory.required.kv="640.0 MiB" memory.required.allocations="[9.7 GiB]" memory.weights.total="43.2 GiB" memory.weights.repeating="42.2 GiB" memory.weights.nonrepeating="974.6 MiB" memory.graph.full="313.0 MiB" memory.graph.partial="1.3 GiB"
time=2024-07-08T10:37:11.421+02:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4022880261/runners/cpu/ollama_llama_server
time=2024-07-08T10:37:11.421+02:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4022880261/runners/cpu_avx/ollama_llama_server
time=2024-07-08T10:37:11.421+02:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4022880261/runners/cpu_avx2/ollama_llama_server
time=2024-07-08T10:37:11.421+02:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4022880261/runners/cuda_v11/ollama_llama_server
time=2024-07-08T10:37:11.421+02:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4022880261/runners/rocm_v60101/ollama_llama_server
time=2024-07-08T10:37:11.421+02:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4022880261/runners/cpu/ollama_llama_server
time=2024-07-08T10:37:11.421+02:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4022880261/runners/cpu_avx/ollama_llama_server
time=2024-07-08T10:37:11.421+02:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4022880261/runners/cpu_avx2/ollama_llama_server
time=2024-07-08T10:37:11.421+02:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4022880261/runners/cuda_v11/ollama_llama_server
time=2024-07-08T10:37:11.421+02:00 level=DEBUG source=payload.go:71 msg="availableServers : found" file=/tmp/ollama4022880261/runners/rocm_v60101/ollama_llama_server
time=2024-07-08T10:37:11.421+02:00 level=INFO source=server.go:368 msg="starting llama server" cmd="/tmp/ollama4022880261/runners/cuda_v11/ollama_llama_server --model /home/test/.ollama/models/blobs/sha256-59e062dadfebe1e1b7dae3aa2ed6f60190c03e9738451e6963d74a5aa6a464a9 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 13 --verbose --no-mmap --parallel 1 --port 32925"
time=2024-07-08T10:37:11.421+02:00 level=DEBUG source=server.go:383 msg=subprocess environment="[CUDA_HOME=/usr/local/cuda PATH=/home/test/miniconda3/bin:/home/test/miniconda3/condabin:/home/test/.cargo/bin:/home/test/torch/install/bin:/home/test/Programme/reaver-wps-fork-t6x/src:/home/test/Programme/bully/src:/usr/local/cuda/bin:/home/test/torch/install/bin:/home/test/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/local/cuda/bin:/snap/bin:/home/test/Programme:/home/test/Dokumente/Devel/backup_scripte:/home/test/Dokumente/Devel/sync_scripts:/home/test/Dokumente/Devel/pype LD_LIBRARY_PATH=/tmp/ollama4022880261/runners/cuda_v11:/tmp/ollama4022880261/runners CUDA_VISIBLE_DEVICES=GPU-7b971568-6cd2-b804-d8eb-902eb8689068]"
time=2024-07-08T10:37:11.422+02:00 level=INFO source=sched.go:382 msg="loaded runners" count=1
time=2024-07-08T10:37:11.422+02:00 level=INFO source=server.go:556 msg="waiting for llama runner to start responding"
time=2024-07-08T10:37:11.422+02:00 level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="7c26775" tid="140230433636352" timestamp=1720427831
INFO [main] system info | n_threads=8 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="140230433636352" timestamp=1720427831 total_threads=16
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="15" port="32925" tid="140230433636352" timestamp=1720427831
llama_model_loader: loaded meta data with 21 key-value pairs and 963 tensors from /home/test/.ollama/models/blobs/sha256-59e062dadfebe1e1b7dae3aa2ed6f60190c03e9738451e6963d74a5aa6a464a9 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.name str              = Qwen2-72B-Instruct
llama_model_loader: - kv   2:                          qwen2.block_count u32              = 80
llama_model_loader: - kv   3:                       qwen2.context_length u32              = 32768
llama_model_loader: - kv   4:                     qwen2.embedding_length u32              = 8192
llama_model_loader: - kv   5:                  qwen2.feed_forward_length u32              = 29568
llama_model_loader: - kv   6:                 qwen2.attention.head_count u32              = 64
llama_model_loader: - kv   7:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv   8:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv   9:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  10:                          general.file_type u32              = 15
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  12:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  15:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  17:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  19:                    tokenizer.chat_template str              = {% for message in messages %}{% if lo...
llama_model_loader: - kv  20:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  401 tensors
llama_model_loader: - type q5_0:   40 tensors
llama_model_loader: - type q8_0:   40 tensors
llama_model_loader: - type q4_K:  401 tensors
llama_model_loader: - type q5_K:   40 tensors
llama_model_loader: - type q6_K:   41 tensors
time=2024-07-08T10:37:11.674+02:00 level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 421
llm_load_vocab: token to piece cache size = 0.9352 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = qwen2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 152064
llm_load_print_meta: n_merges         = 151387
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 8192
llm_load_print_meta: n_head           = 64
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 80
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 8
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 29568
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 70B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 72.71 B
llm_load_print_meta: model size       = 44.15 GiB (5.22 BPW) 
llm_load_print_meta: general.name     = Qwen2-72B-Instruct
llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
llm_load_print_meta: LF token         = 148848 'ÄĬ'
llm_load_print_meta: EOT token        = 151645 '<|im_end|>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   yes
ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1, VMM: yes
llm_load_tensors: ggml ctx size =    0.92 MiB
time=2024-07-08T10:37:13.130+02:00 level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server not responding"
time=2024-07-08T10:37:19.698+02:00 level=INFO source=server.go:594 msg="waiting for server to become available" status="llm server loading model"
llm_load_tensors: offloading 13 repeating layers to GPU
llm_load_tensors: offloaded 13/81 layers to GPU
llm_load_tensors:  CUDA_Host buffer size = 37738.62 MiB
llm_load_tensors:      CUDA0 buffer size =  7474.82 MiB
time=2024-07-08T10:37:22.211+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.01"
time=2024-07-08T10:37:23.719+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.04"
time=2024-07-08T10:37:24.473+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.05"
time=2024-07-08T10:37:25.477+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.06"
time=2024-07-08T10:37:25.979+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.07"
time=2024-07-08T10:37:26.733+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.08"
time=2024-07-08T10:37:27.487+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.09"
time=2024-07-08T10:37:28.492+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.10"
time=2024-07-08T10:37:28.995+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.11"
time=2024-07-08T10:37:29.497+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.12"
time=2024-07-08T10:37:30.502+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.13"
time=2024-07-08T10:37:31.005+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.14"
time=2024-07-08T10:37:31.759+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.15"
time=2024-07-08T10:37:32.513+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.16"
time=2024-07-08T10:37:33.267+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.17"
time=2024-07-08T10:37:34.021+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.18"
time=2024-07-08T10:37:34.523+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.19"
time=2024-07-08T10:37:35.529+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.20"
time=2024-07-08T10:37:36.032+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.21"
time=2024-07-08T10:37:36.785+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.22"
time=2024-07-08T10:37:37.288+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.23"
time=2024-07-08T10:37:38.294+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.24"
time=2024-07-08T10:37:38.797+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.25"
time=2024-07-08T10:37:39.802+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.26"
time=2024-07-08T10:37:40.556+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.27"
time=2024-07-08T10:37:41.059+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.28"
time=2024-07-08T10:37:41.813+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.29"
time=2024-07-08T10:37:42.567+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.30"
time=2024-07-08T10:37:43.321+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.31"
time=2024-07-08T10:37:43.824+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.32"
time=2024-07-08T10:37:44.830+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.33"
time=2024-07-08T10:37:45.583+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.34"
time=2024-07-08T10:37:46.086+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.35"
time=2024-07-08T10:37:46.840+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.36"
time=2024-07-08T10:37:47.343+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.37"
time=2024-07-08T10:37:48.348+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.38"
time=2024-07-08T10:37:48.851+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.39"
time=2024-07-08T10:37:49.605+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.40"
time=2024-07-08T10:37:50.610+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.41"
time=2024-07-08T10:37:51.113+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.42"
time=2024-07-08T10:37:51.616+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.43"
time=2024-07-08T10:37:52.370+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.44"
time=2024-07-08T10:37:53.123+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.45"
time=2024-07-08T10:37:53.878+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.46"
time=2024-07-08T10:37:54.632+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.47"
time=2024-07-08T10:37:55.637+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.48"
time=2024-07-08T10:37:56.140+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.49"
time=2024-07-08T10:37:56.642+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.50"
time=2024-07-08T10:37:57.396+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.51"
time=2024-07-08T10:37:58.150+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.52"
time=2024-07-08T10:37:58.904+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.53"
time=2024-07-08T10:37:59.658+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.54"
time=2024-07-08T10:38:00.161+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.55"
time=2024-07-08T10:38:00.916+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.56"
time=2024-07-08T10:38:01.670+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.57"
time=2024-07-08T10:38:02.173+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.58"
time=2024-07-08T10:38:02.927+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.59"
time=2024-07-08T10:38:03.681+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.60"
time=2024-07-08T10:38:04.435+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.61"
time=2024-07-08T10:38:04.938+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.62"
time=2024-07-08T10:38:05.692+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.63"
time=2024-07-08T10:38:06.446+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.64"
time=2024-07-08T10:38:07.200+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.65"
time=2024-07-08T10:38:07.955+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.66"
time=2024-07-08T10:38:08.709+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.67"
time=2024-07-08T10:38:09.462+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.68"
time=2024-07-08T10:38:09.714+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.69"
time=2024-07-08T10:38:10.467+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.70"
time=2024-07-08T10:38:10.970+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.71"
time=2024-07-08T10:38:11.724+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.72"
time=2024-07-08T10:38:12.478+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.73"
time=2024-07-08T10:38:13.232+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.74"
time=2024-07-08T10:38:13.985+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.75"
time=2024-07-08T10:38:14.739+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.76"
time=2024-07-08T10:38:15.242+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.77"
time=2024-07-08T10:38:15.995+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.78"
time=2024-07-08T10:38:16.750+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.79"
time=2024-07-08T10:38:17.504+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.80"
time=2024-07-08T10:38:18.258+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.81"
time=2024-07-08T10:38:18.761+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.82"
time=2024-07-08T10:38:19.766+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.83"
time=2024-07-08T10:38:20.269+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.84"
time=2024-07-08T10:38:21.274+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.85"
time=2024-07-08T10:38:22.028+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.86"
time=2024-07-08T10:38:22.782+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.87"
time=2024-07-08T10:38:23.788+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.88"
time=2024-07-08T10:38:24.291+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.89"
time=2024-07-08T10:38:24.793+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.90"
time=2024-07-08T10:38:25.799+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.91"
time=2024-07-08T10:38:26.804+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.92"
time=2024-07-08T10:38:27.307+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.93"
time=2024-07-08T10:38:27.810+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.94"
time=2024-07-08T10:38:28.564+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.95"
time=2024-07-08T10:38:29.569+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.96"
time=2024-07-08T10:38:30.323+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.97"
time=2024-07-08T10:38:30.825+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.98"
time=2024-07-08T10:38:31.580+02:00 level=DEBUG source=server.go:605 msg="model load progress 0.99"
time=2024-07-08T10:38:32.334+02:00 level=DEBUG source=server.go:605 msg="model load progress 1.00"
llama_new_context_with_model: n_ctx      = 2048
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
time=2024-07-08T10:38:32.586+02:00 level=DEBUG source=server.go:608 msg="model load completed, waiting for server to become available" status="llm server loading model"
llama_kv_cache_init:  CUDA_Host KV buffer size =   536.00 MiB
llama_kv_cache_init:      CUDA0 KV buffer size =   104.00 MiB
llama_new_context_with_model: KV self size  =  640.00 MiB, K (f16):  320.00 MiB, V (f16):  320.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.61 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =  1287.53 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    20.01 MiB
llama_new_context_with_model: graph nodes  = 2806
llama_new_context_with_model: graph splits = 942
DEBUG [initialize] initializing slots | n_slots=1 tid="140230433636352" timestamp=1720427917
DEBUG [initialize] new slot | n_ctx_slot=2048 slot_id=0 tid="140230433636352" timestamp=1720427917
INFO [main] model loaded | tid="140230433636352" timestamp=1720427917
DEBUG [update_slots] all slots are idle and system prompt is empty, clear the KV cache | tid="140230433636352" timestamp=1720427917
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=0 tid="140230433636352" timestamp=1720427917
time=2024-07-08T10:38:37.860+02:00 level=INFO source=server.go:599 msg="llama runner started in 86.44 seconds"
time=2024-07-08T10:38:37.860+02:00 level=DEBUG source=sched.go:395 msg="finished setting up runner" model=/home/test/.ollama/models/blobs/sha256-59e062dadfebe1e1b7dae3aa2ed6f60190c03e9738451e6963d74a5aa6a464a9
time=2024-07-08T10:38:37.861+02:00 level=DEBUG source=prompt.go:172 msg="prompt now fits in context window" required=1 window=2048
[GIN] 2024/07/08 - 10:38:37 | 200 |         1m26s |       127.0.0.1 | POST     "/api/chat"
time=2024-07-08T10:38:37.861+02:00 level=DEBUG source=sched.go:399 msg="context for request finished"
time=2024-07-08T10:38:37.861+02:00 level=DEBUG source=sched.go:281 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/test/.ollama/models/blobs/sha256-59e062dadfebe1e1b7dae3aa2ed6f60190c03e9738451e6963d74a5aa6a464a9 duration=5m0s
time=2024-07-08T10:38:37.861+02:00 level=DEBUG source=sched.go:299 msg="after processing request finished event" modelPath=/home/test/.ollama/models/blobs/sha256-59e062dadfebe1e1b7dae3aa2ed6f60190c03e9738451e6963d74a5aa6a464a9 refCount=0
time=2024-07-08T10:38:55.104+02:00 level=DEBUG source=sched.go:507 msg="evaluating already loaded" model=/home/test/.ollama/models/blobs/sha256-59e062dadfebe1e1b7dae3aa2ed6f60190c03e9738451e6963d74a5aa6a464a9
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=1 tid="140230433636352" timestamp=1720427935
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=2 tid="140230433636352" timestamp=1720427935
DEBUG [log_server_request] request | method="POST" params={} path="/tokenize" remote_addr="127.0.0.1" remote_port=39854 status=200 tid="140229809762304" timestamp=1720427935
time=2024-07-08T10:38:55.204+02:00 level=DEBUG source=prompt.go:172 msg="prompt now fits in context window" required=104 window=2048
time=2024-07-08T10:38:55.204+02:00 level=DEBUG source=routes.go:1367 msg="chat handler" prompt="<|im_start|>user\nAntibiotics are a type of medication used to treat bacterial infections. They work by either killing the bacteria or preventing them from reproducing, allowing the body’s immune system to fight off the infection. Antibiotics are usually taken orally in the form of pills, capsules, or liquid solutions, or sometimes administered intravenously. They are not effective against viral infections, and using them inappropriately can lead to antibiotic resistance.\n\nExplain the above in one sentence:<|im_end|>\n<|im_start|>assistant\n" images=0
time=2024-07-08T10:38:55.204+02:00 level=DEBUG source=server.go:695 msg="setting token limit to 10x num_ctx" num_ctx=2048 num_predict=20480
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=3 tid="140230433636352" timestamp=1720427935
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=4 tid="140230433636352" timestamp=1720427935
DEBUG [update_slots] slot progression | ga_i=0 n_past=0 n_past_se=0 n_prompt_tokens_processed=102 slot_id=0 task_id=4 tid="140230433636352" timestamp=1720427935
DEBUG [update_slots] kv cache rm [p0, end) | p0=0 slot_id=0 task_id=4 tid="140230433636352" timestamp=1720427935
time=2024-07-08T10:39:18.221+02:00 level=DEBUG source=sched.go:348 msg="context for request finished"
time=2024-07-08T10:39:18.221+02:00 level=DEBUG source=sched.go:281 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/test/.ollama/models/blobs/sha256-59e062dadfebe1e1b7dae3aa2ed6f60190c03e9738451e6963d74a5aa6a464a9 duration=5m0s
time=2024-07-08T10:39:18.221+02:00 level=DEBUG source=sched.go:299 msg="after processing request finished event" modelPath=/home/test/.ollama/models/blobs/sha256-59e062dadfebe1e1b7dae3aa2ed6f60190c03e9738451e6963d74a5aa6a464a9 refCount=0
[GIN] 2024/07/08 - 10:39:18 | 200 | 23.118534615s |       127.0.0.1 | POST     "/api/chat"
DEBUG [log_server_request] request | method="POST" params={} path="/completion" remote_addr="127.0.0.1" remote_port=39854 status=200 tid="140229809762304" timestamp=1720427958
DEBUG [update_slots] slot released | n_cache_tokens=125 n_ctx=2048 n_past=124 n_system_tokens=0 slot_id=0 task_id=4 tid="140230433636352" timestamp=1720427959 truncated=false

mkshixfv #3

Same model, different variant.

quhf5bfb #4

Also experiencing it, #5641 (comment), with the context below, but at around >10k for qwen2.
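
If it really takes a large context to trigger, one quick way to force that is the num_ctx option on the request. A sketch only (the 16384 value and the placeholder prompt are illustrative, not taken from the linked comment):

# Force a larger context window for a single request via options.num_ctx.
curl -s http://127.0.0.1:11434/api/chat \
  -d '{"model": "qwen2:72b-instruct-q4_K_M", "stream": false, "options": {"num_ctx": 16384}, "messages": [{"role": "user", "content": "<paste a >10k-token prompt here>"}]}' \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)["message"]["content"])'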

a0x5cqrl #5

It works fine for a while and then suddenly starts producing gibberish. I've run into this with codegeex and qwen2.
