Replies: 2 comments
-
This is no longer an issue with the latest update.
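For anyone landing here later, a minimal update-and-rebuild sketch; `LLAMA_CUBLAS=1` is an assumption for the CUDA build, since the comment only says "latest update":

```sh
# Pull the latest llama.cpp and rebuild from scratch.
cd ~/github/llama.cpp
git pull
make clean
# LLAMA_CUBLAS=1 enables the CUDA backend (assumed; not shown in the thread).
LLAMA_CUBLAS=1 make -j
```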
-
This issue occurred again after an NVIDIA driver auto-update. The solution the second time was to disable ccache.
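The comment doesn't say how ccache was disabled; two standard ccache mechanisms would achieve it (again assuming `LLAMA_CUBLAS=1` for the CUDA build):

```sh
# Option 1: wipe the cache so stale objects built against the old
# driver/toolkit cannot be reused (-C is ccache's --clear).
ccache -C

# Option 2: bypass ccache for one clean rebuild using its standard
# environment variable.
make clean
CCACHE_DISABLE=1 LLAMA_CUBLAS=1 make -j
```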
-
ggml-cuda.cu:3211: ERROR: CUDA kernel vec_dot_q5_K_q8_1_impl_vmmq has no device code compatible with CUDA arch 520. ggml-cuda.cu was compiled for: 520
This worked yesterday. I did a git pull, make clean, and make, and now I get this error.
GPU: NVIDIA RTX 3090
System: Debian testing
Command line:
~/github/llama.cpp/main -m ~/models/miqu-1-70b.q5_K_M.gguf -c 0 -i --color -t 16 --n-gpu-layers 24 --temp 0.8 -p "bob"
I reverted the previous two commits and the issue went away.
~/github/llama.cpp$ git reset --hard HEAD~2
HEAD is now at 334f76f sync : ggml
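For what it's worth, an RTX 3090 reports compute capability 8.6, while arch 520 corresponds to 5.2, so the compiled architecture list and the runtime kernel check appear to disagree. A hedged sketch for verifying what the GPU reports and rebuilding for that architecture explicitly; the CMake route and option names are assumptions, as the post only shows a plain make:

```sh
# Check the compute capability the driver reports (an RTX 3090 should
# print 8.6; requires a reasonably recent nvidia-smi).
nvidia-smi --query-gpu=compute_cap --format=csv,noheader

# Rebuild targeting that architecture explicitly. CMAKE_CUDA_ARCHITECTURES
# is standard CMake; LLAMA_CUBLAS was llama.cpp's CUDA switch at the time.
cmake -B build -DLLAMA_CUBLAS=ON -DCMAKE_CUDA_ARCHITECTURES=86
cmake --build build --config Release
```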