If flash attention is enabled without KV cache quantization, we currently always get this warning: level=WARN source=server.go:226 msg="kv cache type not supported by model" type=""
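A minimal sketch of the kind of guard that avoids the spurious warning; the function and parameter names below are hypothetical, not ollama's actual code. The idea is to treat an empty cache type as "no quantization requested, use the default" instead of reporting it as an unsupported type:

```go
package main

import "log/slog"

// kvCacheTypeForModel maps a requested KV cache type string to the value
// passed to the backend. Hypothetical names for illustration only: an empty
// type means no quantization was requested (e.g. flash attention alone), so
// we fall back to the default silently rather than warning.
func kvCacheTypeForModel(requested string, supportsQuant bool) string {
	if requested == "" || requested == "f16" {
		// Nothing explicit was requested: use the default, no warning.
		return "f16"
	}
	if !supportsQuant {
		// Only warn when the user explicitly asked for a quantized cache
		// that the model cannot use.
		slog.Warn("kv cache type not supported by model", "type", requested)
		return "f16"
	}
	return requested
}

func main() {
	// Flash attention enabled, no quantization requested: no warning.
	_ = kvCacheTypeForModel("", true)
	// Explicit q8_0 on a model that cannot use it: warning is emitted.
	_ = kvCacheTypeForModel("q8_0", false)
}
```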