New memory estimates (see #11090 for more information) are now enabled automatically for all models running on the Ollama engine, improving both stability and performance through more accurate sizing and allocation. Models running on the llama engine will continue to use the original style of memory estimation.
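One way to see the effect of the new estimates is to check how much memory was actually allocated for a loaded model. The sketch below is illustrative only: it queries the server's documented `/api/ps` endpoint and assumes the default server address (`localhost:11434`).

```go
// Minimal sketch: list running models and report how much memory was
// allocated for each, using Ollama's /api/ps endpoint. Assumes the
// default server address; field names follow the public API docs.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type runningModel struct {
	Name     string `json:"name"`
	Size     int64  `json:"size"`      // total bytes allocated for the model
	SizeVRAM int64  `json:"size_vram"` // bytes resident in GPU memory
}

type psResponse struct {
	Models []runningModel `json:"models"`
}

func main() {
	resp, err := http.Get("http://localhost:11434/api/ps")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var ps psResponse
	if err := json.NewDecoder(resp.Body).Decode(&ps); err != nil {
		panic(err)
	}

	for _, m := range ps.Models {
		fmt.Printf("%s: %.1f GiB total, %.1f GiB in VRAM\n",
			m.Name,
			float64(m.Size)/(1<<30),
			float64(m.SizeVRAM)/(1<<30))
	}
}
```

The same information is available from the command line via `ollama ps`.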