On the llama runner, after the recent GGML bump, a new log line incorrectly reports 0 MiB free because of our patch that removes memory reporting from the device props. This adjusts the llama.cpp code to fetch the actual free memory of the active device instead.
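
A minimal sketch of the kind of adjustment this describes, assuming the ggml backend device API (`ggml_backend_dev_memory`) is used to query the active device directly rather than relying on the memory fields cached in the device props; the helper name `log_free_memory` is hypothetical:

```cpp
// Hypothetical sketch: query the active device's free memory directly via the
// ggml backend API instead of reading the (patched-out) props memory fields.
#include <cstddef>
#include <cstdio>

#include "ggml-backend.h"

static void log_free_memory(ggml_backend_dev_t dev) {
    size_t free_mem  = 0;
    size_t total_mem = 0;

    // Ask the device itself for its current memory state; this reflects the
    // real free memory even when props no longer carry it.
    ggml_backend_dev_memory(dev, &free_mem, &total_mem);

    fprintf(stderr, "device %s: %zu MiB free of %zu MiB total\n",
            ggml_backend_dev_name(dev),
            free_mem  / (1024 * 1024),
            total_mem / (1024 * 1024));
}
```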