Mirror of https://github.com/ollama/ollama.git, synced 2025-03-27 02:01:56 +01:00.
Currently, Rows is called as the last step in a model computation to select the values for the output tokens. However, if we move it earlier in the process, we can trim out computations whose results are never used. This mirrors how models are defined in llama.cpp. Changing the model definition in this way improves token generation performance by approximately 8%.
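The idea can be illustrated with a minimal NumPy sketch (not the actual Go model code; all shapes and names here are hypothetical): selecting only the rows you need before the final output projection produces the same logits as projecting every token and discarding most of the result, while doing a fraction of the matmul work.

```python
import numpy as np

# Hypothetical dimensions: a short sequence, small hidden size, small vocab.
rng = np.random.default_rng(0)
seq_len, hidden, vocab = 8, 16, 32
hidden_states = rng.standard_normal((seq_len, hidden))   # per-token hidden states
output_weights = rng.standard_normal((hidden, vocab))    # output projection

# Late row selection: project every token, then keep only the last row.
# The matmul costs seq_len * hidden * vocab multiply-adds.
logits_all = hidden_states @ output_weights              # (seq_len, vocab)
late = logits_all[-1]

# Early row selection: keep only the row we need, then project it.
# The matmul now costs hidden * vocab multiply-adds -- 1/seq_len of the work.
early = hidden_states[-1] @ output_weights               # (vocab,)

assert np.allclose(late, early)
```

During token generation only the final token's logits are needed, so pulling the row selection ahead of the output projection (and any other per-token ops after it) skips work for the other seq_len - 1 positions entirely, which is consistent with the reported ~8% speedup.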