The current splitDim function only operates on tensors that are split evenly, which isn't always the case (e.g., a QKV tensor). This change allows the function to be used for arbitrary splits.
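For illustration, a minimal sketch of the idea with plain slices standing in for tensors (the function name and signature here are hypothetical, not the real splitDim):

```go
package main

import "fmt"

// splitBySizes sketches an "arbitrary split": instead of dividing a dimension
// into n equal parts, take an explicit list of sizes so a fused QKV tensor
// with unequal Q/K/V widths can still be split.
func splitBySizes(data []float32, sizes ...int) ([][]float32, error) {
	total := 0
	for _, s := range sizes {
		total += s
	}
	if total != len(data) {
		return nil, fmt.Errorf("split sizes sum to %d, want %d", total, len(data))
	}

	parts := make([][]float32, 0, len(sizes))
	offset := 0
	for _, s := range sizes {
		parts = append(parts, data[offset:offset+s])
		offset += s
	}
	return parts, nil
}

func main() {
	// e.g. a fused QKV row where K and V are narrower than Q (as in GQA)
	qkv := make([]float32, 4096+1024+1024)
	parts, err := splitBySizes(qkv, 4096, 1024, 1024)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(parts[0]), len(parts[1]), len(parts[2])) // 4096 1024 1024
}
```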
This enables matching the devices and information reported by the backend with system management libraries such as NVML, to get accurate free memory reporting.
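As a rough illustration of the kind of lookup this enables, here is a sketch that assumes the NVIDIA go-nvml bindings and that the backend reports a device UUID; the actual matching logic in this change may differ:

```go
package main

import (
	"fmt"

	"github.com/NVIDIA/go-nvml/pkg/nvml"
)

// freeVRAMByUUID looks up free memory for a GPU by its UUID, the kind of
// identifier a backend can report and NVML can resolve to a device handle.
func freeVRAMByUUID(uuid string) (uint64, error) {
	if ret := nvml.Init(); ret != nvml.SUCCESS {
		return 0, fmt.Errorf("nvml init: %v", ret)
	}
	defer nvml.Shutdown()

	dev, ret := nvml.DeviceGetHandleByUUID(uuid)
	if ret != nvml.SUCCESS {
		return 0, fmt.Errorf("device %s: %v", uuid, ret)
	}
	mem, ret := dev.GetMemoryInfo()
	if ret != nvml.SUCCESS {
		return 0, fmt.Errorf("memory info: %v", ret)
	}
	return mem.Free, nil
}
```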
"POST predict" basically means that the runner has crashed, which
can have many reasons. However, many people think this is a specific
error and either report only this message or group together unrelated
bugs. This replaces it with a more friendly and helpful message.
- Both `/api/generate` and `/api/chat` now accept a `"think"`
  option that allows specifying whether thinking mode should be enabled
  (see the request sketch after this list)
- Templates get passed this new option so, e.g., qwen3's template can
put `/think` or `/no_think` in the system prompt depending on the
value of the setting
- Models' thinking support is inferred by inspecting model templates.
  The prefix and suffix the parser uses to identify thinking content are
  also automatically inferred from templates
- Thinking control & parsing is opt-in via the API to prevent breaking
existing API consumers. If the `"think"` option is not specified, the
behavior is unchanged from previous versions of ollama
- Add parsing for thinking blocks in both streaming and non-streaming
  modes, in both `/generate` and `/chat`
- Update the CLI to make use of these changes. Users can pass `--think`
or `--think=false` to control thinking, or during an interactive
session they can use the commands `/set think` or `/set nothink`
- A `--hidethinking` option has also been added to the CLI. This makes
it easy to use thinking in scripting scenarios like
`ollama run qwen3 --think --hidethinking "my question here"` where you
just want to see the answer but still want the benefits of thinking
models
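For example, a non-streaming `/api/chat` request with thinking enabled might look like the following sketch, using only the standard library; the `thinking` response field is read as described above but should be treated as illustrative:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Ask /api/chat to run with thinking enabled. Without "think", the
	// behavior is unchanged from previous versions.
	body, _ := json.Marshal(map[string]any{
		"model": "qwen3",
		"messages": []map[string]string{
			{"role": "user", "content": "Why is the sky blue?"},
		},
		"think":  true,
		"stream": false,
	})

	resp, err := http.Post("http://localhost:11434/api/chat", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out struct {
		Message struct {
			Thinking string `json:"thinking"` // parsed thinking block
			Content  string `json:"content"`  // the answer itself
		} `json:"message"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println("thinking:", out.Message.Thinking)
	fmt.Println("answer:", out.Message.Content)
}
```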
Computing an attention mask for a large context and maximum batch size is
expensive (over 100ms). Models like Gemma3 that have multiple types
of caches and custom attention masks need to do this 4 times, so it
adds approximately 500ms to startup time when using a 128k context.
When we are reserving the worst-case graph, we don't need the mask,
only its shape, so we can skip computing it.
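A sketch of the idea with hypothetical names (the runner's real mask code differs): when only reserving, return a correctly shaped but unfilled mask, since its values are never read.

```go
package main

import (
	"fmt"
	"math"
)

// buildCausalMask returns a [queries][keys] mask. When reserve is true we are
// only sizing the worst-case graph, so skip the expensive fill and return the
// shape alone.
func buildCausalMask(queries, keys int, reserve bool) [][]float32 {
	mask := make([][]float32, queries)
	for i := range mask {
		mask[i] = make([]float32, keys)
	}
	if reserve {
		return mask // shape only; values are never read during reservation
	}

	negInf := float32(math.Inf(-1))
	for i := range mask {
		for j := i + 1; j < keys; j++ {
			mask[i][j] = negInf // mask out future positions
		}
	}
	return mask
}

func main() {
	m := buildCausalMask(4, 8, true)
	fmt.Println(len(m), len(m[0])) // 4 8
}
```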
This commit updates the README to include macLlama within the community integrations section.
macLlama is a native macOS application built for lightweight and efficient LLM interaction. Key features include:
* **Lightweight & Native:** Designed to be resource-friendly and perform optimally on macOS.
* **Chat-like Interface:** Provides a user-friendly, conversational interface.
* **Multiple Window Support:** Allows users to manage multiple conversations simultaneously.
The primary goal of macLlama is to offer a simple and easy-to-run LLM experience on macOS.
FromFloatSlice and FromIntSlice return an error if the shape doesn't
match the passed data or if memory can't be allocated. Since these
are inputs, the memory being allocated is system memory rather than VRAM.
In many cases, the caller can't really handle the error and panics.
Empty and Zeros directly panic if they can't allocate memory.
This makes things consistent by panicking in the first two cases as well,
removing a fair amount of error handling code. This is also consistent
with how Go typically handles these situations.
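A hedged before-and-after sketch of the calling convention (stub types, not the package's exact signatures):

```go
package main

import "fmt"

// Tensor is a stub standing in for the real tensor type.
type Tensor struct {
	data  []float32
	shape []int
}

// FromFloatSlice sketches the new convention: panic on a shape mismatch or a
// failed system allocation instead of returning an error, matching Empty and
// Zeros.
func FromFloatSlice(data []float32, shape ...int) Tensor {
	n := 1
	for _, d := range shape {
		n *= d
	}
	if n != len(data) {
		panic(fmt.Sprintf("FromFloatSlice: shape %v wants %d elements, got %d", shape, n, len(data)))
	}
	return Tensor{data: data, shape: shape}
}

func main() {
	// Previously: t, err := FromFloatSlice(...); if err != nil { panic(err) }
	t := FromFloatSlice(make([]float32, 6), 2, 3)
	fmt.Println(t.shape)
}
```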
This provides granular information about the backend memory allocations
required by the runner:
- Per backend
- Per layer
- Weights, cache and graph
- Allocation status
This can be used for debugging and validating memory estimates.
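For illustration, the report could be shaped roughly like this (hypothetical type and field names):

```go
// Allocation describes one allocation: how many bytes were requested and
// whether the request succeeded.
type Allocation struct {
	Size      uint64 // bytes requested
	Allocated bool   // whether the allocation succeeded
}

// LayerMemory breaks a single layer down into weights, cache and graph.
type LayerMemory struct {
	Weights Allocation
	Cache   Allocation
	Graph   Allocation
}

// BackendMemory holds the per-layer breakdown for one backend.
type BackendMemory struct {
	Name   string        // e.g. "CUDA0" or "CPU"
	Layers []LayerMemory // indexed by layer
}
```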
GGML has a function to report the allocated size of a backend buffer.
However, this returns 0 if we tried to allocate a buffer and it failed.
For memory management purposes, it's important to know how much we were
trying to allocate. This extends the API to report the attempted size for
each buffer and whether the allocation succeeded.
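One way to keep that information on the Go side is to record the requested size where the allocation is attempted, rather than querying the buffer afterwards; a small sketch with stand-in names:

```go
// bufferAlloc records the attempted size at the call site, since asking the
// backend for the buffer's allocated size returns 0 once allocation fails.
type bufferAlloc struct {
	Requested uint64 // bytes we attempted to allocate
	OK        bool   // whether the backend allocation succeeded
}

// tryAlloc wraps the backend call (alloc stands in for the real GGML
// allocation) and preserves the requested size regardless of the outcome.
func tryAlloc(alloc func(size uint64) bool, size uint64) bufferAlloc {
	return bufferAlloc{Requested: size, OK: alloc(size)}
}
```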
When the same model is being reloaded rapidly and client connections are
canceled before the model finishes loading, a queued unload event could
leak runners by deleting a different runner from the loaded list.
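A sketch of the guard with hypothetical scheduler types: before acting on a queued unload, check that the loaded entry still refers to the runner the event was queued for.

```go
package main

import "sync"

type runner struct{ /* model, process handle, ... */ }

func (r *runner) stop() { /* release resources */ }

type scheduler struct {
	mu     sync.Mutex
	loaded map[string]*runner
}

// unload only deletes the entry if it still refers to the runner this event
// was created for. A stale unload from a canceled load must not evict a newer
// runner that has since taken over the same model key.
func (s *scheduler) unload(modelKey string, r *runner) {
	s.mu.Lock()
	defer s.mu.Unlock()

	if cur, ok := s.loaded[modelKey]; ok && cur == r {
		delete(s.loaded, modelKey)
		cur.stop()
	}
}

func main() {
	s := &scheduler{loaded: map[string]*runner{}}
	older, newer := &runner{}, &runner{}
	s.loaded["llama3"] = newer
	s.unload("llama3", older) // stale event: no-op, the newer runner stays loaded
}
```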
* fix mllama convert
- transform attn_gate and ffn_gate
- swap attention heads for vision models
* fix mllama: the MLP gate was applied in the wrong place
Fall back to alternative quantization types when a tensor's dimensions aren't divisible by the block size required by the originally requested quantization type. If the retried quantization types also fail, the system ultimately falls back to F16 (half-precision floating point), which has a block size of 1 and can handle any tensor dimension.
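Under the assumption that a quantization type's block size must evenly divide a tensor's row length, the fallback can be sketched like this (the helper and candidate order are illustrative; the block sizes for Q4_K, Q4_0 and F16 match GGML):

```go
package main

import "fmt"

// blockSizes lists block sizes for a few GGML-style quantization types.
// F16 has a block size of 1, so it can represent any tensor shape and serves
// as the final fallback.
var blockSizes = map[string]uint64{
	"Q4_K": 256,
	"Q4_0": 32,
	"F16":  1,
}

// pickQuantization returns the first candidate whose block size evenly
// divides the tensor's row length, falling back to F16.
func pickQuantization(rowLen uint64, candidates ...string) string {
	for _, t := range candidates {
		if bs, ok := blockSizes[t]; ok && rowLen%bs == 0 {
			return t
		}
	}
	return "F16" // block size 1: always valid
}

func main() {
	// 1696 is divisible by 32 but not by 256, so Q4_K is skipped.
	fmt.Println(pickQuantization(1696, "Q4_K", "Q4_0")) // Q4_0
	// 100 is divisible by neither, so we end up at F16.
	fmt.Println(pickQuantization(100, "Q4_K", "Q4_0")) // F16
}
```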
* remove support for multiple ggufs in a single file
This was an attempt to make it easier to import multimodal models into
ollama. It was rarely used and error-prone, so remove it.
* fix: create fused model from blob
Setting SameBatch on the vision start token is problematic because that
token is shared with other inputs that also use images. This causes the
input to be cached, so the runner never sees SameBatch, and the value may
also be incorrect since it could refer to a different image.
Assigning SameBatch to the input tokens corresponding to the image
resolves this by ensuring it is attached to the right inputs.
Not setting SameBatch correctly may cause panics during inference, since
images are no longer guaranteed to be in the same batch.
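A heavily hedged sketch of the difference (the Input struct, its fields, and the SameBatch semantics are assumptions for illustration, not the runner's actual types):

```go
// Input is a stand-in for the runner's per-input record; assume SameBatch
// means "the next N inputs must be evaluated in the same batch".
type Input struct {
	Token     int32
	Image     any // non-nil for image embedding inputs
	SameBatch int
}

// Problematic: tagging the shared vision start token. That token is identical
// across prompts that use images, so its entry can be served from the cache
// and the runner never sees (or sees the wrong) SameBatch value.
//
// Fix per this change: attach SameBatch to the image's own input tokens, so
// the value always travels with the inputs it actually describes.
func markImageInputs(inputs []Input, imageStart, imageLen int) {
	inputs[imageStart].SameBatch = imageLen
}
```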