* implement the vulkan C backend
* add support in gpu.go
* add support in gen_linux.sh
* it builds
* fix segfault
* fix compilation
* fix free memory monitor
* fix total memory monitor
* update gpu.go
* fix build
* fix check_perfmon len
* remove cap_get_bound check
* fix vulkan handle releasing
* fix build on Fedora 40
* fix vulkan on windows
* make amdgpu work on arm architecture with vulkan
* add x86_64 lines in vulkanGlobs and capLinuxGlobs
* add aarch64 lines in vulkanGlobs and capLinuxGlobs
* Fix variable name
* Add vulkan build patch from @jmorganca
* Sync vendored ggml to add Vulkan support
* Updated dockerfile
https://github.com/whyvl/ollama-vulkan/issues/7#issuecomment-2660836871
Signed-off-by: Vadim Grinco <vadim@grinco.eu>
* Installing rocm library
Signed-off-by: Vadim Grinco <vadim@grinco.eu>
* This version works well
built based on this: https://github.com/whyvl/ollama-vulkan/issues/7#issuecomment-2660836871
Signed-off-by: Vadim Grinco <vadim@grinco.eu>
* Applied 00-fix-vulkan-building.patch
Work done by McBane87 here: https://github.com/whyvl/ollama-vulkan/issues/7#issuecomment-2660836871
Signed-off-by: Vadim Grinco <vadim@grinco.eu>
* Fixed the "detached head" issues
Signed-off-by: Vadim Grinco <vadim@grinco.eu>
* Merged in the right direction
Signed-off-by: Vadim Grinco <vadim@grinco.eu>
* Merging the latest stable (#2)
* Applied 00-fix-vulkan-building.patch
* Implemented vulkan backend based on the work done by whyvl, Dts0, McBane87 and others
Tested on AMD Ryzen 7 8845HS w/ Radeon 780M Graphics with ROCm disabled
```
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
time=2025-03-11T13:00:40.793Z level=INFO source=gpu.go:199 msg="vulkan: load libvulkan and libcap ok"
time=2025-03-11T13:00:40.877Z level=INFO source=gpu.go:421 msg="error looking up vulkan GPU memory" error="device is a CPU"
time=2025-03-11T13:00:40.878Z level=WARN source=amd_linux.go:443 msg="amdgpu detected, but no compatible rocm library found. Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
time=2025-03-11T13:00:40.878Z level=WARN source=amd_linux.go:348 msg="unable to verify rocm library: no suitable rocm found, falling back to CPU"
time=2025-03-11T13:00:40.879Z level=INFO source=types.go:137 msg="inference compute" id=0 library=vulkan variant="" compute=1.3 driver=1.3 name="AMD Radeon Graphics (RADV GFX1103_R1)" total="15.6 GiB" available="15.6 GiB"
```
```
# ollama run phi4:14b
>>> /set verbose
Set 'verbose' mode.
>>> how's it going?
Hello! I'm here to help you with any questions or tasks you have. How can I assist you today? 😊
total duration: 3.341959745s
load duration: 18.165612ms
prompt eval count: 15 token(s)
prompt eval duration: 475ms
prompt eval rate: 31.58 tokens/s
eval count: 26 token(s)
eval duration: 2.846s
eval rate: 9.14 tokens/s
>>>
```
* This is no longer needed
Signed-off-by: Vadim Grinco <vadim@grinco.eu>
* Fixes SIGSEGV: segmentation violation running gemma3 models on ollama 0.6.0 #21
Patch provided by McBane87 on https://github.com/whyvl/ollama-vulkan/issues/21
Signed-off-by: Vadim Grinco <vadim@grinco.eu>
* Applied 04-disable-mmap-vulkan.patch
From: https://github.com/whyvl/ollama-vulkan/issues/7#issuecomment-2660836871
Signed-off-by: Vadim Grinco <vadim@grinco.eu>
* Pulled new upstream code for ggml-vulkan backend
Signed-off-by: Vadim Grinco <vadim@grinco.eu>
* Merged latest ollama 0.6.2 and nasrally's Flash Attention patches (#5)
* readme: add Ellama to list of community integrations (#9800)
* readme: add screenpipe to community integrations (#9786)
* Add support for ROCm gfx1151 (#9773)
* conditionally enable parallel pipelines
* sample: make mutations in transforms explicit (#9743)
* updated minP to use early exit making use of sorted tokens
* ml/backend/ggml: allocate memory with malloc when loading model (#9822)
* runner: remove cache prompt flag from ollama runner (#9826)
We do not need to bypass the prompt caching in the ollama runner yet, as
only embedding models needed to bypass the prompt caching. When embedding
models are implemented they can skip initializing this cache completely.
* ollamarunner: Check for minBatch of context space when shifting
Models can specify that a group of inputs needs to be handled as a single
batch. However, context shifting didn't respect this and could trigger
a break anyway. In this case, we should instead trigger a context
shift earlier so that it occurs before the grouped batch.
Note that there are still some corner cases:
- A long prompt that exceeds the context window can get truncated
in the middle of an image. With the current models, this will
result in the model not recognizing the image at all, which is
pretty much the expected result with truncation.
- The context window is set less than the minimum batch size. The
only solution to this is to refuse to load the model with these
settings. However, this can never occur with current models and
default settings.
Since users are unlikely to run into these scenarios, fixing them is
left as a follow up.
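A minimal sketch of the shift logic described above; the `input` type and `shiftPoint` helper are illustrative names, not the actual ollamarunner API:
```go
package main

// input stands in for the runner's per-token input; sameBatchAsPrev marks
// tokens that must be decoded in the same batch as the one before them
// (e.g. the tokens making up a single image). Both names are hypothetical.
type input struct {
	token           int32
	sameBatchAsPrev bool
}

// shiftPoint picks how many tokens to discard when the context fills up,
// walking the cut earlier so it never lands inside a grouped batch.
func shiftPoint(inputs []input, want int) int {
	if want >= len(inputs) {
		want = len(inputs) - 1
	}
	for i := want; i > 0; i-- {
		if !inputs[i].sameBatchAsPrev {
			return i
		}
	}
	return 0
}
```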
* Applied latest patches from McBane87
See this for details: https://github.com/whyvl/ollama-vulkan/issues/7#issuecomment-2708820861
Signed-off-by: Vadim Grinco <vadim@grinco.eu>
* Add ability to enable flash attention on vulkan (#4)
* discover: add flash attention handling for vulkan
* envconfig: fix typo in config.go
As part of the process some code was refactored and I added a new field
FlashAttention to GpuInfo since the previous solution didn't allow for a
granular check via vulkan extensions. As a side effect, this now allows
for granular per-device FA support checking in other places
---------
Signed-off-by: Vadim Grinco <vadim@grinco.eu>
Co-authored-by: zeo <108888572+zeozeozeo@users.noreply.github.com>
Co-authored-by: Louis Beaumont <louis.beaumont@gmail.com>
Co-authored-by: Daniel Hiltgen <dhiltgen@users.noreply.github.com>
Co-authored-by: Michael Yang <mxyng@pm.me>
Co-authored-by: Parth Sareen <parth.sareen@ollama.com>
Co-authored-by: Jeffrey Morgan <jmorganca@gmail.com>
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
Co-authored-by: Jesse Gross <jesse@ollama.com>
Co-authored-by: Nikita <50599445+nasrally@users.noreply.github.com>
* Revert Readme changes
* Revert
* Revert changes in amd_linux.go
* Revert changes in amd_linux.go
* Remove flashattention setting gpu.go
* Revert whitespace changes in gpu.go
* Revert changes in transforms_test.go
* Revert changes in runner.go
* Revert changes in Makefile.sync
* Revert some unintented changes in Dockerfile
* Revert vulkan copy changes in Dockerfile
* Update Vulkan Code to de4c07f93783a1a96456a44dc16b9db538ee1618
* Fixed duplicate sync in ggml.go
* Revert changes in ggml.go
* Revert changes in ggml.go
* enable flash attention on vulkan
* revert remove parenthesis
* fixed flash attention logic enabling
* vk_check_flash_attention 0 means supported
* Update gpu.go
* Add vulkan to Windows Build script
* Remove commented out code
* Enable Vulkan Flash attention in FlashAttentionSupported
* Fix logging
* Update Vulkan backend to e54d41befcc1575f4c898c5ff4ef43970cead75f
* Removed libcap related code
libcap is not directly related to Vulkan and should be added in its own PR. It adds additional library dependencies for building and also requires users to run setcap or run ollama as root, which is not ideal for easy use.
* Fix Unit Test (Add Vulkan Library)
* Add vulkan to TestHomogeneousGPUs
Test
* vulkan: get GPU ID (ollama v0.11.5)
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
* disable mmap for vulkan
* Reduce changes: remove TestHomogeneousGPUs (doesn't exist on master)
* Update vulkan version to the version used in llama.cpp
* rename gpu patch to correct number
* added Vulkan API to get correct Device UUID
current UUID from pipelineCacheUUID does not match CUDA
* Fix GPU ID Patch
* Remove Code not in llama.cpp
* modified UUID code inside ggml
* Fix Patch
* Copied minimal definition from vulkan header
* Fix compile error in Mac
Metal is preferred so we're disabling Vulkan for now
* Removed unused code
Fix linter error in CI
* Fix patches apply
* fixing lint error
* Removed unneeded function call
Somehow removing this call fixed the crashing when Vulkan header was removed
* added missing NL
* Fixed missing members in Vulkan header
also added zero clear for some structs
* Fixed wrong structure ID
* Fixed Vulkan header
More aligned with official header definition now
* build vulkan as separate function
* Vulkan on Windows Test
* temporarily comment out gate to run windows task
* temporarily use windows-latest for build
* Commenting out other presets to build vulkan
* reenable cpu
* commenting out error action stop
* temporarily commenting out rocm
* set vulkan path
* comment out cuda for faster turnaround
* correct vulkan install
* correct vulkan silent install
* fixed install command
* revert debugging changes (vulkan builds on windows)
* revert windows-latest
* trying to build vulkan for linux
* temporarily disable cuda and rocm
* try again linux build
* fix version
* trying to fix
* trying again
* trying again
* fix version
* fixed vulkan-sdk name
* try again
* trying again
* try without version number
* try again
* add some more extra
* trying to use version 1.4.313
* revert debugging changes
* Filter out already supported gpus
* revert debug code
* Use runners for GPU discovery
This revamps how we discover GPUs in the system by leveraging the Ollama
runner. This should eliminate inconsistency between our GPU discovery and the
runner's capabilities at runtime, particularly for cases where we try to filter
out unsupported GPUs. Now the runner does that implicitly based on the actual
device list. In some cases free VRAM reporting can be unreliable which can
lead to scheduling mistakes, so this also includes a patch to leverage more
reliable VRAM reporting libraries if available.
Automatic workarounds have been removed as only one GPU leveraged this, which
is now documented. This GPU will soon fall off the support matrix with the next
ROCm bump.
Additional cleanup of the scheduler and discovery packages can be done in the
future once we have switched on the new memory management code, and removed
support for the llama runner.
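A hypothetical sketch of the flow: the server starts a runner and trusts the device list it reports, instead of probing vendor libraries itself. Type and method names are illustrative:
```go
package discover

import (
	"context"
	"fmt"
)

// DeviceInfo and Runner are illustrative; the real types live in the
// discovery and runner packages.
type DeviceInfo struct {
	Library   string // "cuda", "rocm", "vulkan", ...
	ID        string
	TotalVRAM uint64
	FreeVRAM  uint64 // ideally from a more reliable vendor library when available
}

type Runner interface {
	// Devices returns the devices the runner can actually use at runtime,
	// so unsupported GPUs are filtered out implicitly.
	Devices(ctx context.Context) ([]DeviceInfo, error)
}

func discover(ctx context.Context, r Runner) ([]DeviceInfo, error) {
	devices, err := r.Devices(ctx)
	if err != nil {
		return nil, fmt.Errorf("bootstrapping runner for discovery: %w", err)
	}
	return devices, nil
}
```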
* timing info for runner
* WIP - wire up Vulkan with the new engine based discovery
Not a complete implementation - free VRAM is better, but not accurate on
windows
* fix - trust the library paths from discovery when starting runner
* fix index bug
* fix vulkan ids to be underlying
* fix - give bootstrapping more time on slow systems
* Test if Vulkan device is supported
* vk_check_flash_attention is not needed (coopmat2, coopmat and scalar implementations exist)
* Handle GGML_VK_VISIBLE_DEVICES
* ask for supported first
* win: fix CPU query buffer handling
Try in a short loop until we get the size right.
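Roughly the pattern, with `query` standing in for a Windows call (such as GetLogicalProcessorInformationEx) that reports the required buffer size, which can change between calls; the sentinel error is a stand-in for the real syscall wrapper's ERROR_INSUFFICIENT_BUFFER:
```go
package winutil

import "errors"

// errInsufficientBuffer stands in for the Windows ERROR_INSUFFICIENT_BUFFER
// result surfaced by the real syscall wrapper.
var errInsufficientBuffer = errors.New("insufficient buffer")

func readCPUInfo(query func(buf []byte, size *uint32) error) ([]byte, error) {
	var size uint32
	for i := 0; i < 3; i++ { // a short, bounded retry loop
		buf := make([]byte, size)
		err := query(buf, &size) // on failure, size is updated to the needed length
		if err == nil {
			return buf[:size], nil
		}
		if !errors.Is(err, errInsufficientBuffer) {
			return nil, err
		}
	}
	return nil, errors.New("cpu info buffer size kept changing")
}
```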
* test: harden integration tests for slow start
If the server takes a while to start up, block
tests from starting until it's online to avoid
setting large timeouts in individual test cases.
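A sketch of the gate, assuming a generic health URL; blocking once (e.g. in TestMain) keeps per-test timeouts small:
```go
package integration

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// waitForServer polls until the server answers or the context expires.
func waitForServer(ctx context.Context, url string) error {
	for {
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		if err != nil {
			return err
		}
		if resp, err := http.DefaultClient.Do(req); err == nil {
			resp.Body.Close()
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("server not ready: %w", ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}
```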
* gofumpt fix
* fix build
* merge fixes
* merge fixes
* fixed build
* merge fixes
* fixing build
* fixed build
* fixed formatting
* fixed build
* fix vulkan gpu id patch
* sync llama.cpp vulkan code
* update build windows script
* merge fixes
* fix format
* fixed vulkan casing
* handle igpu as gpu
* improve case
* print out unknown library
* return Vulkan for vulkan library
* Revert "return Vulkan for vulkan library"
This reverts commit 690461a12f.
* fixed patch number
* return Library Name
* remove debug code
* return integrated in vulkan backend
* Return pci Properties
* update patch
* directly get pci properties without parsing
* workaround for filtering devices; the correct way is to have a LibraryPosition parameter in the deviceInfo
* Revert "directly get pci properties without parsing"
This reverts commit 8e0624851f.
* Set FilteredID for Environment Filtering
* ROCm Library is named ROCm
* revert changes in patch
* Create 0028-vulkan-pci-and-memory.patch
* vulkan memory patch
* casing fix
* Add more pci properties
* Added better memory management
* Added better memory management
* fixed patch
* Fixed patch
* FilterID creation grouped by library
* filter out vulkan supported by other gpu
* fixing deviceid compare
* Vulkan Fix FA coopmat1 invalid array indexing
* Use everywhere the same Vulkan Version 1.4.321.1
* Remove unneeded patch
* vulkan update
* sync vulkan glsl files
* only use the FilteredID (numeric device number) for vulkan
* simplify code
---------
Signed-off-by: Vadim Grinco <vadim@grinco.eu>
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
Co-authored-by: pufferffish <github@bandersnatch.anonaddy.com>
Co-authored-by: KOISHI KOMEIJI FROM TOUHOU 11 <fuck>
Co-authored-by: DSLstandard <qgeneral35@gmail.com>
Co-authored-by: pufferffish <me@windtfw.com>
Co-authored-by: yeongbba <yeongmo.lee@logpresso.com>
Co-authored-by: tomaThomas <tomathomas@mailbox.org>
Co-authored-by: Antoine Viallon <antoine@lesviallon.fr>
Co-authored-by: Vadim Grinco <vadim@grinco.eu>
Co-authored-by: zeo <108888572+zeozeozeo@users.noreply.github.com>
Co-authored-by: Louis Beaumont <louis.beaumont@gmail.com>
Co-authored-by: Daniel Hiltgen <dhiltgen@users.noreply.github.com>
Co-authored-by: Michael Yang <mxyng@pm.me>
Co-authored-by: Parth Sareen <parth.sareen@ollama.com>
Co-authored-by: Jeffrey Morgan <jmorganca@gmail.com>
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
Co-authored-by: Jesse Gross <jesse@ollama.com>
Co-authored-by: Nikita <50599445+nasrally@users.noreply.github.com>
Co-authored-by: Masato Nakasaka <masato.nakasaka@intel.com>
Co-authored-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
* feat: Bump llama.cpp to df1b612
Branch: LlamaCPPBump-GraniteDocling
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix(mtmd): Correctly encode text chunks during mtmd tokenization
There can be text chunks that appear interspersed with the image embeddings
that contain template delimiter tokens for some models. These need to be
correctly translated to text tokens.
Branch: LlamaCPPBump-GraniteDocling
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* tests: Use MtmdChunk in image_test
Branch: LlamaCPPBump-GraniteDocling
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* style: Fix unnecessary conversion linting
Branch: LlamaCPPBump-GraniteDocling
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix(ggml): Revert changes to ggml_hip.cpp
These changes were done largely by our code assistant and are likely wrong
Branch: LlamaCPPBump-GraniteDocling
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix: Revert changes in mem_nvml.cpp
Branch: LlamaCPPBump-GraniteDocling
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat: Update sync point to 1deee0
This brings in several more optimization commits and model support for
EmbeddingGemma
Branch: LlamaCPPBump-GraniteDocling
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat: Update patches for 1deee0
Branch: LlamaCPPBump-GraniteDocling
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat: sync for bump to 1deee0
Branch: LlamaCPPBump-GraniteDocling
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix: Bad patch updates with errant `+`
Branch: LlamaCPPBump-GraniteDocling
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat: Bump llama.cpp/ggml to 7049736
Branch: LlamaCPPBump-GraniteDocling
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix: format-patches after latest bump
Branch: LlamaCPPBump-GraniteDocling
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
---------
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Ollama's recent update to the llama.cpp engine caused all models requiring a slice schema to stop displaying images: the value of numTokens isn't always the length of the sliced image embed, but rather the final length of the schema, so the image embed isn't correctly included during slice processing.
This changes the memory allocation strategy from upfront estimation to
tracking actual allocations done by the engine and reacting to that. The
goal is to avoid issues caused by both under-estimation (crashing) and
over-estimation (low performance due to under-utilized GPUs).
It is currently opt-in and can be enabled for models running on the
Ollama engine by setting OLLAMA_NEW_ESTIMATES=1. Behavior in other
cases is unchanged and will continue to use the existing estimates.
* TEMPORARY: Update the llama.cpp upstream to my fork's Granite Four branch
This will be redone once my branch is merged upstream in llama.cpp
* feat: Update all patches
There are a number that are no longer needed at all:
- 0003-embeddings: Embeddings entirely overhauled on master
- 0008-ensure-KV-cache-is-fully-defragmented: KV caching entirely
overhauled on master
- 0019-metal-add-mean-kernel-14267: Merged upstream
- 0020-CUDA-add-mean-operation-14313: Merged upstream
* feat: Sync llama.cpp and ggml
* fix: Update rsync-filter for all moved/new/removed files
* fix: Add files missing from sync
* fix: Update ggml rsync-filter for new ggml-cpu/arch subdirs
* fix: Add ggml files missing from sync
* fix: Narrow llama.cpp rsync-filter to not include mtmd main tool cpp files
* fix: Remove mtmd main cpp files
* fix: Add missing include in sampling_ext.cpp
* fix: Update llama.go to use mtmd instead of clip/llava
* fix: Add patch for mtmd_input_text
* chore: Ignore *.patched in the patch directory
* fix: Fix support for arch-specific ggml-cpu source files with new arrangement
In https://github.com/ggml-org/llama.cpp/pull/13892, all arch-specific
implementations were split out into a nested tree structure under
ggml-cpu/arch. This conflicts with standard CGO layout where all
arch-specific source files are expected to live in the same directory as
the parent go module and use suffixes based on GOOS and GOARCH. As such,
there were really two options for getting this to work:
1. Add a patch on top of the GGML sync to rearrange the files to match the
GO layout convention
2. Use CGO directives to conditionally include the nested source files in
the compilation units
This commit does (2) in order to minimize the set of changes needed on top
of the upstream file layout. To get this to work, there are two key things
needed:
1. In cpu.go, #cgo directives are added to explicitly set __${GOARCH}__ in
the preprocessor directives
2. In arch-impls.c|cpp, use an #ifdef | #elif defined | #endif chain to
explicitly include the .c|.cpp files for the given architecture from the
nested directory
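A minimal sketch of option (2); `arch-impls.c` is named in the description above, while the macro spellings and file paths here are illustrative. The C side is shown in the comment:
```go
package ggml

// Each #cgo line below is conditioned on GOARCH, so every build explicitly
// defines a macro naming its architecture. A single arch-impls.c can then
// select the nested ggml-cpu/arch sources with a chain like:
//
//	#if defined(__amd64__)
//	#include "ggml-cpu/arch/x86/quants.c"
//	#elif defined(__arm64__)
//	#include "ggml-cpu/arch/arm/quants.c"
//	#endif

// #cgo amd64 CFLAGS: -D__amd64__
// #cgo arm64 CFLAGS: -D__arm64__
import "C"
```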
* fix: Use mtmd_helper to correctly load the bitmap for the image
* fix: Apply patch for mtmd_text_input
* fix: Add missing stb to llama.cpp rsync-filter
* fix: Add sync'ed stb vendored header
* fix: Use c++17 and include vendor for go wrapper modules
* fix: Update patch 0015 for upstream implementation of uuid
* feat: Bump to the latest tip of the branch
* fix: Update patches for bump
* feat: Bump back to the central repo and point at the latest master
This includes granite 4 and a number of other model architectures!
* fix: Revert changes to ggml export GPU UUID patch
* fix: Add patch for GGML_VERSION and GGML_COMMIT constants
* feat: Sync all patched code
* build: Include cmake/common.cmake in ggml sync
* build: Add top-level include for GNUInstallDirs in CMakeLists.txt
This is used to populate CMAKE_INSTALL_BINDIR
* fix: Add a patch to avoid power throttling API on non-msvc windows builds
* fix: Sync patch changes for ggml-cpu.c
* feat: Bump llama.cpp to 4a4f42
This picks up support for Kimi K2 and PLaMO-2
* feat: Sync llama.cpp
* fix: Handle multi-chunk image encodings from mtmd
* fix: Re-number patches after merge with `main`
* feat: Bump to 41e78c in the makefile
* fix: Fix Solar and argsort/copy patches after bump
* fix: Remove Gemma3n CUDA Graphs patch
It was implemented upstream:
https://github.com/ggml-org/llama.cpp/pull/14741
* feat: Sync llama.cpp / ggml after latest bump
* build: Remove unnecessary CFLAGS definitions in cpu.go
* fix: Remove unnecessary additions in the rsync-filter
* fix: Remove unused vendored code for chat template parsing
* Revert "fix: Remove Gemma3n CUDA Graphs patch"
This reverts commit d724caced3.
* fix: Update 0020 CUDA Graphs for gemma3n to keep both llama.cpp and ollama fixes
https://github.com/ollama/ollama/pull/11195#issuecomment-3137312394
* fix: Sync ggml-cuda.cu after keeping both style cuda graph fixes for gemma3n
* unwind mxfp4 patch
Prepare to bump ggml with their impl for mxfp4
* bump
* fix windows build error
* Convert tensors at load time
Repack the mxfp4 tensors as ggml's kernels expect them to be.
* convert mlp bf16 to f32
* buffer the conversion better
* reshape earlier
* openai swiglu
* add ids
* split qkv, gate_up
* fix nested alt tags
* fast attention
* remove debug messages
* fix lint
* remove redundant test
* remap values only if source/target are different
* add back i32->i32 copy
* refactor cpu quants
* clean up vendor
* update patch instructions
* clean up patches
* remove webgpu
* update mem
* also handle gpt-oss
* revert convert changes
---------
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Co-authored-by: Gabe Goodhart <ghart@us.ibm.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
* get eos_token_id from generation_config.json
* refactor
* include both ids and strings in trace
* comments
* remove special case for gemma3 special vocab (#10743)
* Move quantization logic to GGML via new backend
This moves the model-aware logic to Go code and calls GGML's quantization code for model creation.
* Remove "add model quantizations"
This is no longer needed now that quantization is implemented in Go+GGML code directly.
Some options listed in api/types.go are not supported in
newer models, or have been deprecated in the past. This is
the first of a series of PRs to clean up the API options.
Clear KV cache when shift operation is not supported by model.
Added KvCacheCanShift() check to handle models that can't perform cache shifts,
falling back to full cache clear while preserving logical token history to
maintain expected behavior when context window fills up.
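A sketch of the fallback, using an illustrative cache interface in place of the real llama bindings (KvCacheCanShift is named in the description above; the rest is hypothetical):
```go
package llm

// kvCache abstracts the handful of cache calls involved; names are
// illustrative, not the actual binding API.
type kvCache interface {
	CanShift() bool
	SeqRm(seq, n int)               // drop the oldest n tokens
	SeqShift(seq, start, delta int) // slide remaining positions back
	Clear()
}

// makeRoom frees space for new tokens. It returns true when the caller must
// re-process the preserved logical token history after a full clear.
func makeRoom(c kvCache, discard int) (reprocess bool) {
	if c.CanShift() {
		c.SeqRm(0, discard)
		c.SeqShift(0, discard, -discard)
		return false
	}
	c.Clear()
	return true
}
```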
- output backend system info when initializing the backend. this ensures
this information is always present without needing to be called
explicitly
- convert to structured logging
- enumerate devices rather than backends since devices are ordered
- track device indices grouped by device name
Shield the code processing the embedding result
from subsequent calls that may overwrite the same
buffer to process a second input when retrieving
model embeddings.
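The shape of the fix is a defensive copy, sketched here with an illustrative name:
```go
package llm

// copyEmbedding snapshots the embedding out of the backend's shared output
// buffer before a second input can overwrite it in place.
func copyEmbedding(shared []float32) []float32 {
	out := make([]float32, len(shared))
	copy(out, shared)
	return out
}
```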
* add build to .dockerignore
* test: only build one arch
* add build to .gitignore
* fix ccache path
* filter amdgpu targets
* only filter if autodetecting
* Don't clobber gpu list for default runner
This ensures the GPU specific environment variables are set properly
* explicitly set CXX compiler for HIP
* Update build_windows.ps1
This isn't complete, but is close. Dependencies are missing, and it only builds the "default" preset.
* build: add ollama subdir
* add .git to .dockerignore
* docs: update development.md
* update build_darwin.sh
* remove unused scripts
* llm: add cwd and build/lib/ollama to library paths
* default DYLD_LIBRARY_PATH to LD_LIBRARY_PATH in runner on macOS
* add additional cmake output vars for msvc
* interim edits to make server detection logic work with dll directories like lib/ollama/cuda_v12
* remove unnecessary filepath.Dir, cleanup
* add hardware-specific directory to path
* use absolute server path
* build: linux arm
* cmake install targets
* remove unused files
* ml: visit each library path once
* build: skip cpu variants on arm
* build: install cpu targets
* build: fix workflow
* shorter names
* fix rocblas install
* docs: clean up development.md
* consistent build dir removal in development.md
* silence -Wimplicit-function-declaration build warnings in ggml-cpu
* update readme
* update development readme
* llm: update library lookup logic now that there is one runner (#8587)
* tweak development.md
* update docs
* add windows cuda/rocm tests
---------
Co-authored-by: jmorganca <jmorganca@gmail.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
Previously we decoded and re-encoded JSON schemas during validation,
which served no purpose since json.RawMessage already validates JSON
syntax. Worse, the re-encoding lost field ordering from the original
schema, which affects inference quality during step-by-step reasoning.
While fixing this ordering issue by using json.RawMessage directly,
testing revealed that schema_to_grammar (from llama.cpp) also fails to
preserve field order during grammar generation. This appears to be the
root cause of inference degradation.
This change prevents us from mangling the user's original schema order,
but we still need to address the ordering issue in schema_to_grammar.
That will be a separate change.
Updates #7978
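A sketch of the json.RawMessage approach; the ChatRequest field layout here is abbreviated:
```go
package api

import (
	"encoding/json"
	"errors"
)

// Keeping the schema as raw bytes preserves the user's field order;
// decoding into a map and re-encoding would scramble it.
type ChatRequest struct {
	Format json.RawMessage `json:"format,omitempty"`
}

func validateFormat(raw json.RawMessage) error {
	if len(raw) == 0 {
		return nil
	}
	if !json.Valid(raw) {
		return errors.New("format must be valid JSON")
	}
	return nil // pass raw through untouched
}
```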
The final implementation of #7499 removed dynamic vector requirements
in favor of a simpler filename-based model, and this was leftover logic that
is no longer needed.
* llama: wire up builtin runner
This adds a new entrypoint into the ollama CLI to run the cgo built runner.
On Mac arm64, this will have GPU support, but on all other platforms it will
be the lowest common denominator CPU build. After we fully transition
to the new Go runners more tech-debt can be removed and we can stop building
the "default" runner via make and rely on the builtin always.
* build: Make target improvements
Add a few new targets and help for building locally.
This also adjusts the runner lookup to favor local builds, then
runners relative to the executable, and finally payloads.
* Support customized CPU flags for runners
This implements a simplified custom CPU flags pattern for the runners.
When built without overrides, the runner name contains the vector flag
we check for (AVX) to ensure we don't try to run on unsupported systems
and crash. If the user builds a customized set, we omit the naming
scheme and don't check for compatibility. This avoids checking
requirements at runtime, so that logic has been removed as well. This
can be used to build GPU runners with no vector flags, or CPU/GPU
runners with additional flags (e.g. AVX512) enabled.
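A sketch of the naming-scheme check, using golang.org/x/sys/cpu for the CPUID bit; the runner-name convention shown is illustrative:
```go
package llm

import (
	"strings"

	"golang.org/x/sys/cpu"
)

// runnerUsable gates default builds whose name embeds the vector level
// (e.g. "cpu_avx"). Customized builds omit the marker and skip the check.
func runnerUsable(name string) bool {
	if !strings.Contains(name, "avx") {
		return true // custom flag set: no compatibility guard
	}
	return cpu.X86.HasAVX
}
```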
* Use relative paths
If the user checks out the repo in a path that contains spaces, make gets
really confused so use relative paths for everything in-repo to avoid breakage.
* Remove payloads from main binary
* install: clean up prior libraries
This removes support for v0.3.6 and older versions (before the tar bundle)
and ensures we clean up prior libraries before extracting the bundle(s).
Without this change, runners and dependent libraries could leak when we
update and lead to subtle runtime errors.
Fragmentation of the KV cache can occur due to cache shifting or
different sequences getting processed. Decode uses a heuristic to
decide if it should defrag. However, this heuristic isn't 100%
accurate, so decoding can sometimes fail by surprise.
For these cases, if decode indicates that there is no KV cache space,
we should defrag and then try again.
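Sketched with illustrative names, the retry looks like:
```go
package llm

import "errors"

// errKvCacheFull stands in for the "no KV slot" failure decode can report.
var errKvCacheFull = errors.New("could not find a kv cache slot")

type decoder interface {
	Decode(batch any) error
	KvCacheDefrag()
}

// decodeWithRetry defrags and retries once when decode runs out of cache
// space despite the heuristic defragmentation.
func decodeWithRetry(d decoder, batch any) error {
	err := d.Decode(batch)
	if errors.Is(err, errKvCacheFull) {
		d.KvCacheDefrag()
		err = d.Decode(batch)
	}
	return err
}
```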
Check for NULL return values from llama.cpp in more places and
convert them into Go errors, which should make debugging easier
in the future rather than having hidden surprises in our data
structures.
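The pattern at the cgo boundary, sketched; llama_load_model_from_file is a real llama.cpp entry point, but the wrapper shape is illustrative:
```go
package llama

/*
#include <stdlib.h>
#include "llama.h"
*/
import "C"

import (
	"fmt"
	"unsafe"
)

type Model struct{ c *C.struct_llama_model }

// LoadModel converts llama.cpp's NULL-on-failure convention into a Go error
// instead of letting a nil handle blow up later.
func LoadModel(path string) (*Model, error) {
	cPath := C.CString(path)
	defer C.free(unsafe.Pointer(cPath))

	m := C.llama_load_model_from_file(cPath, C.llama_model_default_params())
	if m == nil {
		return nil, fmt.Errorf("unable to load model: %s", path)
	}
	return &Model{c: m}, nil
}
```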
Mllama has large embeddings (100 MB per image) and each embedding is
represented as 1 token when passed to llama.cpp. Batches are
pre-allocated for the size of the tokens times the batch size, so this
results in allocations of over 50 GB at the default batch size.
On some systems, these mallocs will fail.
Since an image is represented as a single token and mllama doesn't
support more than 1 image per request, we only need to allocate a
batch size of 1, which is much more reasonable. In addition, for
non-multimodal models, we don't need to allocate the embedding
batches at all.
Fixes #7464
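A sketch of the sizing rule described above (names are illustrative):
```go
package llm

// embedBatchSize returns how many embedding slots to pre-allocate.
// mllama sends one image per request and represents it as a single token,
// so a batch of 1 suffices; text-only models need no embedding batch at all.
func embedBatchSize(multimodal bool) int {
	if !multimodal {
		return 0
	}
	return 1
}
```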
- Update mllama to take the cross attention state as embeddings in
a batch, more similar to how Llava handles it. This improves
integration with the input cache.
- Pass locations in a prompt for embeddings using tags similar to Llava.
- Abstract interface to vision models so the main runner accesses Clip
and Mllama similarly
Co-authored-by: Michael Yang <mxyng@pm.me>
This will no longer error if built with regular gcc on windows. To help
triage issues that may come in related to different compilers, the runner now
reports the compiler used by cgo.
Llama.cpp sometimes returns NULL as a return value to report an
error. We should explicitly check for this and convert it to a Go
error rather than putting NULL in our data structures and waiting
for it to blow up later.