# Development

Install prerequisites:

- [Go](https://go.dev/doc/install)
- C/C++ compiler, e.g. Clang on macOS, [TDM-GCC](https://jmeubank.github.io/tdm-gcc/) (Windows amd64) or [llvm-mingw](https://github.com/mstorsjo/llvm-mingw) (Windows arm64), GCC/Clang on Linux.

Then build and run Ollama from the root directory of the repository:

```shell
go run . serve
```
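
With the server running, you can sanity-check it from a second terminal; a minimal sketch, assuming the default bind address (`127.0.0.1:11434`):

```shell
# Query the version endpoint to confirm the server is up
curl http://localhost:11434/api/version
```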

## macOS (Apple Silicon)

Apple Silicon Macs use Metal, support for which is built into the Ollama binary. No additional steps are required.

## macOS (Intel)

Install prerequisites:

- [CMake](https://cmake.org/download/) or `brew install cmake`

Then, configure and build the project:

```shell
cmake -B build
cmake --build build
```
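
The build can be parallelized with CMake's `--parallel` flag; a minimal sketch using the machine's core count:

```shell
# Build with one job per CPU core
cmake --build build --parallel "$(sysctl -n hw.ncpu)"
```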

Lastly, run Ollama:

```shell
go run . serve
```

## Windows

Install prerequisites:

- [CMake](https://cmake.org/download/)
- [Visual Studio 2022](https://visualstudio.microsoft.com/downloads/) including the Native Desktop Workload
- (Optional) AMD GPU support - [ROCm](https://rocm.docs.amd.com/en/latest/)
- (Optional) NVIDIA GPU support - [CUDA SDK](https://developer.nvidia.com/cuda-downloads)

> [!IMPORTANT]
> Ensure prerequisites are in `PATH` before running CMake.

> [!IMPORTANT]
> ROCm is not compatible with Visual Studio CMake generators. Use `-GNinja` when configuring the project.

> [!IMPORTANT]
> CUDA is only compatible with Visual Studio CMake generators.

Then, configure and build the project:

```shell
cmake -B build
cmake --build build --config Release
```
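
For a ROCm build, per the notes above, select the Ninja generator at configure time instead; a minimal sketch (Ninja is a single-config generator, so the build type is set during configuration):

```shell
# Configure with Ninja for ROCm, then build
cmake -B build -G Ninja -DCMAKE_BUILD_TYPE=Release
cmake --build build
```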

Lastly, run Ollama:

```shell
go run . serve
```

## Windows (ARM)

Windows ARM does not support additional acceleration libraries at this time.

## Linux

Install prerequisites:

- [CMake](https://cmake.org/download/) (e.g. `sudo apt install cmake` or `sudo dnf install cmake`)
- (Optional) AMD GPU support - [ROCm](https://rocm.docs.amd.com/en/latest/)
- (Optional) NVIDIA GPU support - [CUDA SDK](https://developer.nvidia.com/cuda-downloads)

> [!IMPORTANT]
> Ensure prerequisites are in `PATH` before running CMake.

Then, configure and build the project:

```shell
cmake -B build
cmake --build build
```
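
To target a specific GPU backend, you can configure with one of the repository's CMake presets instead of the default; a sketch, assuming a `CUDA` preset is defined (the name is illustrative; check `CMakePresets.json` in the repository for the actual preset names):

```shell
# Configure and build via a named preset (name illustrative)
cmake --preset CUDA
cmake --build --preset CUDA
```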

Lastly, run Ollama:

```shell
go run . serve
```

## Docker

```shell
docker build .
```
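
To run the resulting image, give it a tag and publish the API port; a minimal sketch (the tag is illustrative; 11434 is Ollama's default port):

```shell
# Build with a tag, then run it with the API port published
docker build -t ollama-dev .
docker run --rm -p 11434:11434 ollama-dev
```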

### ROCm

```shell
docker build --build-arg FLAVOR=rocm .
```
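
A ROCm container additionally needs the AMD GPU device nodes passed through; a sketch, assuming the standard Linux device paths:

```shell
# Expose AMD GPU devices (/dev/kfd and /dev/dri) to the container
docker build --build-arg FLAVOR=rocm -t ollama-rocm .
docker run --rm --device /dev/kfd --device /dev/dri -p 11434:11434 ollama-rocm
```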

## Running tests

To run tests, use `go test`:

```shell
go test ./...
```
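
To iterate on a single package or test, narrow the target and filter by name; a minimal sketch (the package path and test pattern are illustrative):

```shell
# Run only one package's tests, verbose, filtered by a name pattern
go test ./server -run TestChat -v
```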