docs: add docs for docs.ollama.com (#12805)
63
docs/api/authentication.mdx
Normal file
@@ -0,0 +1,63 @@
|
||||
---
|
||||
title: Authentication
|
||||
---
|
||||
|
||||
No authentication is required when accessing Ollama's API locally via `http://localhost:11434`.
|
||||
|
||||
Authentication is required for the following:
|
||||
|
||||
* Running cloud models via ollama.com
|
||||
* Publishing models
|
||||
* Downloading private models
|
||||
|
||||
Ollama supports two authentication methods:
|
||||
|
||||
* **Signing in**: sign in from your local installation, and Ollama automatically authenticates requests to ollama.com when you run commands
|
||||
* **API keys**: create an API key for programmatic access to ollama.com's API
|
||||
|
||||
## Signing in
|
||||
|
||||
To sign in to ollama.com from your local installation of Ollama, run:
|
||||
|
||||
```
|
||||
ollama signin
|
||||
```
|
||||
|
||||
Once signed in, Ollama will automatically authenticate commands as required:
|
||||
|
||||
```
|
||||
ollama run gpt-oss:120b-cloud
|
||||
```
|
||||
|
||||
Similarly, when accessing a local API endpoint that requires cloud access, Ollama will automatically authenticate the request:
|
||||
|
||||
```shell
|
||||
curl http://localhost:11434/api/generate -d '{
|
||||
"model": "gpt-oss:120b-cloud",
|
||||
"prompt": "Why is the sky blue?"
|
||||
}'
|
||||
```
|
||||
|
||||
## API keys
|
||||
|
||||
For direct access to ollama.com's API served at `https://ollama.com/api`, authentication via API keys is required.
|
||||
|
||||
First, create an [API key](https://ollama.com/settings/keys), then set the `OLLAMA_API_KEY` environment variable:
|
||||
|
||||
```shell
|
||||
export OLLAMA_API_KEY=your_api_key
|
||||
```
|
||||
|
||||
Then use the API key in the Authorization header:
|
||||
|
||||
```shell
|
||||
curl https://ollama.com/api/generate \
|
||||
-H "Authorization: Bearer $OLLAMA_API_KEY" \
|
||||
-d '{
|
||||
"model": "gpt-oss:120b",
|
||||
"prompt": "Why is the sky blue?",
|
||||
"stream": false
|
||||
}'
|
||||
```
|
||||
|
||||
API keys don't currently expire; however, you can revoke them at any time in your [API keys settings](https://ollama.com/settings/keys).
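
The same key can also be used from client libraries by pointing them at `https://ollama.com` and forwarding the header. A minimal sketch with the Python client, assuming the `ollama` package is installed, `OLLAMA_API_KEY` is set as above, and that the client forwards custom headers to its underlying HTTP client:

```python
import os
from ollama import Client

# hypothetical setup: pass the API key as a Bearer token on every request
client = Client(
    host='https://ollama.com',
    headers={'Authorization': 'Bearer ' + os.environ['OLLAMA_API_KEY']},
)

response = client.generate(model='gpt-oss:120b', prompt='Why is the sky blue?')
print(response.response)
```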
|
||||
36
docs/api/errors.mdx
Normal file
@@ -0,0 +1,36 @@
|
||||
---
|
||||
title: Errors
|
||||
---
|
||||
|
||||
## Status codes
|
||||
|
||||
Endpoints indicate success or failure with an HTTP status code in the status line (e.g. `HTTP/1.1 200 OK` or `HTTP/1.1 400 Bad Request`). Common status codes are:
|
||||
|
||||
- `200`: Success
|
||||
- `400`: Bad Request (missing parameters, invalid JSON, etc.)
|
||||
- `404`: Not Found (model doesn't exist, etc.)
|
||||
- `429`: Too Many Requests (e.g. when a rate limit is exceeded)
|
||||
- `500`: Internal Server Error
|
||||
- `502`: Bad Gateway (e.g. when a cloud model cannot be reached)
|
||||
|
||||
## Error messages
|
||||
|
||||
Errors are returned in the `application/json` format with the following structure, where the `error` property contains the error message:
|
||||
|
||||
```json
|
||||
{
|
||||
"error": "the model failed to generate a response"
|
||||
}
|
||||
```
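
Client libraries surface these fields directly. For example, a minimal sketch with the Python client, which raises a `ResponseError` carrying the status code and the message from the `error` property:

```python
from ollama import chat, ResponseError

try:
    chat(model='model-that-does-not-exist', messages=[{'role': 'user', 'content': 'Hello'}])
except ResponseError as e:
    print(e.status_code)  # e.g. 404 for a model that doesn't exist
    print(e.error)        # the message from the `error` property
```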
|
||||
|
||||
## Errors that occur while streaming
|
||||
|
||||
If an error occurs mid-stream, the error will be returned as an object in the `application/x-ndjson` format with an `error` property. Since the response has already started, the status code of the response will not be changed.
|
||||
|
||||
```json
|
||||
{"model":"gemma3","created_at":"2025-10-26T17:21:21.196249Z","response":" Yes","done":false}
|
||||
{"model":"gemma3","created_at":"2025-10-26T17:21:21.207235Z","response":".","done":false}
|
||||
{"model":"gemma3","created_at":"2025-10-26T17:21:21.219166Z","response":"I","done":false}
|
||||
{"model":"gemma3","created_at":"2025-10-26T17:21:21.231094Z","response":"can","done":false}
|
||||
{"error":"an error was encountered while running the model"}
|
||||
```
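
When consuming the stream directly, check each decoded line for an `error` property in addition to checking the initial status code. A minimal sketch using the third-party `requests` package, shown here only for illustration:

```python
import json
import requests

with requests.post(
    'http://localhost:11434/api/generate',
    json={'model': 'gemma3', 'prompt': 'Why is the sky blue?'},
    stream=True,
) as r:
    r.raise_for_status()  # surfaces non-2xx status codes from the table above
    for line in r.iter_lines():
        if not line:
            continue
        chunk = json.loads(line)
        if 'error' in chunk:
            raise RuntimeError(chunk['error'])  # mid-stream error object
        print(chunk.get('response', ''), end='', flush=True)
```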
|
||||
1872
docs/api/index.mdx
@@ -1,9 +1,8 @@
|
||||
# OpenAI compatibility
|
||||
---
|
||||
title: OpenAI compatibility
|
||||
---
|
||||
|
||||
> [!NOTE]
|
||||
> OpenAI compatibility is experimental and is subject to major adjustments including breaking changes. For fully-featured access to the Ollama API, see the Ollama [Python library](https://github.com/ollama/ollama-python), [JavaScript library](https://github.com/ollama/ollama-js) and [REST API](https://github.com/ollama/ollama/blob/main/docs/api.md).
|
||||
|
||||
Ollama provides experimental compatibility with parts of the [OpenAI API](https://platform.openai.com/docs/api-reference) to help connect existing applications to Ollama.
|
||||
Ollama provides compatibility with parts of the [OpenAI API](https://platform.openai.com/docs/api-reference) to help connect existing applications to Ollama.
|
||||
|
||||
## Usage
|
||||
|
||||
@@ -100,19 +99,19 @@ except Exception as e:
|
||||
### OpenAI JavaScript library
|
||||
|
||||
```javascript
|
||||
import OpenAI from 'openai'
|
||||
import OpenAI from "openai";
|
||||
|
||||
const openai = new OpenAI({
|
||||
baseURL: 'http://localhost:11434/v1/',
|
||||
baseURL: "http://localhost:11434/v1/",
|
||||
|
||||
// required but ignored
|
||||
apiKey: 'ollama',
|
||||
})
|
||||
apiKey: "ollama",
|
||||
});
|
||||
|
||||
const chatCompletion = await openai.chat.completions.create({
|
||||
messages: [{ role: 'user', content: 'Say this is a test' }],
|
||||
model: 'llama3.2',
|
||||
})
|
||||
messages: [{ role: "user", content: "Say this is a test" }],
|
||||
model: "llama3.2",
|
||||
});
|
||||
|
||||
const response = await openai.chat.completions.create({
|
||||
model: "llava",
|
||||
@@ -123,26 +122,27 @@ const response = await openai.chat.completions.create({
|
||||
{ type: "text", text: "What's in this image?" },
|
||||
{
|
||||
type: "image_url",
|
||||
image_url: "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAG0AAABmCAYAAADBPx+VAAAACXBIWXMAAAsTAAALEwEAmpwYAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAA3VSURBVHgB7Z27r0zdG8fX743i1bi1ikMoFMQloXRpKFFIqI7LH4BEQ+NWIkjQuSWCRIEoULk0gsK1kCBI0IhrQVT7tz/7zZo888yz1r7MnDl7z5xvsjkzs2fP3uu71nNfa7lkAsm7d++Sffv2JbNmzUqcc8m0adOSzZs3Z+/XES4ZckAWJEGWPiCxjsQNLWmQsWjRIpMseaxcuTKpG/7HP27I8P79e7dq1ars/yL4/v27S0ejqwv+cUOGEGGpKHR37tzJCEpHV9tnT58+dXXCJDdECBE2Ojrqjh071hpNECjx4cMHVycM1Uhbv359B2F79+51586daxN/+pyRkRFXKyRDAqxEp4yMlDDzXG1NPnnyJKkThoK0VFd1ELZu3TrzXKxKfW7dMBQ6bcuWLW2v0VlHjx41z717927ba22U9APcw7Nnz1oGEPeL3m3p2mTAYYnFmMOMXybPPXv2bNIPpFZr1NHn4HMw0KRBjg9NuRw95s8PEcz/6DZELQd/09C9QGq5RsmSRybqkwHGjh07OsJSsYYm3ijPpyHzoiacg35MLdDSIS/O1yM778jOTwYUkKNHWUzUWaOsylE00MyI0fcnOwIdjvtNdW/HZwNLGg+sR1kMepSNJXmIwxBZiG8tDTpEZzKg0GItNsosY8USkxDhD0Rinuiko2gfL/RbiD2LZAjU9zKQJj8RDR0vJBR1/Phx9+PHj9Z7REF4nTZkxzX4LCXHrV271qXkBAPGfP/atWvu/PnzHe4C97F48eIsRLZ9+3a3f/9+87dwP1JxaF7/3r17ba+5l4EcaVo0lj3SBq5kGTJSQmLWMjgYNei2GPT1MuMqGTDEFHzeQSP2wi/jGnkmPJ/nhccs44jvDAxpVcxnq0F6eT8h4ni/iIWpR5lPyA6ETkNXoSukvpJAD3AsXLiwpZs49+fPn5ke4j10TqYvegSfn0OnafC+Tv9ooA/JPkgQysqQNBzagXY55nO/oa1F7qvIPWkRL12WRpMWUvpVDYmxAPehxWSe8ZEXL20sadYIozfmNch4QJPAfeJgW3rNsnzphBKNJM2KKODo1rVOMRYik5ETy3ix4qWNI81qAAirizgMIc+yhTytx0JWZuNI03qsrgWlGtwjoS9XwgUhWGyhUaRZZQNNIEwCiXD16tXcAHUs79co0vSD8rrJCIW98pzvxpAWyyo3HYwqS0+H0BjStClcZJT5coMm6D2LOF8TolGJtK9fvyZpyiC5ePFi9nc/oJU4eiEP0jVoAnHa9wyJycITMP78+eMeP37sXrx44d6+fdt6f82aNdkx1pg9e3Zb5W+RSRE+n+VjksQWifvVaTKFhn5O8my63K8Qabdv33b379/PiAP//vuvW7BggZszZ072/+TJk91YgkafPn166zXB1rQHFvouAWHq9z3SEevSUerqCn2/dDCeta2jxYbr69evk4MHDyY7d+7MjhMnTiTPnz9Pfv/+nfQT2ggpO2dMF8cghuoM7Ygj5iWCqRlGFml0QC/ftGmTmzt3rmsaKDsgBSPh0/8yPeLLBihLkOKJc0jp8H8vUzcxIA1k6QJ/c78tWEyj5P3o4u9+jywNPdJi5rAH9x0KHcl4Hg570eQp3+vHXGyrmEeigzQsQsjavXt38ujRo44LQuDDhw+TW7duRS1HGgMxhNXHgflaNTOsHyKvHK5Ijo2jbFjJBQK9YwFd6RVMzfgRBmEfP37suBBm/p49e1qjEP2mwTViNRo0VJWH1deMXcNK08uUjVUu7s/zRaL+oLNxz1bpANco4npUgX4G2eFbpDFyQoQxojBCpEGSytmOH8qrH5Q9vuzD6ofQylkCUmh8DBAr+q8JCyVNtWQIidKQE9wNtLSQnS4jDSsxNHogzFuQBw4cyM61UKVsjfr3ooBkPSqqQHesUPWVtzi9/vQi1T+rJj7WiTz4Pt/l3LxUkr5P2VYZaZ4URpsE+st/dujQoaBBYokbrz/8TJNQYLSonrPS9kUaSkPeZyj1AWSj+d+VBoy1pIWVNed8P0Ll/ee5HdGRhrHhR5GGN0r4LGZBaj8oFDJitBTJzIZgFcmU0Y8ytWMZMzJOaXUSrUs5RxKnrxmbb5YXO9VGUhtpXldhEUogFr3IzIsvlpmdosVcGVGXFWp2oU9kLFL3dEkSz6NHEY1sjSRdIuDFWEhd8KxFqsRi1uM/nz9/zpxnwlESONdg6dKlbsaMGS4EHFHtjFIDHwKOo46l4TxSuxgDzi+rE2jg+BaFruOX4HXa0Nnf1lwAPufZeF8/r6zD97WK2qFnGjBxTw5qNGPxT+5T/r7/7RawFC3j4vTp09koCxkeHjqbHJqArmH5UrFKKksnxrK7FuRIs8STfBZv+luugXZ2pR/pP9Ois4z+TiMzUUkUjD0iEi1fzX8GmXyuxUBRcaUfykV0YZnlJGKQpOiGB76x5GeWkWWJc3mOrK6S7xdND+W5N6XyaRgtWJFe13GkaZnKOsYqGdOVVVbGupsyA/l7emTLHi7vwTdirNEt0qxnzAvBFcnQF16xh/TMpUuXHDowhlA9vQVraQhkudRdzOnK+04ZSP3DUhVSP61YsaLtd/ks7ZgtPcXqPqEafHkdqa84X6aCeL7YWlv6edGFHb+ZFICPlljHhg0bKuk0CSvVznWsotRu433alNdFrqG45ejoaPCaUkWERpLXjzFL2Rpllp7PJU2a/v7Ab8N05/9t27Z16KUqoFGsxnI9EosS2niSYg9SpU6B4JgTrvVW1flt1sT+0ADIJU2maXzcUTraGCRaL1Wp9rUMk16PMom8QhruxzvZIegJjFU7LLCePfS8uaQdPny4jTTL0dbee5mYokQsXTIWNY46kuMbnt8Kmec+LGWtOVIl9cT1rCB0V8WqkjAsRwta93TbwNYoGKsUSChN44lgBNCoHLHzquYKrU6qZ8lolCIN0Rh6cP0Q3U6I6IXILYOQI513hJaSKAorFpuHXJNfVlpRtmYBk1Su1obZr5dnKAO+L10Hrj3WZW+E3qh6IszE37F6EB+68mGpvKm4eb9bFrlzrok7fvr0Kfv727dvWRmdVTJHw0qiiCUSZ6wCK+7XL/AcsgNyL74DQQ730sv78Su7+t/A36MdY0sW5o40ahslXr58aZ5HtZB8GH64m9EmMZ7FpYw4T6QnrZfgenrhFxaSiSGXtPnz57e9TkNZLvTjeqhr734CNtrK41L40sUQckmj1lGKQ0rC37x544r8eNXRpnVE3ZZY7zXo8NomiO0ZUCj2uHz58rbXoZ6gc0uA+F6ZeKS/jhRDUq8MKrTho9fEkihMmhxtBI1DxKFY9XLpVcSkfoi8JGnToZO5sU5aiDQIW716ddt7ZLYtMQlhECdBGXZZMWldY5BHm5xgAroWj4C0hbYkSc/jBmggIrXJWlZM6
pSETsEPGqZOndr2uuuR5rF169a2HoHPdurUKZM4CO1WTPqaDaAd+GFGKdIQkxAn9RuEWcTRyN2KSUgiSgF5aWzPTeA/lN5rZubMmR2bE4SIC4nJoltgAV/dVefZm72AtctUCJU2CMJ327hxY9t7EHbkyJFseq+EJSY16RPo3Dkq1kkr7+q0bNmyDuLQcZBEPYmHVdOBiJyIlrRDq41YPWfXOxUysi5fvtyaj+2BpcnsUV/oSoEMOk2CQGlr4ckhBwaetBhjCwH0ZHtJROPJkyc7UjcYLDjmrH7ADTEBXFfOYmB0k9oYBOjJ8b4aOYSe7QkKcYhFlq3QYLQhSidNmtS2RATwy8YOM3EQJsUjKiaWZ+vZToUQgzhkHXudb/PW5YMHD9yZM2faPsMwoc7RciYJXbGuBqJ1UIGKKLv915jsvgtJxCZDubdXr165mzdvtr1Hz5LONA8jrUwKPqsmVesKa49S3Q4WxmRPUEYdTjgiUcfUwLx589ySJUva3oMkP6IYddq6HMS4o55xBJBUeRjzfa4Zdeg56QZ43LhxoyPo7Lf1kNt7oO8wWAbNwaYjIv5lhyS7kRf96dvm5Jah8vfvX3flyhX35cuX6HfzFHOToS1H4BenCaHvO8pr8iDuwoUL7tevX+b5ZdbBair0xkFIlFDlW4ZknEClsp/TzXyAKVOmmHWFVSbDNw1l1+4f90U6IY/q4V27dpnE9bJ+v87QEydjqx/UamVVPRG+mwkNTYN+9tjkwzEx+atCm/X9WvWtDtAb68Wy9LXa1UmvCDDIpPkyOQ5ZwSzJ4jMrvFcr0rSjOUh+GcT4LSg5ugkW1Io0/SCDQBojh0hPlaJdah+tkVYrnTZowP8iq1F1TgMBBauufyB33x1v+NWFYmT5KmppgHC+NkAgbmRkpD3yn9QIseXymoTQFGQmIOKTxiZIWpvAatenVqRVXf2nTrAWMsPnKrMZHz6bJq5jvce6QK8J1cQNgKxlJapMPdZSR64/UivS9NztpkVEdKcrs5alhhWP9NeqlfWopzhZScI6QxseegZRGeg5a8C3Re1Mfl1ScP36ddcUaMuv24iOJtz7sbUjTS4qBvKmstYJoUauiuD3k5qhyr7QdUHMeCgLa1Ear9NquemdXgmum4fvJ6w1lqsuDhNrg1qSpleJK7K3TF0Q2jSd94uSZ60kK1e3qyVpQK6PVWXp2/FC3mp6jBhKKOiY2h3gtUV64TWM6wDETRPLDfSakXmH3w8g9Jlug8ZtTt4kVF0kLUYYmCCtD/DrQ5YhMGbA9L3ucdjh0y8kOHW5gU/VEEmJTcL4Pz/f7mgoAbYkAAAAAElFTkSuQmCC",
|
||||
image_url:
|
||||
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAG0AAABmCAYAAADBPx+VAAAACXBIWXMAAAsTAAALEwEAmpwYAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAA3VSURBVHgB7Z27r0zdG8fX743i1bi1ikMoFMQloXRpKFFIqI7LH4BEQ+NWIkjQuSWCRIEoULk0gsK1kCBI0IhrQVT7tz/7zZo888yz1r7MnDl7z5xvsjkzs2fP3uu71nNfa7lkAsm7d++Sffv2JbNmzUqcc8m0adOSzZs3Z+/XES4ZckAWJEGWPiCxjsQNLWmQsWjRIpMseaxcuTKpG/7HP27I8P79e7dq1ars/yL4/v27S0ejqwv+cUOGEGGpKHR37tzJCEpHV9tnT58+dXXCJDdECBE2Ojrqjh071hpNECjx4cMHVycM1Uhbv359B2F79+51586daxN/+pyRkRFXKyRDAqxEp4yMlDDzXG1NPnnyJKkThoK0VFd1ELZu3TrzXKxKfW7dMBQ6bcuWLW2v0VlHjx41z717927ba22U9APcw7Nnz1oGEPeL3m3p2mTAYYnFmMOMXybPPXv2bNIPpFZr1NHn4HMw0KRBjg9NuRw95s8PEcz/6DZELQd/09C9QGq5RsmSRybqkwHGjh07OsJSsYYm3ijPpyHzoiacg35MLdDSIS/O1yM778jOTwYUkKNHWUzUWaOsylE00MyI0fcnOwIdjvtNdW/HZwNLGg+sR1kMepSNJXmIwxBZiG8tDTpEZzKg0GItNsosY8USkxDhD0Rinuiko2gfL/RbiD2LZAjU9zKQJj8RDR0vJBR1/Phx9+PHj9Z7REF4nTZkxzX4LCXHrV271qXkBAPGfP/atWvu/PnzHe4C97F48eIsRLZ9+3a3f/9+87dwP1JxaF7/3r17ba+5l4EcaVo0lj3SBq5kGTJSQmLWMjgYNei2GPT1MuMqGTDEFHzeQSP2wi/jGnkmPJ/nhccs44jvDAxpVcxnq0F6eT8h4ni/iIWpR5lPyA6ETkNXoSukvpJAD3AsXLiwpZs49+fPn5ke4j10TqYvegSfn0OnafC+Tv9ooA/JPkgQysqQNBzagXY55nO/oa1F7qvIPWkRL12WRpMWUvpVDYmxAPehxWSe8ZEXL20sadYIozfmNch4QJPAfeJgW3rNsnzphBKNJM2KKODo1rVOMRYik5ETy3ix4qWNI81qAAirizgMIc+yhTytx0JWZuNI03qsrgWlGtwjoS9XwgUhWGyhUaRZZQNNIEwCiXD16tXcAHUs79co0vSD8rrJCIW98pzvxpAWyyo3HYwqS0+H0BjStClcZJT5coMm6D2LOF8TolGJtK9fvyZpyiC5ePFi9nc/oJU4eiEP0jVoAnHa9wyJycITMP78+eMeP37sXrx44d6+fdt6f82aNdkx1pg9e3Zb5W+RSRE+n+VjksQWifvVaTKFhn5O8my63K8Qabdv33b379/PiAP//vuvW7BggZszZ072/+TJk91YgkafPn166zXB1rQHFvouAWHq9z3SEevSUerqCn2/dDCeta2jxYbr69evk4MHDyY7d+7MjhMnTiTPnz9Pfv/+nfQT2ggpO2dMF8cghuoM7Ygj5iWCqRlGFml0QC/ftGmTmzt3rmsaKDsgBSPh0/8yPeLLBihLkOKJc0jp8H8vUzcxIA1k6QJ/c78tWEyj5P3o4u9+jywNPdJi5rAH9x0KHcl4Hg570eQp3+vHXGyrmEeigzQsQsjavXt38ujRo44LQuDDhw+TW7duRS1HGgMxhNXHgflaNTOsHyKvHK5Ijo2jbFjJBQK9YwFd6RVMzfgRBmEfP37suBBm/p49e1qjEP2mwTViNRo0VJWH1deMXcNK08uUjVUu7s/zRaL+oLNxz1bpANco4npUgX4G2eFbpDFyQoQxojBCpEGSytmOH8qrH5Q9vuzD6ofQylkCUmh8DBAr+q8JCyVNtWQIidKQE9wNtLSQnS4jDSsxNHogzFuQBw4cyM61UKVsjfr3ooBkPSqqQHesUPWVtzi9/vQi1T+rJj7WiTz4Pt/l3LxUkr5P2VYZaZ4URpsE+st/dujQoaBBYokbrz/8TJNQYLSonrPS9kUaSkPeZyj1AWSj+d+VBoy1pIWVNed8P0Ll/ee5HdGRhrHhR5GGN0r4LGZBaj8oFDJitBTJzIZgFcmU0Y8ytWMZMzJOaXUSrUs5RxKnrxmbb5YXO9VGUhtpXldhEUogFr3IzIsvlpmdosVcGVGXFWp2oU9kLFL3dEkSz6NHEY1sjSRdIuDFWEhd8KxFqsRi1uM/nz9/zpxnwlESONdg6dKlbsaMGS4EHFHtjFIDHwKOo46l4TxSuxgDzi+rE2jg+BaFruOX4HXa0Nnf1lwAPufZeF8/r6zD97WK2qFnGjBxTw5qNGPxT+5T/r7/7RawFC3j4vTp09koCxkeHjqbHJqArmH5UrFKKksnxrK7FuRIs8STfBZv+luugXZ2pR/pP9Ois4z+TiMzUUkUjD0iEi1fzX8GmXyuxUBRcaUfykV0YZnlJGKQpOiGB76x5GeWkWWJc3mOrK6S7xdND+W5N6XyaRgtWJFe13GkaZnKOsYqGdOVVVbGupsyA/l7emTLHi7vwTdirNEt0qxnzAvBFcnQF16xh/TMpUuXHDowhlA9vQVraQhkudRdzOnK+04ZSP3DUhVSP61YsaLtd/ks7ZgtPcXqPqEafHkdqa84X6aCeL7YWlv6edGFHb+ZFICPlljHhg0bKuk0CSvVznWsotRu433alNdFrqG45ejoaPCaUkWERpLXjzFL2Rpllp7PJU2a/v7Ab8N05/9t27Z16KUqoFGsxnI9EosS2niSYg9SpU6B4JgTrvVW1flt1sT+0ADIJU2maXzcUTraGCRaL1Wp9rUMk16PMom8QhruxzvZIegJjFU7LLCePfS8uaQdPny4jTTL0dbee5mYokQsXTIWNY46kuMbnt8Kmec+LGWtOVIl9cT1rCB0V8WqkjAsRwta93TbwNYoGKsUSChN44lgBNCoHLHzquYKrU6qZ8lolCIN0Rh6cP0Q3U6I6IXILYOQI513hJaSKAorFpuHXJNfVlpRtmYBk1Su1obZr5dnKAO+L10Hrj3WZW+E3qh6IszE37F6EB+68mGpvKm4eb9bFrlzrok7fvr0Kfv727dvWRmdVTJHw0qiiCUSZ6wCK+7XL/AcsgNyL74DQQ730sv78Su7+t/A36MdY0sW5o40ahslXr58aZ5HtZB8GH64m9EmMZ7FpYw4T6QnrZfgenrhFxaSiSGXtPnz57e9TkNZLvTjeqhr734CNtrK41L40sUQckmj1lGKQ0rC37x544r8eNXRpnVE3ZZY7zXo8NomiO0ZUCj2uHz58rbXoZ6gc0uA+F6ZeKS/jhRDUq8MKrTho9fEkihMmhxtBI1DxKFY9XLpVcSkfoi8JGnToZO5sU5aiDQIW716ddt7ZLYtMQlhECdBGXZZMWldY5BHm5xgAroWj4C0hbYkSc/jBmggIrXJWlZM6pSETsEPGqZO
ndr2uuuR5rF169a2HoHPdurUKZM4CO1WTPqaDaAd+GFGKdIQkxAn9RuEWcTRyN2KSUgiSgF5aWzPTeA/lN5rZubMmR2bE4SIC4nJoltgAV/dVefZm72AtctUCJU2CMJ327hxY9t7EHbkyJFseq+EJSY16RPo3Dkq1kkr7+q0bNmyDuLQcZBEPYmHVdOBiJyIlrRDq41YPWfXOxUysi5fvtyaj+2BpcnsUV/oSoEMOk2CQGlr4ckhBwaetBhjCwH0ZHtJROPJkyc7UjcYLDjmrH7ADTEBXFfOYmB0k9oYBOjJ8b4aOYSe7QkKcYhFlq3QYLQhSidNmtS2RATwy8YOM3EQJsUjKiaWZ+vZToUQgzhkHXudb/PW5YMHD9yZM2faPsMwoc7RciYJXbGuBqJ1UIGKKLv915jsvgtJxCZDubdXr165mzdvtr1Hz5LONA8jrUwKPqsmVesKa49S3Q4WxmRPUEYdTjgiUcfUwLx589ySJUva3oMkP6IYddq6HMS4o55xBJBUeRjzfa4Zdeg56QZ43LhxoyPo7Lf1kNt7oO8wWAbNwaYjIv5lhyS7kRf96dvm5Jah8vfvX3flyhX35cuX6HfzFHOToS1H4BenCaHvO8pr8iDuwoUL7tevX+b5ZdbBair0xkFIlFDlW4ZknEClsp/TzXyAKVOmmHWFVSbDNw1l1+4f90U6IY/q4V27dpnE9bJ+v87QEydjqx/UamVVPRG+mwkNTYN+9tjkwzEx+atCm/X9WvWtDtAb68Wy9LXa1UmvCDDIpPkyOQ5ZwSzJ4jMrvFcr0rSjOUh+GcT4LSg5ugkW1Io0/SCDQBojh0hPlaJdah+tkVYrnTZowP8iq1F1TgMBBauufyB33x1v+NWFYmT5KmppgHC+NkAgbmRkpD3yn9QIseXymoTQFGQmIOKTxiZIWpvAatenVqRVXf2nTrAWMsPnKrMZHz6bJq5jvce6QK8J1cQNgKxlJapMPdZSR64/UivS9NztpkVEdKcrs5alhhWP9NeqlfWopzhZScI6QxseegZRGeg5a8C3Re1Mfl1ScP36ddcUaMuv24iOJtz7sbUjTS4qBvKmstYJoUauiuD3k5qhyr7QdUHMeCgLa1Ear9NquemdXgmum4fvJ6w1lqsuDhNrg1qSpleJK7K3TF0Q2jSd94uSZ60kK1e3qyVpQK6PVWXp2/FC3mp6jBhKKOiY2h3gtUV64TWM6wDETRPLDfSakXmH3w8g9Jlug8ZtTt4kVF0kLUYYmCCtD/DrQ5YhMGbA9L3ucdjh0y8kOHW5gU/VEEmJTcL4Pz/f7mgoAbYkAAAAAElFTkSuQmCC",
|
||||
},
|
||||
],
|
||||
},
|
||||
],
|
||||
})
|
||||
});
|
||||
|
||||
const completion = await openai.completions.create({
|
||||
model: "llama3.2",
|
||||
prompt: "Say this is a test.",
|
||||
})
|
||||
});
|
||||
|
||||
const listCompletion = await openai.models.list()
|
||||
const listCompletion = await openai.models.list();
|
||||
|
||||
const model = await openai.models.retrieve("llama3.2")
|
||||
const model = await openai.models.retrieve("llama3.2");
|
||||
|
||||
const embedding = await openai.embeddings.create({
|
||||
model: "all-minilm",
|
||||
input: ["why is the sky blue?", "why is the grass green?"],
|
||||
})
|
||||
});
|
||||
```
|
||||
|
||||
### `curl`
|
||||
@@ -306,8 +306,8 @@ curl http://localhost:11434/v1/embeddings \
|
||||
- [x] array of strings
|
||||
- [ ] array of tokens
|
||||
- [ ] array of token arrays
|
||||
- [ ] `encoding format`
|
||||
- [ ] `dimensions`
|
||||
- [x] `encoding format`
|
||||
- [x] `dimensions`
|
||||
- [ ] `user`
|
||||
|
||||
## Models
|
||||
|
||||
35
docs/api/streaming.mdx
Normal file
@@ -0,0 +1,35 @@
|
||||
---
|
||||
title: Streaming
|
||||
---
|
||||
|
||||
Certain API endpoints stream responses by default, such as `/api/generate`. These responses are provided in the newline-delimited JSON format (i.e. the `application/x-ndjson` content type). For example:
|
||||
|
||||
```json
|
||||
{"model":"gemma3","created_at":"2025-10-26T17:15:24.097767Z","response":"That","done":false}
|
||||
{"model":"gemma3","created_at":"2025-10-26T17:15:24.109172Z","response":"'","done":false}
|
||||
{"model":"gemma3","created_at":"2025-10-26T17:15:24.121485Z","response":"s","done":false}
|
||||
{"model":"gemma3","created_at":"2025-10-26T17:15:24.132802Z","response":" a","done":false}
|
||||
{"model":"gemma3","created_at":"2025-10-26T17:15:24.143931Z","response":" fantastic","done":false}
|
||||
{"model":"gemma3","created_at":"2025-10-26T17:15:24.155176Z","response":" question","done":false}
|
||||
{"model":"gemma3","created_at":"2025-10-26T17:15:24.166576Z","response":"!","done":true, "done_reason": "stop"}
|
||||
```
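
From the Python client, the same stream can be consumed as an iterator of chunks (a minimal sketch, assuming `gemma3` is available locally):

```python
from ollama import generate

stream = generate(model='gemma3', prompt='Why is the sky blue?', stream=True)
for chunk in stream:
    # each chunk corresponds to one NDJSON line in the example above
    print(chunk.response, end='', flush=True)
    if chunk.done:
        print()
```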
|
||||
|
||||
## Disabling streaming
|
||||
|
||||
Streaming can be disabled by providing `{"stream": false}` in the request body for any endpoint that supports streaming. Responses are then returned in the `application/json` format instead:
|
||||
|
||||
```json
|
||||
{"model":"gemma3","created_at":"2025-10-26T17:15:24.166576Z","response":"That's a fantastic question!","done":true}
|
||||
```
|
||||
|
||||
## When to use streaming vs non-streaming
|
||||
|
||||
**Streaming (default)**:
|
||||
- Real-time response generation
|
||||
- Lower perceived latency
|
||||
- Better for long generations
|
||||
|
||||
**Non-streaming**:
|
||||
- Simpler to process
|
||||
- Better for short responses or structured outputs
|
||||
- Easier to handle in some applications
|
||||
36
docs/api/usage.mdx
Normal file
@@ -0,0 +1,36 @@
|
||||
---
|
||||
title: Usage
|
||||
---
|
||||
|
||||
Ollama's API responses include metrics that can be used for measuring performance and model usage:
|
||||
|
||||
* `total_duration`: How long the response took to generate
|
||||
* `load_duration`: How long the model took to load
|
||||
* `prompt_eval_count`: How many input tokens were processed
|
||||
* `prompt_eval_duration`: How long it took to evaluate the prompt
|
||||
* `eval_count`: How many output tokens were generated
|
||||
* `eval_duration`: How long it took to generate the output tokens
|
||||
|
||||
All timing values are measured in nanoseconds.
|
||||
|
||||
## Example response
|
||||
|
||||
For endpoints that return usage metrics, the response body will include the usage fields. For example, a non-streaming call to `/api/generate` may return the following response:
|
||||
|
||||
```json
|
||||
{
|
||||
"model": "gemma3",
|
||||
"created_at": "2025-10-17T23:14:07.414671Z",
|
||||
"response": "Hello! How can I help you today?",
|
||||
"done": true,
|
||||
"done_reason": "stop",
|
||||
"total_duration": 174560334,
|
||||
"load_duration": 101397084,
|
||||
"prompt_eval_count": 11,
|
||||
"prompt_eval_duration": 13074791,
|
||||
"eval_count": 18,
|
||||
"eval_duration": 52479709
|
||||
}
|
||||
```
|
||||
|
||||
For endpoints that return **streaming responses**, usage fields are included as part of the final chunk, where `done` is `true`.
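
As a worked example, throughput can be derived from these usage fields; durations are reported in nanoseconds, so scale by 1e9 to get tokens per second (values taken from the example response above):

```python
usage = {
    'prompt_eval_count': 11,
    'prompt_eval_duration': 13074791,
    'eval_count': 18,
    'eval_duration': 52479709,
}

# durations are nanoseconds, so multiply by 1e9 to convert to tokens/second
prompt_tps = usage['prompt_eval_count'] / usage['prompt_eval_duration'] * 1e9
gen_tps = usage['eval_count'] / usage['eval_duration'] * 1e9
print(f'prompt: {prompt_tps:.0f} tok/s, generation: {gen_tps:.0f} tok/s')
```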
|
||||
71
docs/benchmark.mdx
Normal file
@@ -0,0 +1,71 @@
|
||||
---
|
||||
title: Benchmark
|
||||
---
|
||||
|
||||
Ollama includes Go benchmark tests that measure end-to-end performance of a running Ollama server. Run these tests to evaluate model inference performance on your hardware and to measure the impact of code changes.
|
||||
|
||||
## When to use
|
||||
|
||||
Run these benchmarks when:
|
||||
|
||||
- Making changes to the model inference engine
|
||||
- Modifying model loading/unloading logic
|
||||
- Changing prompt processing or token generation code
|
||||
- Implementing a new model architecture
|
||||
- Testing performance across different hardware setups
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Ollama server running locally with `ollama serve` on `127.0.0.1:11434`
|
||||
|
||||
## Usage and examples
|
||||
|
||||
<Note>
|
||||
All commands must be run from the root directory of the Ollama project.
|
||||
</Note>
|
||||
|
||||
Basic syntax:
|
||||
|
||||
```bash
|
||||
go test -bench=. ./benchmark/... -m $MODEL_NAME
|
||||
```
|
||||
|
||||
Required flags:
|
||||
|
||||
- `-bench=.`: Run all benchmarks
|
||||
- `-m`: Model name to benchmark
|
||||
|
||||
Optional flags:
|
||||
|
||||
- `-count N`: Number of times to run the benchmark (useful for statistical analysis)
|
||||
- `-timeout T`: Maximum time for the benchmark to run (e.g. "10m" for 10 minutes)
|
||||
|
||||
Common usage patterns:
|
||||
|
||||
Single benchmark run with a model specified:
|
||||
|
||||
```bash
|
||||
go test -bench=. ./benchmark/... -m llama3.3
|
||||
```
|
||||
|
||||
## Output metrics
|
||||
|
||||
The benchmark reports several key metrics:
|
||||
|
||||
- `gen_tok/s`: Generated tokens per second
|
||||
- `prompt_tok/s`: Prompt processing tokens per second
|
||||
- `ttft_ms`: Time to first token in milliseconds
|
||||
- `load_ms`: Model load time in milliseconds
|
||||
- `gen_tokens`: Total tokens generated
|
||||
- `prompt_tokens`: Total prompt tokens processed
|
||||
|
||||
Each benchmark runs two scenarios:
|
||||
|
||||
- Cold start: Model is loaded from disk for each test
|
||||
- Warm start: Model is pre-loaded in memory
|
||||
|
||||
Three prompt lengths are tested for each scenario:
|
||||
|
||||
- Short prompt (100 tokens)
|
||||
- Medium prompt (500 tokens)
|
||||
- Long prompt (1000 tokens)
|
||||
113
docs/capabilities/embeddings.mdx
Normal file
@@ -0,0 +1,113 @@
|
||||
---
|
||||
title: Embeddings
|
||||
description: Generate text embeddings for semantic search, retrieval, and RAG.
|
||||
---
|
||||
|
||||
Embeddings turn text into numeric vectors you can store in a vector database, search with cosine similarity, or use in RAG pipelines. The vector length depends on the model (typically 384–1024 dimensions).
|
||||
|
||||
## Recommended models
|
||||
|
||||
- [embeddinggemma](https://ollama.com/library/embeddinggemma)
|
||||
- [qwen3-embedding](https://ollama.com/library/qwen3-embedding)
|
||||
- [all-minilm](https://ollama.com/library/all-minilm)
|
||||
|
||||
## Generate embeddings
|
||||
|
||||
Use `/api/embed` with a single string.
|
||||
|
||||
<Tabs>
|
||||
<Tab title="cURL">
|
||||
```shell
|
||||
curl -X POST http://localhost:11434/api/embed \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"model": "embeddinggemma",
|
||||
"input": "The quick brown fox jumps over the lazy dog."
|
||||
}'
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="Python">
|
||||
```python
|
||||
import ollama
|
||||
|
||||
single = ollama.embed(
|
||||
model='embeddinggemma',
|
||||
input='The quick brown fox jumps over the lazy dog.'
|
||||
)
|
||||
print(len(single['embeddings'][0])) # vector length
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="JavaScript">
|
||||
```javascript
|
||||
import ollama from 'ollama'
|
||||
|
||||
const single = await ollama.embed({
|
||||
model: 'embeddinggemma',
|
||||
input: 'The quick brown fox jumps over the lazy dog.',
|
||||
})
|
||||
console.log(single.embeddings[0].length) // vector length
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
<Note>
|
||||
The `/api/embed` endpoint returns L2‑normalized (unit‑length) vectors.
|
||||
</Note>
|
||||
|
||||
## Generate a batch of embeddings
|
||||
|
||||
Pass an array of strings to `input`.
|
||||
|
||||
<Tabs>
|
||||
<Tab title="cURL">
|
||||
```shell
|
||||
curl -X POST http://localhost:11434/api/embed \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"model": "embeddinggemma",
|
||||
"input": [
|
||||
"First sentence",
|
||||
"Second sentence",
|
||||
"Third sentence"
|
||||
]
|
||||
}'
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="Python">
|
||||
```python
|
||||
import ollama
|
||||
|
||||
batch = ollama.embed(
|
||||
model='embeddinggemma',
|
||||
input=[
|
||||
'The quick brown fox jumps over the lazy dog.',
|
||||
'The five boxing wizards jump quickly.',
|
||||
'Jackdaws love my big sphinx of quartz.',
|
||||
]
|
||||
)
|
||||
print(len(batch['embeddings'])) # number of vectors
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="JavaScript">
|
||||
```javascript
|
||||
import ollama from 'ollama'
|
||||
|
||||
const batch = await ollama.embed({
|
||||
model: 'embeddinggemma',
|
||||
input: [
|
||||
'The quick brown fox jumps over the lazy dog.',
|
||||
'The five boxing wizards jump quickly.',
|
||||
'Jackdaws love my big sphinx of quartz.',
|
||||
],
|
||||
})
|
||||
console.log(batch.embeddings.length) // number of vectors
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
## Tips
|
||||
|
||||
- Use cosine similarity for most semantic search use cases (see the sketch after this list).
|
||||
- Use the same embedding model for both indexing and querying.
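
A minimal sketch of the cosine-similarity tip above. Since `/api/embed` returns unit-length vectors, a plain dot product gives the cosine similarity; `embeddinggemma` is used as in the earlier examples:

```python
import ollama

docs = [
    'The quick brown fox jumps over the lazy dog.',
    'Jackdaws love my big sphinx of quartz.',
]
doc_vectors = ollama.embed(model='embeddinggemma', input=docs)['embeddings']
query_vector = ollama.embed(model='embeddinggemma', input='a fast fox')['embeddings'][0]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# rank documents by cosine similarity to the query (vectors are already normalized)
ranked = sorted(zip(docs, doc_vectors), key=lambda pair: dot(query_vector, pair[1]), reverse=True)
for doc, vec in ranked:
    print(f'{dot(query_vector, vec):.3f}  {doc}')
```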
|
||||
|
||||
|
||||
99
docs/capabilities/streaming.mdx
Normal file
@@ -0,0 +1,99 @@
|
||||
---
|
||||
title: Streaming
|
||||
---
|
||||
|
||||
Streaming allows you to render text as it is produced by the model.
|
||||
|
||||
Streaming is enabled by default through the REST API, but disabled by default in the SDKs.
|
||||
|
||||
To enable streaming in the SDKs, set the `stream` parameter to `true` (`True` in Python).
|
||||
|
||||
## Key streaming concepts
|
||||
1. Chatting: Stream partial assistant messages. Each chunk includes the `content` so you can render messages as they arrive.
|
||||
1. Thinking: Thinking-capable models emit a `thinking` field alongside regular content in each chunk. Detect this field in streaming chunks to show or hide reasoning traces before the final answer arrives.
|
||||
1. Tool calling: Watch for streamed `tool_calls` in each chunk, execute the requested tool, and append tool outputs back into the conversation.
|
||||
|
||||
## Handling streamed chunks
|
||||
|
||||
|
||||
<Note> It is necessary to accumulate the partial fields in order to maintain the history of the conversation. This is particularly important for tool calling, where the thinking, the tool call from the model, and the executed tool result must be passed back to the model in the next request. </Note>
|
||||
|
||||
<Tabs>
|
||||
<Tab title="Python">
|
||||
|
||||
```python
|
||||
from ollama import chat
|
||||
|
||||
stream = chat(
|
||||
model='qwen3',
|
||||
messages=[{'role': 'user', 'content': 'What is 17 × 23?'}],
|
||||
stream=True,
|
||||
)
|
||||
|
||||
in_thinking = False
|
||||
content = ''
|
||||
thinking = ''
|
||||
for chunk in stream:
|
||||
if chunk.message.thinking:
|
||||
if not in_thinking:
|
||||
in_thinking = True
|
||||
print('Thinking:\n', end='', flush=True)
|
||||
print(chunk.message.thinking, end='', flush=True)
|
||||
# accumulate the partial thinking
|
||||
thinking += chunk.message.thinking
|
||||
elif chunk.message.content:
|
||||
if in_thinking:
|
||||
in_thinking = False
|
||||
print('\n\nAnswer:\n', end='', flush=True)
|
||||
print(chunk.message.content, end='', flush=True)
|
||||
# accumulate the partial content
|
||||
content += chunk.message.content
|
||||
|
||||
# append the accumulated fields to the messages for the next request
|
||||
new_messages = [{'role': 'assistant', 'thinking': thinking, 'content': content}]
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="JavaScript">
|
||||
|
||||
```javascript
|
||||
import ollama from 'ollama'
|
||||
|
||||
async function main() {
|
||||
const stream = await ollama.chat({
|
||||
model: 'qwen3',
|
||||
messages: [{ role: 'user', content: 'What is 17 × 23?' }],
|
||||
stream: true,
|
||||
})
|
||||
|
||||
let inThinking = false
|
||||
let content = ''
|
||||
let thinking = ''
|
||||
|
||||
for await (const chunk of stream) {
|
||||
if (chunk.message.thinking) {
|
||||
if (!inThinking) {
|
||||
inThinking = true
|
||||
process.stdout.write('Thinking:\n')
|
||||
}
|
||||
process.stdout.write(chunk.message.thinking)
|
||||
// accumulate the partial thinking
|
||||
thinking += chunk.message.thinking
|
||||
} else if (chunk.message.content) {
|
||||
if (inThinking) {
|
||||
inThinking = false
|
||||
process.stdout.write('\n\nAnswer:\n')
|
||||
}
|
||||
process.stdout.write(chunk.message.content)
|
||||
// accumulate the partial content
|
||||
content += chunk.message.content
|
||||
}
|
||||
}
|
||||
|
||||
// append the accumulated fields to the messages for the next request
|
||||
const newMessages = [{ role: 'assistant', thinking, content }]
|
||||
}
|
||||
|
||||
main().catch(console.error)
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
194
docs/capabilities/structured-outputs.mdx
Normal file
@@ -0,0 +1,194 @@
|
||||
---
|
||||
title: Structured Outputs
|
||||
---
|
||||
|
||||
Structured outputs let you enforce a JSON schema on model responses so you can reliably extract structured data, describe images, or keep every reply consistent.
|
||||
|
||||
## Generating structured JSON
|
||||
|
||||
<Tabs>
|
||||
<Tab title="cURL">
|
||||
```shell
|
||||
curl -X POST http://localhost:11434/api/chat -H "Content-Type: application/json" -d '{
|
||||
"model": "gpt-oss",
|
||||
"messages": [{"role": "user", "content": "Tell me about Canada in one line"}],
|
||||
"stream": false,
|
||||
"format": "json"
|
||||
}'
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="Python">
|
||||
```python
|
||||
from ollama import chat
|
||||
|
||||
response = chat(
|
||||
model='gpt-oss',
|
||||
messages=[{'role': 'user', 'content': 'Tell me about Canada.'}],
|
||||
format='json'
|
||||
)
|
||||
print(response.message.content)
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="JavaScript">
|
||||
```javascript
|
||||
import ollama from 'ollama'
|
||||
|
||||
const response = await ollama.chat({
|
||||
model: 'gpt-oss',
|
||||
messages: [{ role: 'user', content: 'Tell me about Canada.' }],
|
||||
format: 'json'
|
||||
})
|
||||
console.log(response.message.content)
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
## Generating structured JSON with a schema
|
||||
|
||||
Provide a JSON schema to the `format` field.
|
||||
|
||||
<Note>
|
||||
For best results, also include the JSON schema as a string in the prompt to ground the model's response.
|
||||
</Note>
|
||||
|
||||
<Tabs>
|
||||
<Tab title="cURL">
|
||||
```shell
|
||||
curl -X POST http://localhost:11434/api/chat -H "Content-Type: application/json" -d '{
|
||||
"model": "gpt-oss",
|
||||
"messages": [{"role": "user", "content": "Tell me about Canada."}],
|
||||
"stream": false,
|
||||
"format": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"name": {"type": "string"},
|
||||
"capital": {"type": "string"},
|
||||
"languages": {
|
||||
"type": "array",
|
||||
"items": {"type": "string"}
|
||||
}
|
||||
},
|
||||
"required": ["name", "capital", "languages"]
|
||||
}
|
||||
}'
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="Python">
|
||||
Use Pydantic models and pass `model_json_schema()` to `format`, then validate the response:
|
||||
|
||||
```python
|
||||
from ollama import chat
|
||||
from pydantic import BaseModel
|
||||
|
||||
class Country(BaseModel):
|
||||
name: str
|
||||
capital: str
|
||||
languages: list[str]
|
||||
|
||||
response = chat(
|
||||
model='gpt-oss',
|
||||
messages=[{'role': 'user', 'content': 'Tell me about Canada.'}],
|
||||
format=Country.model_json_schema(),
|
||||
)
|
||||
|
||||
country = Country.model_validate_json(response.message.content)
|
||||
print(country)
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="JavaScript">
|
||||
Serialize a Zod schema with `zodToJsonSchema()` and parse the structured response:
|
||||
|
||||
```javascript
|
||||
import ollama from 'ollama'
|
||||
import { z } from 'zod'
|
||||
import { zodToJsonSchema } from 'zod-to-json-schema'
|
||||
|
||||
const Country = z.object({
|
||||
name: z.string(),
|
||||
capital: z.string(),
|
||||
languages: z.array(z.string()),
|
||||
})
|
||||
|
||||
const response = await ollama.chat({
|
||||
model: 'gpt-oss',
|
||||
messages: [{ role: 'user', content: 'Tell me about Canada.' }],
|
||||
format: zodToJsonSchema(Country),
|
||||
})
|
||||
|
||||
const country = Country.parse(JSON.parse(response.message.content))
|
||||
console.log(country)
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
## Example: Extract structured data
|
||||
|
||||
Define the objects you want returned and let the model populate the fields:
|
||||
|
||||
```python
|
||||
from ollama import chat
|
||||
from pydantic import BaseModel
|
||||
|
||||
class Pet(BaseModel):
|
||||
name: str
|
||||
animal: str
|
||||
age: int
|
||||
color: str | None
|
||||
favorite_toy: str | None
|
||||
|
||||
class PetList(BaseModel):
|
||||
pets: list[Pet]
|
||||
|
||||
response = chat(
|
||||
model='gpt-oss',
|
||||
messages=[{'role': 'user', 'content': 'I have two cats named Luna and Loki...'}],
|
||||
format=PetList.model_json_schema(),
|
||||
)
|
||||
|
||||
pets = PetList.model_validate_json(response.message.content)
|
||||
print(pets)
|
||||
```
|
||||
|
||||
## Example: Vision with structured outputs
|
||||
|
||||
Vision models accept the same `format` parameter, enabling deterministic descriptions of images:
|
||||
|
||||
```python
|
||||
from ollama import chat
|
||||
from pydantic import BaseModel
|
||||
from typing import Literal, Optional
|
||||
|
||||
class Object(BaseModel):
|
||||
name: str
|
||||
confidence: float
|
||||
attributes: str
|
||||
|
||||
class ImageDescription(BaseModel):
|
||||
summary: str
|
||||
objects: list[Object]
|
||||
scene: str
|
||||
colors: list[str]
|
||||
time_of_day: Literal['Morning', 'Afternoon', 'Evening', 'Night']
|
||||
setting: Literal['Indoor', 'Outdoor', 'Unknown']
|
||||
text_content: Optional[str] = None
|
||||
|
||||
response = chat(
|
||||
model='gemma3',
|
||||
messages=[{
|
||||
'role': 'user',
|
||||
'content': 'Describe this photo and list the objects you detect.',
|
||||
'images': ['path/to/image.jpg'],
|
||||
}],
|
||||
format=ImageDescription.model_json_schema(),
|
||||
options={'temperature': 0},
|
||||
)
|
||||
|
||||
image_description = ImageDescription.model_validate_json(response.message.content)
|
||||
print(image_description)
|
||||
```
|
||||
|
||||
## Tips for reliable structured outputs
|
||||
|
||||
- Define schemas with Pydantic (Python) or Zod (JavaScript) so they can be reused for validation.
|
||||
- Lower the temperature (e.g., set it to `0`) for more deterministic completions.
|
||||
- Structured outputs also work through the OpenAI-compatible API via the `response_format` parameter, as shown in the sketch below.
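
A minimal sketch of the OpenAI-compatible route, using the `openai` Python package pointed at Ollama. The exact `response_format` payload follows OpenAI's JSON-schema convention and is an assumption here rather than something shown elsewhere in these docs:

```python
from openai import OpenAI

client = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')  # key is required but ignored

response = client.chat.completions.create(
    model='gpt-oss',
    messages=[{'role': 'user', 'content': 'Tell me about Canada.'}],
    response_format={
        'type': 'json_schema',
        'json_schema': {
            'name': 'country',
            'schema': {
                'type': 'object',
                'properties': {
                    'name': {'type': 'string'},
                    'capital': {'type': 'string'},
                    'languages': {'type': 'array', 'items': {'type': 'string'}},
                },
                'required': ['name', 'capital', 'languages'],
            },
        },
    },
)
print(response.choices[0].message.content)
```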
|
||||
153
docs/capabilities/thinking.mdx
Normal file
@@ -0,0 +1,153 @@
|
||||
---
|
||||
title: Thinking
|
||||
---
|
||||
|
||||
Thinking-capable models emit a `thinking` field that separates their reasoning trace from the final answer.
|
||||
|
||||
Use this capability to audit model steps, animate the model *thinking* in a UI, or hide the trace entirely when you only need the final response.
|
||||
|
||||
## Supported models
|
||||
|
||||
- [Qwen 3](https://ollama.com/library/qwen3)
|
||||
- [GPT-OSS](https://ollama.com/library/gpt-oss) *(use `think` levels: `low`, `medium`, `high` — the trace cannot be fully disabled)*
|
||||
- [DeepSeek-v3.1](https://ollama.com/library/deepseek-v3.1)
|
||||
- [DeepSeek R1](https://ollama.com/library/deepseek-r1)
|
||||
- Browse the latest additions under [thinking models](https://ollama.com/search?c=thinking)
|
||||
|
||||
## Enable thinking in API calls
|
||||
|
||||
Set the `think` field on chat or generate requests. Most models accept booleans (`true`/`false`).
|
||||
|
||||
GPT-OSS instead expects one of `low`, `medium`, or `high` to tune the trace length.
|
||||
|
||||
The `message.thinking` (chat endpoint) or `thinking` (generate endpoint) field contains the reasoning trace while `message.content` / `response` holds the final answer.
|
||||
|
||||
<Tabs>
|
||||
<Tab title="cURL">
|
||||
```shell
|
||||
curl http://localhost:11434/api/chat -d '{
|
||||
"model": "qwen3",
|
||||
"messages": [{
|
||||
"role": "user",
|
||||
"content": "How many letter r are in strawberry?"
|
||||
}],
|
||||
"think": true,
|
||||
"stream": false
|
||||
}'
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="Python">
|
||||
```python
|
||||
from ollama import chat
|
||||
|
||||
response = chat(
|
||||
model='qwen3',
|
||||
messages=[{'role': 'user', 'content': 'How many letter r are in strawberry?'}],
|
||||
think=True,
|
||||
stream=False,
|
||||
)
|
||||
|
||||
print('Thinking:\n', response.message.thinking)
|
||||
print('Answer:\n', response.message.content)
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="JavaScript">
|
||||
```javascript
|
||||
import ollama from 'ollama'
|
||||
|
||||
const response = await ollama.chat({
|
||||
model: 'deepseek-r1',
|
||||
messages: [{ role: 'user', content: 'How many letter r are in strawberry?' }],
|
||||
think: true,
|
||||
stream: false,
|
||||
})
|
||||
|
||||
console.log('Thinking:\n', response.message.thinking)
|
||||
console.log('Answer:\n', response.message.content)
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
<Note>
|
||||
GPT-OSS requires `think` to be set to `"low"`, `"medium"`, or `"high"`. Boolean values (`true`/`false`) are ignored for that model.
|
||||
</Note>
|
||||
|
||||
## Stream the reasoning trace
|
||||
|
||||
Thinking streams interleave reasoning tokens before answer tokens. Detect the first `thinking` chunk to render a "thinking" section, then switch to the final reply once `message.content` arrives.
|
||||
|
||||
<Tabs>
|
||||
<Tab title="Python">
|
||||
```python
|
||||
from ollama import chat
|
||||
|
||||
stream = chat(
|
||||
model='qwen3',
|
||||
messages=[{'role': 'user', 'content': 'What is 17 × 23?'}],
|
||||
think=True,
|
||||
stream=True,
|
||||
)
|
||||
|
||||
in_thinking = False
|
||||
|
||||
for chunk in stream:
|
||||
if chunk.message.thinking and not in_thinking:
|
||||
in_thinking = True
|
||||
print('Thinking:\n', end='')
|
||||
|
||||
if chunk.message.thinking:
|
||||
print(chunk.message.thinking, end='')
|
||||
elif chunk.message.content:
|
||||
if in_thinking:
|
||||
print('\n\nAnswer:\n', end='')
|
||||
in_thinking = False
|
||||
print(chunk.message.content, end='')
|
||||
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="JavaScript">
|
||||
```javascript
|
||||
import ollama from 'ollama'
|
||||
|
||||
async function main() {
|
||||
const stream = await ollama.chat({
|
||||
model: 'qwen3',
|
||||
messages: [{ role: 'user', content: 'What is 17 × 23?' }],
|
||||
think: true,
|
||||
stream: true,
|
||||
})
|
||||
|
||||
let inThinking = false
|
||||
|
||||
for await (const chunk of stream) {
|
||||
if (chunk.message.thinking && !inThinking) {
|
||||
inThinking = true
|
||||
process.stdout.write('Thinking:\n')
|
||||
}
|
||||
|
||||
if (chunk.message.thinking) {
|
||||
process.stdout.write(chunk.message.thinking)
|
||||
} else if (chunk.message.content) {
|
||||
if (inThinking) {
|
||||
process.stdout.write('\n\nAnswer:\n')
|
||||
inThinking = false
|
||||
}
|
||||
process.stdout.write(chunk.message.content)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
main()
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
## CLI quick reference
|
||||
|
||||
- Enable thinking for a single run: `ollama run deepseek-r1 --think "Where should I visit in Lisbon?"`
|
||||
- Disable thinking: `ollama run deepseek-r1 --think=false "Summarize this article"`
|
||||
- Hide the trace while still using a thinking model: `ollama run deepseek-r1 --hidethinking "Is 9.9 bigger or 9.11?"`
|
||||
- Inside interactive sessions, toggle with `/set think` or `/set nothink`.
|
||||
- GPT-OSS only accepts levels: `ollama run gpt-oss --think=low "Draft a headline"` (replace `low` with `medium` or `high` as needed).
|
||||
|
||||
<Note>Thinking is enabled by default in the CLI and API for supported models.</Note>
|
||||
777
docs/capabilities/tool-calling.mdx
Normal file
@@ -0,0 +1,777 @@
|
||||
---
|
||||
title: Tool calling
|
||||
---
|
||||
|
||||
Ollama supports tool calling (also known as function calling), which allows a model to invoke tools and incorporate their results into its replies.
|
||||
|
||||
## Calling a single tool
|
||||
Invoke a single tool and include its response in a follow-up request.
|
||||
|
||||
Also known as "single-shot" tool calling.
|
||||
|
||||
<Tabs>
|
||||
<Tab title="cURL">
|
||||
|
||||
```shell
|
||||
curl -s http://localhost:11434/api/chat -H "Content-Type: application/json" -d '{
|
||||
"model": "qwen3",
|
||||
"messages": [{"role": "user", "content": "What's the temperature in New York?"}],
|
||||
"stream": false,
|
||||
"tools": [
|
||||
{
|
||||
"type": "function",
|
||||
"function": {
|
||||
"name": "get_temperature",
|
||||
"description": "Get the current temperature for a city",
|
||||
"parameters": {
|
||||
"type": "object",
|
||||
"required": ["city"],
|
||||
"properties": {
|
||||
"city": {"type": "string", "description": "The name of the city"}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}'
|
||||
```
|
||||
|
||||
**Generate a response with a single tool result**
|
||||
```shell
|
||||
curl -s http://localhost:11434/api/chat -H "Content-Type: application/json" -d '{
|
||||
"model": "qwen3",
|
||||
"messages": [
|
||||
{"role": "user", "content": "What's the temperature in New York?"},
|
||||
{
|
||||
"role": "assistant",
|
||||
"tool_calls": [
|
||||
{
|
||||
"type": "function",
|
||||
"function": {
|
||||
"index": 0,
|
||||
"name": "get_temperature",
|
||||
"arguments": {"city": "New York"}
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
{"role": "tool", "tool_name": "get_temperature", "content": "22°C"}
|
||||
],
|
||||
"stream": false
|
||||
}'
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="Python">
|
||||
Install the Ollama Python SDK:
|
||||
```bash
|
||||
# with pip
|
||||
pip install ollama -U
|
||||
|
||||
# with uv
|
||||
uv add ollama
|
||||
```
|
||||
|
||||
```python
|
||||
from ollama import chat
|
||||
|
||||
def get_temperature(city: str) -> str:
|
||||
"""Get the current temperature for a city
|
||||
|
||||
Args:
|
||||
city: The name of the city
|
||||
|
||||
Returns:
|
||||
The current temperature for the city
|
||||
"""
|
||||
temperatures = {
|
||||
"New York": "22°C",
|
||||
"London": "15°C",
|
||||
"Tokyo": "18°C",
|
||||
}
|
||||
return temperatures.get(city, "Unknown")
|
||||
|
||||
messages = [{"role": "user", "content": "What's the temperature in New York?"}]
|
||||
|
||||
# pass functions directly as tools in the tools list or as a JSON schema
|
||||
response = chat(model="qwen3", messages=messages, tools=[get_temperature], think=True)
|
||||
|
||||
messages.append(response.message)
|
||||
if response.message.tool_calls:
|
||||
# only recommended for models that return a single tool call
|
||||
call = response.message.tool_calls[0]
|
||||
result = get_temperature(**call.function.arguments)
|
||||
# add the tool result to the messages
|
||||
messages.append({"role": "tool", "tool_name": call.function.name, "content": str(result)})
|
||||
|
||||
final_response = chat(model="qwen3", messages=messages, tools=[get_temperature], think=True)
|
||||
print(final_response.message.content)
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="JavaScript">
|
||||
Install the Ollama JavaScript library:
|
||||
```bash
|
||||
# with npm
|
||||
npm i ollama
|
||||
|
||||
# with bun
|
||||
bun i ollama
|
||||
```
|
||||
|
||||
```typescript
|
||||
import ollama from 'ollama'
|
||||
|
||||
function getTemperature(city: string): string {
|
||||
const temperatures: Record<string, string> = {
|
||||
'New York': '22°C',
|
||||
'London': '15°C',
|
||||
'Tokyo': '18°C',
|
||||
}
|
||||
return temperatures[city] ?? 'Unknown'
|
||||
}
|
||||
|
||||
const tools = [
|
||||
{
|
||||
type: 'function',
|
||||
function: {
|
||||
name: 'get_temperature',
|
||||
description: 'Get the current temperature for a city',
|
||||
parameters: {
|
||||
type: 'object',
|
||||
required: ['city'],
|
||||
properties: {
|
||||
city: { type: 'string', description: 'The name of the city' },
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
]
|
||||
|
||||
const messages = [{ role: 'user', content: "What's the temperature in New York?" }]
|
||||
|
||||
const response = await ollama.chat({
|
||||
model: 'qwen3',
|
||||
messages,
|
||||
tools,
|
||||
think: true,
|
||||
})
|
||||
|
||||
messages.push(response.message)
|
||||
if (response.message.tool_calls?.length) {
|
||||
// only recommended for models that return a single tool call
|
||||
const call = response.message.tool_calls[0]
|
||||
const args = call.function.arguments as { city: string }
|
||||
const result = getTemperature(args.city)
|
||||
// add the tool result to the messages
|
||||
messages.push({ role: 'tool', tool_name: call.function.name, content: result })
|
||||
|
||||
// generate the final response
|
||||
const finalResponse = await ollama.chat({ model: 'qwen3', messages, tools, think: true })
|
||||
console.log(finalResponse.message.content)
|
||||
}
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
## Parallel tool calling
|
||||
|
||||
<Tabs>
|
||||
<Tab title="cURL">
|
||||
Request multiple tool calls in parallel, then send all tool responses back to the model.
|
||||
|
||||
```shell
|
||||
curl -s http://localhost:11434/api/chat -H "Content-Type: application/json" -d '{
|
||||
"model": "qwen3",
|
||||
"messages": [{"role": "user", "content": "What are the current weather conditions and temperature in New York and London?"}],
|
||||
"stream": false,
|
||||
"tools": [
|
||||
{
|
||||
"type": "function",
|
||||
"function": {
|
||||
"name": "get_temperature",
|
||||
"description": "Get the current temperature for a city",
|
||||
"parameters": {
|
||||
"type": "object",
|
||||
"required": ["city"],
|
||||
"properties": {
|
||||
"city": {"type": "string", "description": "The name of the city"}
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"type": "function",
|
||||
"function": {
|
||||
"name": "get_conditions",
|
||||
"description": "Get the current weather conditions for a city",
|
||||
"parameters": {
|
||||
"type": "object",
|
||||
"required": ["city"],
|
||||
"properties": {
|
||||
"city": {"type": "string", "description": "The name of the city"}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}'
|
||||
```
|
||||
|
||||
**Generate a response with multiple tool results**
|
||||
```shell
|
||||
curl -s http://localhost:11434/api/chat -H "Content-Type: application/json" -d '{
|
||||
"model": "qwen3",
|
||||
"messages": [
|
||||
{"role": "user", "content": "What are the current weather conditions and temperature in New York and London?"},
|
||||
{
|
||||
"role": "assistant",
|
||||
"tool_calls": [
|
||||
{
|
||||
"type": "function",
|
||||
"function": {
|
||||
"index": 0,
|
||||
"name": "get_temperature",
|
||||
"arguments": {"city": "New York"}
|
||||
}
|
||||
},
|
||||
{
|
||||
"type": "function",
|
||||
"function": {
|
||||
"index": 1,
|
||||
"name": "get_conditions",
|
||||
"arguments": {"city": "New York"}
|
||||
}
|
||||
},
|
||||
{
|
||||
"type": "function",
|
||||
"function": {
|
||||
"index": 2,
|
||||
"name": "get_temperature",
|
||||
"arguments": {"city": "London"}
|
||||
}
|
||||
},
|
||||
{
|
||||
"type": "function",
|
||||
"function": {
|
||||
"index": 3,
|
||||
"name": "get_conditions",
|
||||
"arguments": {"city": "London"}
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
{"role": "tool", "tool_name": "get_temperature", "content": "22°C"},
|
||||
{"role": "tool", "tool_name": "get_conditions", "content": "Partly cloudy"},
|
||||
{"role": "tool", "tool_name": "get_temperature", "content": "15°C"},
|
||||
{"role": "tool", "tool_name": "get_conditions", "content": "Rainy"}
|
||||
],
|
||||
"stream": false
|
||||
}'
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="Python">
|
||||
```python
|
||||
from ollama import chat
|
||||
|
||||
def get_temperature(city: str) -> str:
|
||||
"""Get the current temperature for a city
|
||||
|
||||
Args:
|
||||
city: The name of the city
|
||||
|
||||
Returns:
|
||||
The current temperature for the city
|
||||
"""
|
||||
temperatures = {
|
||||
"New York": "22°C",
|
||||
"London": "15°C",
|
||||
"Tokyo": "18°C"
|
||||
}
|
||||
return temperatures.get(city, "Unknown")
|
||||
|
||||
def get_conditions(city: str) -> str:
|
||||
"""Get the current weather conditions for a city
|
||||
|
||||
Args:
|
||||
city: The name of the city
|
||||
|
||||
Returns:
|
||||
The current weather conditions for the city
|
||||
"""
|
||||
conditions = {
|
||||
"New York": "Partly cloudy",
|
||||
"London": "Rainy",
|
||||
"Tokyo": "Sunny"
|
||||
}
|
||||
return conditions.get(city, "Unknown")
|
||||
|
||||
|
||||
messages = [{'role': 'user', 'content': 'What are the current weather conditions and temperature in New York and London?'}]
|
||||
|
||||
# The Python client automatically parses functions into a tool schema, so we can pass them directly
|
||||
# Schemas can be passed directly in the tools list as well
|
||||
response = chat(model='qwen3', messages=messages, tools=[get_temperature, get_conditions], think=True)
|
||||
|
||||
# add the assistant message to the messages
|
||||
messages.append(response.message)
|
||||
if response.message.tool_calls:
|
||||
# process each tool call
|
||||
for call in response.message.tool_calls:
|
||||
# execute the appropriate tool
|
||||
if call.function.name == 'get_temperature':
|
||||
result = get_temperature(**call.function.arguments)
|
||||
elif call.function.name == 'get_conditions':
|
||||
result = get_conditions(**call.function.arguments)
|
||||
else:
|
||||
result = 'Unknown tool'
|
||||
# add the tool result to the messages
|
||||
messages.append({'role': 'tool', 'tool_name': call.function.name, 'content': str(result)})
|
||||
|
||||
# generate the final response
|
||||
final_response = chat(model='qwen3', messages=messages, tools=[get_temperature, get_conditions], think=True)
|
||||
print(final_response.message.content)
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="JavaScript">
|
||||
```typescript
|
||||
import ollama from 'ollama'
|
||||
|
||||
function getTemperature(city: string): string {
|
||||
const temperatures: { [key: string]: string } = {
|
||||
"New York": "22°C",
|
||||
"London": "15°C",
|
||||
"Tokyo": "18°C"
|
||||
}
|
||||
return temperatures[city] || "Unknown"
|
||||
}
|
||||
|
||||
function getConditions(city: string): string {
|
||||
const conditions: { [key: string]: string } = {
|
||||
"New York": "Partly cloudy",
|
||||
"London": "Rainy",
|
||||
"Tokyo": "Sunny"
|
||||
}
|
||||
return conditions[city] || "Unknown"
|
||||
}
|
||||
|
||||
const tools = [
|
||||
{
|
||||
type: 'function',
|
||||
function: {
|
||||
name: 'get_temperature',
|
||||
description: 'Get the current temperature for a city',
|
||||
parameters: {
|
||||
type: 'object',
|
||||
required: ['city'],
|
||||
properties: {
|
||||
city: { type: 'string', description: 'The name of the city' },
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
type: 'function',
|
||||
function: {
|
||||
name: 'get_conditions',
|
||||
description: 'Get the current weather conditions for a city',
|
||||
parameters: {
|
||||
type: 'object',
|
||||
required: ['city'],
|
||||
properties: {
|
||||
city: { type: 'string', description: 'The name of the city' },
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
]
|
||||
|
||||
const messages = [{ role: 'user', content: 'What are the current weather conditions and temperature in New York and London?' }]
|
||||
|
||||
const response = await ollama.chat({
|
||||
model: 'qwen3',
|
||||
messages,
|
||||
tools,
|
||||
think: true
|
||||
})
|
||||
|
||||
// add the assistant message to the messages
|
||||
messages.push(response.message)
|
||||
if (response.message.tool_calls) {
|
||||
// process each tool call
|
||||
for (const call of response.message.tool_calls) {
|
||||
// execute the appropriate tool
|
||||
let result: string
|
||||
if (call.function.name === 'get_temperature') {
|
||||
const args = call.function.arguments as { city: string }
|
||||
result = getTemperature(args.city)
|
||||
} else if (call.function.name === 'get_conditions') {
|
||||
const args = call.function.arguments as { city: string }
|
||||
result = getConditions(args.city)
|
||||
} else {
|
||||
result = 'Unknown tool'
|
||||
}
|
||||
// add the tool result to the messages
|
||||
messages.push({ role: 'tool', tool_name: call.function.name, content: result })
|
||||
}
|
||||
|
||||
// generate the final response
|
||||
const finalResponse = await ollama.chat({ model: 'qwen3', messages, tools, think: true })
|
||||
console.log(finalResponse.message.content)
|
||||
}
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
|
||||
## Multi-turn tool calling (Agent loop)
|
||||
|
||||
An agent loop allows the model to decide when to invoke tools and incorporate their results into its replies.
|
||||
|
||||
It can also help to tell the model that it is running in a loop and may make multiple tool calls.
|
||||
|
||||
<Tabs>
|
||||
<Tab title="Python">
|
||||
```python
|
||||
from ollama import chat, ChatResponse
|
||||
|
||||
|
||||
def add(a: int, b: int) -> int:
|
||||
"""Add two numbers"""
|
||||
"""
|
||||
Args:
|
||||
a: The first number
|
||||
b: The second number
|
||||
|
||||
Returns:
|
||||
The sum of the two numbers
|
||||
"""
|
||||
return a + b
|
||||
|
||||
|
||||
def multiply(a: int, b: int) -> int:
|
||||
"""Multiply two numbers"""
|
||||
"""
|
||||
Args:
|
||||
a: The first number
|
||||
b: The second number
|
||||
|
||||
Returns:
|
||||
The product of the two numbers
|
||||
"""
|
||||
return a * b
|
||||
|
||||
|
||||
available_functions = {
|
||||
'add': add,
|
||||
'multiply': multiply,
|
||||
}
|
||||
|
||||
messages = [{'role': 'user', 'content': 'What is (11434+12341)*412?'}]
|
||||
while True:
|
||||
response: ChatResponse = chat(
|
||||
model='qwen3',
|
||||
messages=messages,
|
||||
tools=[add, multiply],
|
||||
think=True,
|
||||
)
|
||||
messages.append(response.message)
|
||||
print("Thinking: ", response.message.thinking)
|
||||
print("Content: ", response.message.content)
|
||||
if response.message.tool_calls:
|
||||
for tc in response.message.tool_calls:
|
||||
if tc.function.name in available_functions:
|
||||
print(f"Calling {tc.function.name} with arguments {tc.function.arguments}")
|
||||
result = available_functions[tc.function.name](**tc.function.arguments)
|
||||
print(f"Result: {result}")
|
||||
# add the tool result to the messages
|
||||
messages.append({'role': 'tool', 'tool_name': tc.function.name, 'content': str(result)})
|
||||
else:
|
||||
# end the loop when there are no more tool calls
|
||||
break
|
||||
# continue the loop with the updated messages
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="JavaScript">
|
||||
```typescript
|
||||
import ollama from 'ollama'
|
||||
|
||||
type ToolName = 'add' | 'multiply'
|
||||
|
||||
function add(a: number, b: number): number {
|
||||
return a + b
|
||||
}
|
||||
|
||||
function multiply(a: number, b: number): number {
|
||||
return a * b
|
||||
}
|
||||
|
||||
const availableFunctions: Record<ToolName, (a: number, b: number) => number> = {
|
||||
add,
|
||||
multiply,
|
||||
}
|
||||
|
||||
const tools = [
|
||||
{
|
||||
type: 'function',
|
||||
function: {
|
||||
name: 'add',
|
||||
description: 'Add two numbers',
|
||||
parameters: {
|
||||
type: 'object',
|
||||
required: ['a', 'b'],
|
||||
properties: {
|
||||
a: { type: 'integer', description: 'The first number' },
|
||||
b: { type: 'integer', description: 'The second number' },
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
type: 'function',
|
||||
function: {
|
||||
name: 'multiply',
|
||||
description: 'Multiply two numbers',
|
||||
parameters: {
|
||||
type: 'object',
|
||||
required: ['a', 'b'],
|
||||
properties: {
|
||||
a: { type: 'integer', description: 'The first number' },
|
||||
b: { type: 'integer', description: 'The second number' },
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
]
|
||||
|
||||
async function agentLoop() {
|
||||
const messages = [{ role: 'user', content: 'What is (11434+12341)*412?' }]
|
||||
|
||||
while (true) {
|
||||
const response = await ollama.chat({
|
||||
model: 'qwen3',
|
||||
messages,
|
||||
tools,
|
||||
think: true,
|
||||
})
|
||||
|
||||
messages.push(response.message)
|
||||
console.log('Thinking:', response.message.thinking)
|
||||
console.log('Content:', response.message.content)
|
||||
|
||||
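// run any requested tools and feed their results back to the model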
const toolCalls = response.message.tool_calls ?? []
|
||||
if (toolCalls.length) {
|
||||
for (const call of toolCalls) {
|
||||
const fn = availableFunctions[call.function.name as ToolName]
|
||||
if (!fn) {
|
||||
continue
|
||||
}
|
||||
|
||||
const args = call.function.arguments as { a: number; b: number }
|
||||
console.log(`Calling ${call.function.name} with arguments`, args)
|
||||
const result = fn(args.a, args.b)
|
||||
console.log(`Result: ${result}`)
|
||||
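// add the tool result to the messages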
messages.push({ role: 'tool', tool_name: call.function.name, content: String(result) })
|
||||
}
|
||||
} else {
|
||||
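// end the loop when there are no more tool calls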
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
agentLoop().catch(console.error)
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
|
||||
## Tool calling with streaming
|
||||
|
||||
When streaming, gather every chunk of `thinking`, `content`, and `tool_calls`, then return those fields together with any tool results in the follow-up request.
|
||||
|
||||
<Tabs>
|
||||
<Tab title="Python">
|
||||
```python
|
||||
from ollama import chat
|
||||
|
||||
|
||||
def get_temperature(city: str) -> str:
|
||||
"""Get the current temperature for a city
|
||||
|
||||
Args:
|
||||
city: The name of the city
|
||||
|
||||
Returns:
|
||||
The current temperature for the city
|
||||
"""
|
||||
temperatures = {
|
||||
'New York': '22°C',
|
||||
'London': '15°C',
|
||||
}
|
||||
return temperatures.get(city, 'Unknown')
|
||||
|
||||
|
||||
messages = [{'role': 'user', 'content': "What's the temperature in New York?"}]
|
||||
|
||||
while True:
|
||||
stream = chat(
|
||||
model='qwen3',
|
||||
messages=messages,
|
||||
tools=[get_temperature],
|
||||
stream=True,
|
||||
think=True,
|
||||
)
|
||||
|
||||
thinking = ''
|
||||
content = ''
|
||||
tool_calls = []
|
||||
|
||||
done_thinking = False
|
||||
# accumulate the partial fields
|
||||
for chunk in stream:
|
||||
if chunk.message.thinking:
|
||||
thinking += chunk.message.thinking
|
||||
print(chunk.message.thinking, end='', flush=True)
|
||||
if chunk.message.content:
|
||||
if not done_thinking:
|
||||
done_thinking = True
|
||||
print('\n')
|
||||
content += chunk.message.content
|
||||
print(chunk.message.content, end='', flush=True)
|
||||
if chunk.message.tool_calls:
|
||||
tool_calls.extend(chunk.message.tool_calls)
|
||||
print(chunk.message.tool_calls)
|
||||
|
||||
# append accumulated fields to the messages
|
||||
if thinking or content or tool_calls:
|
||||
messages.append({'role': 'assistant', 'thinking': thinking, 'content': content, 'tool_calls': tool_calls})
|
||||
|
||||
if not tool_calls:
|
||||
break
|
||||
|
||||
for call in tool_calls:
|
||||
if call.function.name == 'get_temperature':
|
||||
result = get_temperature(**call.function.arguments)
|
||||
else:
|
||||
result = 'Unknown tool'
|
||||
messages.append({'role': 'tool', 'tool_name': call.function.name, 'content': result})
|
||||
```
|
||||
|
||||
</Tab>
|
||||
<Tab title="JavaScript">
|
||||
```typescript
|
||||
import ollama from 'ollama'
|
||||
|
||||
function getTemperature(city: string): string {
|
||||
const temperatures: Record<string, string> = {
|
||||
'New York': '22°C',
|
||||
'London': '15°C',
|
||||
}
|
||||
return temperatures[city] ?? 'Unknown'
|
||||
}
|
||||
|
||||
const getTemperatureTool = {
|
||||
type: 'function',
|
||||
function: {
|
||||
name: 'get_temperature',
|
||||
description: 'Get the current temperature for a city',
|
||||
parameters: {
|
||||
type: 'object',
|
||||
required: ['city'],
|
||||
properties: {
|
||||
city: { type: 'string', description: 'The name of the city' },
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
async function agentLoop() {
|
||||
const messages = [{ role: 'user', content: "What's the temperature in New York?" }]
|
||||
|
||||
while (true) {
|
||||
const stream = await ollama.chat({
|
||||
model: 'qwen3',
|
||||
messages,
|
||||
tools: [getTemperatureTool],
|
||||
stream: true,
|
||||
think: true,
|
||||
})
|
||||
|
||||
let thinking = ''
|
||||
let content = ''
|
||||
const toolCalls: any[] = []
|
||||
let doneThinking = false
|
||||
|
||||
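// accumulate the partial fields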
for await (const chunk of stream) {
|
||||
if (chunk.message.thinking) {
|
||||
thinking += chunk.message.thinking
|
||||
process.stdout.write(chunk.message.thinking)
|
||||
}
|
||||
if (chunk.message.content) {
|
||||
if (!doneThinking) {
|
||||
doneThinking = true
|
||||
process.stdout.write('\n')
|
||||
}
|
||||
content += chunk.message.content
|
||||
process.stdout.write(chunk.message.content)
|
||||
}
|
||||
if (chunk.message.tool_calls?.length) {
|
||||
toolCalls.push(...chunk.message.tool_calls)
|
||||
console.log(chunk.message.tool_calls)
|
||||
}
|
||||
}
|
||||
|
||||
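// append accumulated fields to the messages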
if (thinking || content || toolCalls.length) {
|
||||
messages.push({ role: 'assistant', thinking, content, tool_calls: toolCalls } as any)
|
||||
}
|
||||
|
||||
if (!toolCalls.length) {
|
||||
break
|
||||
}
|
||||
|
||||
for (const call of toolCalls) {
|
||||
if (call.function.name === 'get_temperature') {
|
||||
const args = call.function.arguments as { city: string }
|
||||
const result = getTemperature(args.city)
|
||||
messages.push({ role: 'tool', tool_name: call.function.name, content: result })
|
||||
} else {
|
||||
messages.push({ role: 'tool', tool_name: call.function.name, content: 'Unknown tool' })
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
agentLoop().catch(console.error)
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
This loop streams the assistant response, accumulates partial fields, passes them back together, and appends the tool results so the model can complete its answer.
|
||||
|
||||
|
||||
## Using functions as tools with the Ollama Python SDK
|
||||
The Python SDK automatically parses functions into tool schemas, so they can be passed directly in the `tools` list.
|
||||
Explicit JSON schemas can still be passed if needed.
|
||||
|
||||
```python
|
||||
from ollama import chat
|
||||
|
||||
def get_temperature(city: str) -> str:
|
||||
"""Get the current temperature for a city
|
||||
|
||||
Args:
|
||||
city: The name of the city
|
||||
|
||||
Returns:
|
||||
The current temperature for the city
|
||||
"""
|
||||
temperatures = {
|
||||
'New York': '22°C',
|
||||
'London': '15°C',
|
||||
}
|
||||
return temperatures.get(city, 'Unknown')
|
||||
|
||||
available_functions = {
|
||||
'get_temperature': get_temperature,
|
||||
}
|
||||
messages = [{'role': 'user', 'content': "What's the temperature in New York?"}]

# directly pass the functions as part of the tools list
response = chat(model='qwen3', messages=messages, tools=list(available_functions.values()), think=True)
|
||||
```
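An explicit JSON schema can also be passed when more control is needed, for example when the function can't be introspected or its docstring doesn't describe the parameters. A minimal sketch, mirroring the schema format used in the JavaScript examples above:

```python
from ollama import chat

# Hand-written schema for the same get_temperature tool defined earlier.
get_temperature_tool = {
    'type': 'function',
    'function': {
        'name': 'get_temperature',
        'description': 'Get the current temperature for a city',
        'parameters': {
            'type': 'object',
            'required': ['city'],
            'properties': {
                'city': {'type': 'string', 'description': 'The name of the city'},
            },
        },
    },
}

response = chat(
    model='qwen3',
    messages=[{'role': 'user', 'content': "What's the temperature in London?"}],
    tools=[get_temperature_tool],
)
```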
|
||||
85
docs/capabilities/vision.mdx
Normal file
@@ -0,0 +1,85 @@
|
||||
---
|
||||
title: Vision
|
||||
---
|
||||
|
||||
Vision models accept images alongside text so the model can describe, classify, and answer questions about what it sees.
|
||||
|
||||
## Quick start
|
||||
|
||||
```shell
|
||||
ollama run gemma3 "What's in this image? ./image.png"
|
||||
```
|
||||
|
||||
|
||||
## Usage with Ollama's API
|
||||
Provide an `images` array. SDKs accept file paths, URLs or raw bytes while the REST API expects base64-encoded image data.
|
||||
|
||||
|
||||
<Tabs>
|
||||
<Tab title="cURL">
|
||||
```shell
|
||||
# 1. Download a sample image
|
||||
curl -L -o test.jpg "https://upload.wikimedia.org/wikipedia/commons/3/3a/Cat03.jpg"
|
||||
|
||||
# 2. Encode the image
|
||||
IMG=$(base64 < test.jpg | tr -d '\n')
|
||||
|
||||
# 3. Send it to Ollama
|
||||
curl -X POST http://localhost:11434/api/chat \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"model": "gemma3",
|
||||
"messages": [{
|
||||
"role": "user",
|
||||
"content": "What is in this image?",
|
||||
"images": ["'"$IMG"'"]
|
||||
}],
|
||||
"stream": false
|
||||
}'
|
||||
"
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="Python">
|
||||
```python
|
||||
from ollama import chat
|
||||
# import base64
# from pathlib import Path
|
||||
|
||||
# Pass in the path to the image
|
||||
path = input('Please enter the path to the image: ')
|
||||
|
||||
# You can also pass in base64 encoded image data
|
||||
# img = base64.b64encode(Path(path).read_bytes()).decode()
|
||||
# or the raw bytes
|
||||
# img = Path(path).read_bytes()
|
||||
|
||||
response = chat(
|
||||
model='gemma3',
|
||||
messages=[
|
||||
{
|
||||
'role': 'user',
|
||||
'content': 'What is in this image? Be concise.',
|
||||
'images': [path],
|
||||
}
|
||||
],
|
||||
)
|
||||
|
||||
print(response.message.content)
|
||||
```
|
||||
</Tab>
|
||||
<Tab title="JavaScript">
|
||||
```javascript
|
||||
import ollama from 'ollama'
|
||||
|
||||
const imagePath = '/absolute/path/to/image.jpg'
|
||||
const response = await ollama.chat({
|
||||
model: 'gemma3',
|
||||
messages: [
|
||||
{ role: 'user', content: 'What is in this image?', images: [imagePath] }
|
||||
],
|
||||
stream: false,
|
||||
})
|
||||
|
||||
console.log(response.message.content)
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
360
docs/capabilities/web-search.mdx
Normal file
@@ -0,0 +1,360 @@
|
||||
---
|
||||
title: Web search
|
||||
---
|
||||
|
||||
Ollama's web search API can be used to augment models with the latest information to reduce hallucinations and improve accuracy.
|
||||
|
||||
Web search is provided as a REST API, with deeper tool integrations in the Python and JavaScript libraries. This also enables models such as OpenAI's gpt-oss to conduct long-running research tasks.
|
||||
|
||||
## Authentication
|
||||
|
||||
For access to Ollama's web search API, create an [API key](https://ollama.com/settings/keys). A free Ollama account is required.
|
||||
|
||||
## Web search API
|
||||
|
||||
Performs a web search for a single query and returns relevant results.
|
||||
|
||||
### Request
|
||||
|
||||
`POST https://ollama.com/api/web_search`
|
||||
|
||||
- `query` (string, required): the search query string
|
||||
- `max_results` (integer, optional): maximum results to return (default 5, max 10)
|
||||
|
||||
### Response
|
||||
|
||||
Returns an object containing:
|
||||
|
||||
- `results` (array): array of search result objects, each containing:
|
||||
- `title` (string): the title of the web page
|
||||
- `url` (string): the URL of the web page
|
||||
- `content` (string): relevant content snippet from the web page
|
||||
|
||||
### Examples
|
||||
|
||||
<Note>
|
||||
Ensure `OLLAMA_API_KEY` is set, or pass the key directly in the `Authorization` header.
|
||||
</Note>
|
||||
|
||||
#### cURL Request
|
||||
|
||||
```bash
|
||||
curl https://ollama.com/api/web_search \
|
||||
--header "Authorization: Bearer $OLLAMA_API_KEY" \
|
||||
-d '{
|
||||
"query":"what is ollama?"
|
||||
}'
|
||||
```
|
||||
|
||||
**Response**
|
||||
|
||||
```json
|
||||
{
|
||||
"results": [
|
||||
{
|
||||
"title": "Ollama",
|
||||
"url": "https://ollama.com/",
|
||||
"content": "Cloud models are now available..."
|
||||
},
|
||||
{
|
||||
"title": "What is Ollama? Introduction to the AI model management tool",
|
||||
"url": "https://www.hostinger.com/tutorials/what-is-ollama",
|
||||
"content": "Ariffud M. 6min Read..."
|
||||
},
|
||||
{
|
||||
"title": "Ollama Explained: Transforming AI Accessibility and Language ...",
|
||||
"url": "https://www.geeksforgeeks.org/artificial-intelligence/ollama-explained-transforming-ai-accessibility-and-language-processing/",
|
||||
"content": "Data Science Data Science Projects Data Analysis..."
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
#### Python library
|
||||
|
||||
```python
|
||||
import ollama
|
||||
response = ollama.web_search("What is Ollama?")
|
||||
print(response)
|
||||
```
|
||||
|
||||
**Example output**
|
||||
|
||||
```python
|
||||
|
||||
results = [
|
||||
{
|
||||
"title": "Ollama",
|
||||
"url": "https://ollama.com/",
|
||||
"content": "Cloud models are now available in Ollama..."
|
||||
},
|
||||
{
|
||||
"title": "What is Ollama? Features, Pricing, and Use Cases - Walturn",
|
||||
"url": "https://www.walturn.com/insights/what-is-ollama-features-pricing-and-use-cases",
|
||||
"content": "Our services..."
|
||||
},
|
||||
{
|
||||
"title": "Complete Ollama Guide: Installation, Usage & Code Examples",
|
||||
"url": "https://collabnix.com/complete-ollama-guide-installation-usage-code-examples",
|
||||
"content": "Join our Discord Server..."
|
||||
}
|
||||
]
|
||||
|
||||
```
|
||||
|
||||
See the full [Python example](https://github.com/ollama/ollama-python/blob/main/examples/web-search.py) in the ollama-python repository.
|
||||
|
||||
#### JavaScript Library
|
||||
|
||||
```typescript
|
||||
import { Ollama } from "ollama";
|
||||
|
||||
const client = new Ollama();
|
||||
const results = await client.webSearch({ query: "what is ollama?" });
|
||||
console.log(JSON.stringify(results, null, 2));
|
||||
```
|
||||
|
||||
**Example output**
|
||||
|
||||
```json
|
||||
{
|
||||
"results": [
|
||||
{
|
||||
"title": "Ollama",
|
||||
"url": "https://ollama.com/",
|
||||
"content": "Cloud models are now available..."
|
||||
},
|
||||
{
|
||||
"title": "What is Ollama? Introduction to the AI model management tool",
|
||||
"url": "https://www.hostinger.com/tutorials/what-is-ollama",
|
||||
"content": "Ollama is an open-source tool..."
|
||||
},
|
||||
{
|
||||
"title": "Ollama Explained: Transforming AI Accessibility and Language Processing",
|
||||
"url": "https://www.geeksforgeeks.org/artificial-intelligence/ollama-explained-transforming-ai-accessibility-and-language-processing/",
|
||||
"content": "Ollama is a groundbreaking..."
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
See the full [JavaScript example](https://github.com/ollama/ollama-js/blob/main/examples/websearch/websearch-tools.ts) in the ollama-js repository.
|
||||
|
||||
## Web fetch API
|
||||
|
||||
Fetches a single web page by URL and returns its content.
|
||||
|
||||
### Request
|
||||
|
||||
`POST https://ollama.com/api/web_fetch`
|
||||
|
||||
- `url` (string, required): the URL to fetch
|
||||
|
||||
### Response
|
||||
|
||||
Returns an object containing:
|
||||
|
||||
- `title` (string): the title of the web page
|
||||
- `content` (string): the main content of the web page
|
||||
- `links` (array): array of links found on the page
|
||||
|
||||
### Examples
|
||||
|
||||
#### cURL Request
|
||||
|
||||
```bash
|
||||
curl --request POST \
|
||||
--url https://ollama.com/api/web_fetch \
|
||||
--header "Authorization: Bearer $OLLAMA_API_KEY" \
|
||||
--header 'Content-Type: application/json' \
|
||||
--data '{
|
||||
"url": "ollama.com"
|
||||
}'
|
||||
```
|
||||
|
||||
**Response**
|
||||
|
||||
```json
|
||||
{
|
||||
"title": "Ollama",
|
||||
"content": "[Cloud models](https://ollama.com/blog/cloud-models) are now available in Ollama...",
|
||||
"links": [
|
||||
"http://ollama.com/",
|
||||
"http://ollama.com/models",
|
||||
"https://github.com/ollama/ollama"
|
||||
]
}
|
||||
|
||||
```
|
||||
|
||||
#### Python SDK
|
||||
|
||||
```python
|
||||
from ollama import web_fetch
|
||||
|
||||
result = web_fetch('https://ollama.com')
|
||||
print(result)
|
||||
```
|
||||
|
||||
**Result**
|
||||
|
||||
```python
|
||||
WebFetchResponse(
|
||||
title='Ollama',
|
||||
content='[Cloud models](https://ollama.com/blog/cloud-models) are now available in Ollama\n\n**Chat & build
|
||||
with open models**\n\n[Download](https://ollama.com/download) [Explore
|
||||
models](https://ollama.com/models)\n\nAvailable for macOS, Windows, and Linux',
|
||||
links=['https://ollama.com/', 'https://ollama.com/models', 'https://github.com/ollama/ollama']
|
||||
)
|
||||
```
|
||||
|
||||
#### JavaScript SDK
|
||||
|
||||
```typescript
|
||||
import { Ollama } from "ollama";
|
||||
|
||||
const client = new Ollama();
|
||||
const fetchResult = await client.webFetch({ url: "https://ollama.com" });
|
||||
console.log(JSON.stringify(fetchResult, null, 2));
|
||||
```
|
||||
|
||||
**Result**
|
||||
|
||||
```json
|
||||
{
|
||||
"title": "Ollama",
|
||||
"content": "[Cloud models](https://ollama.com/blog/cloud-models) are now available in Ollama...",
|
||||
"links": [
|
||||
"https://ollama.com/",
|
||||
"https://ollama.com/models",
|
||||
"https://github.com/ollama/ollama"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## Building a search agent
|
||||
|
||||
Use Ollama’s web search API as a tool to build a mini search agent.
|
||||
|
||||
This example uses Alibaba’s Qwen 3 model with 4B parameters.
|
||||
|
||||
```bash
|
||||
ollama pull qwen3:4b
|
||||
```
|
||||
|
||||
```python
|
||||
from ollama import chat, web_fetch, web_search
|
||||
|
||||
available_tools = {'web_search': web_search, 'web_fetch': web_fetch}
|
||||
|
||||
messages = [{'role': 'user', 'content': "what is ollama's new engine"}]
|
||||
|
||||
while True:
|
||||
response = chat(
|
||||
model='qwen3:4b',
|
||||
messages=messages,
|
||||
tools=[web_search, web_fetch],
|
||||
think=True
|
||||
)
|
||||
if response.message.thinking:
|
||||
print('Thinking: ', response.message.thinking)
|
||||
if response.message.content:
|
||||
print('Content: ', response.message.content)
|
||||
messages.append(response.message)
|
||||
if response.message.tool_calls:
|
||||
print('Tool calls: ', response.message.tool_calls)
|
||||
for tool_call in response.message.tool_calls:
|
||||
function_to_call = available_tools.get(tool_call.function.name)
|
||||
if function_to_call:
|
||||
args = tool_call.function.arguments
|
||||
result = function_to_call(**args)
|
||||
print('Result: ', str(result)[:200]+'...')
|
||||
# Result is truncated for limited context lengths
|
||||
messages.append({'role': 'tool', 'content': str(result)[:2000 * 4], 'tool_name': tool_call.function.name})
|
||||
else:
|
||||
messages.append({'role': 'tool', 'content': f'Tool {tool_call.function.name} not found', 'tool_name': tool_call.function.name})
|
||||
else:
|
||||
break
|
||||
```
|
||||
|
||||
**Result**
|
||||
|
||||
```
|
||||
Thinking: Okay, the user is asking about Ollama's new engine. I need to figure out what they're referring to. Ollama is a company that develops large language models, so maybe they've released a new model or an updated version of their existing engine....
|
||||
|
||||
Tool calls: [ToolCall(function=Function(name='web_search', arguments={'max_results': 3, 'query': 'Ollama new engine'}))]
|
||||
Result: results=[WebSearchResult(content='# New model scheduling\n\n## September 23, 2025\n\nOllama now includes a significantly improved model scheduling system. Ahead of running a model, Ollama’s new engine
|
||||
|
||||
Thinking: Okay, the user asked about Ollama's new engine. Let me look at the search results.
|
||||
|
||||
First result is from September 23, 2025, talking about new model scheduling. It mentions improved memory management, reduced crashes, better GPU utilization, and multi-GPU performance. Examples show speed improvements and accurate memory reporting. Supported models include gemma3, llama4, qwen3, etc...
|
||||
|
||||
Content: Ollama has introduced two key updates to its engine, both released in 2025:
|
||||
|
||||
1. **Enhanced Model Scheduling (September 23, 2025)**
|
||||
- **Precision Memory Management**: Exact memory allocation reduces out-of-memory crashes and optimizes GPU utilization.
|
||||
- **Performance Gains**: Examples show significant speed improvements (e.g., 85.54 tokens/s vs 52.02 tokens/s) and full GPU layer utilization.
|
||||
- **Multi-GPU Support**: Improved efficiency across multiple GPUs, with accurate memory reporting via tools like `nvidia-smi`.
|
||||
- **Supported Models**: Includes `gemma3`, `llama4`, `qwen3`, `mistral-small3.2`, and more.
|
||||
|
||||
2. **Multimodal Engine (May 15, 2025)**
|
||||
- **Vision Support**: First-class support for vision models, including `llama4:scout` (109B parameters), `gemma3`, `qwen2.5vl`, and `mistral-small3.1`.
|
||||
- **Multimodal Tasks**: Examples include identifying animals in multiple images, answering location-based questions from videos, and document scanning.
|
||||
|
||||
These updates highlight Ollama's focus on efficiency, performance, and expanded capabilities for both text and vision tasks.
|
||||
```
|
||||
|
||||
### Context length and agents
|
||||
|
||||
Web search results can return thousands of tokens. It is recommended to increase the context length of the model to at least ~32000 tokens. Search agents work best with full context length. [Ollama's cloud models](https://docs.ollama.com/cloud) run at the full context length.
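With the Python SDK, a larger context window can be requested per call through the `options` parameter; a minimal sketch, assuming the model supports a 32,000-token window (`num_ctx` is the Ollama option that controls context length):

```python
from ollama import chat, web_search

response = chat(
    model='qwen3:4b',
    messages=[{'role': 'user', 'content': "what is ollama's new engine"}],
    tools=[web_search],
    options={'num_ctx': 32000},  # request a 32,000-token context window
)
```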
|
||||
|
||||
## MCP Server
|
||||
|
||||
You can enable web search in any MCP client through the [Python MCP server](https://github.com/ollama/ollama-python/blob/main/examples/web-search-mcp.py).
|
||||
|
||||
### Cline
|
||||
|
||||
Ollama's web search can be easily integrated with Cline using the MCP server configuration.
|
||||
|
||||
`Manage MCP Servers` > `Configure MCP Servers` > Add the following configuration:
|
||||
|
||||
```json
|
||||
{
|
||||
"mcpServers": {
|
||||
"web_search_and_fetch": {
|
||||
"type": "stdio",
|
||||
"command": "uv",
|
||||
"args": ["run", "path/to/web-search-mcp.py"],
|
||||
"env": { "OLLAMA_API_KEY": "your_api_key_here" }
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||

|
||||
|
||||
### Codex
|
||||
|
||||
Ollama works well with OpenAI's Codex tool.
|
||||
|
||||
Add the following configuration to `~/.codex/config.toml`:
|
||||
|
||||
```toml
|
||||
[mcp_servers.web_search]
|
||||
command = "uv"
|
||||
args = ["run", "path/to/web-search-mcp.py"]
|
||||
env = { "OLLAMA_API_KEY" = "your_api_key_here" }
|
||||
```
|
||||
|
||||

|
||||
|
||||
### Goose
|
||||
|
||||
Ollama can integrate with Goose via its MCP feature.
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
### Other integrations
|
||||
|
||||
Ollama can be integrated with most available tools through direct use of Ollama's API, the Python and JavaScript libraries, the OpenAI-compatible API, or the MCP server.
|
||||
91
docs/cli.mdx
Normal file
@@ -0,0 +1,91 @@
|
||||
---
|
||||
title: CLI Reference
|
||||
---
|
||||
|
||||
### Run a model
|
||||
|
||||
```
|
||||
ollama run gemma3
|
||||
```
|
||||
|
||||
#### Multiline input
|
||||
|
||||
For multiline input, you can wrap text with `"""`:
|
||||
|
||||
```
|
||||
>>> """Hello,
|
||||
... world!
|
||||
... """
|
||||
I'm a basic program that prints the famous "Hello, world!" message to the console.
|
||||
```
|
||||
|
||||
#### Multimodal models
|
||||
|
||||
```
|
||||
ollama run gemma3 "What's in this image? /Users/jmorgan/Desktop/smile.png"
|
||||
```
|
||||
|
||||
### Download a model
|
||||
|
||||
```
|
||||
ollama pull gemma3
|
||||
```
|
||||
|
||||
### Remove a model
|
||||
|
||||
```
|
||||
ollama rm gemma3
|
||||
```
|
||||
|
||||
### List models
|
||||
|
||||
```
|
||||
ollama ls
|
||||
```
|
||||
|
||||
### Sign in to Ollama
|
||||
|
||||
```
|
||||
ollama signin
|
||||
```
|
||||
|
||||
### Sign out of Ollama
|
||||
|
||||
```
|
||||
ollama signout
|
||||
```
|
||||
|
||||
### Create a customized model
|
||||
|
||||
First, create a `Modelfile`
|
||||
|
||||
```
|
||||
FROM gemma3
|
||||
SYSTEM """You are a happy cat."""
|
||||
```
|
||||
|
||||
Then run `ollama create`:
|
||||
|
||||
```
|
||||
ollama create happy-cat -f Modelfile
|
||||
```
|
||||
|
||||
### List running models
|
||||
|
||||
```
|
||||
ollama ps
|
||||
```
|
||||
|
||||
### Stop a running model
|
||||
|
||||
```
|
||||
ollama stop gemma3
|
||||
```
|
||||
|
||||
### Start Ollama
|
||||
|
||||
```
|
||||
ollama serve
|
||||
```
|
||||
|
||||
To view a list of environment variables that can be set, run `ollama serve --help`.
|
||||
217
docs/cloud.mdx
@@ -1,19 +1,33 @@
|
||||
# Cloud
|
||||
---
|
||||
title: Cloud
|
||||
sidebarTitle: Cloud
|
||||
---
|
||||
|
||||
| Ollama's cloud is currently in preview. For full documentation, see [Ollama's documentation](https://docs.ollama.com/cloud).
|
||||
<Info>Ollama's cloud is currently in preview.</Info>
|
||||
|
||||
## Cloud Models
|
||||
|
||||
[Cloud models](https://ollama.com/cloud) are a new kind of model in Ollama that can run without a powerful GPU. Instead, cloud models are automatically offloaded to Ollama's cloud while offering the same capabilities as local models, making it possible to keep using your local tools while running larger models that wouldn’t fit on a personal computer.
|
||||
Ollama's cloud models are a new kind of model in Ollama that can run without a powerful GPU. Instead, cloud models are automatically offloaded to Ollama's cloud service while offering the same capabilities as local models, making it possible to keep using your local tools while running larger models that wouldn't fit on a personal computer.
|
||||
|
||||
Ollama currently supports the following cloud models, with more coming soon:
|
||||
|
||||
- `deepseek-v3.1:671b-cloud`
|
||||
- `gpt-oss:20b-cloud`
|
||||
- `gpt-oss:120b-cloud`
|
||||
- `deepseek-v3.1:671b-cloud`
|
||||
- `kimi-k2:1t-cloud`
|
||||
- `qwen3-coder:480b-cloud`
|
||||
- `glm-4.6:cloud`
|
||||
|
||||
### Get started
|
||||
### Running Cloud models
|
||||
|
||||
Ollama's cloud models require an account on [ollama.com](https://ollama.com). To sign in or create an account, run:
|
||||
|
||||
```
|
||||
ollama signin
|
||||
```
|
||||
|
||||
<Tabs>
|
||||
<Tab title="CLI">
|
||||
|
||||
To run a cloud model, open the terminal and run:
|
||||
|
||||
@@ -21,20 +35,201 @@ To run a cloud model, open the terminal and run:
|
||||
ollama run gpt-oss:120b-cloud
|
||||
```
|
||||
|
||||
To run cloud models with integrations that work with Ollama, first download the cloud model:
|
||||
</Tab>
|
||||
<Tab title="Python">
|
||||
|
||||
First, pull a cloud model so it can be accessed:
|
||||
|
||||
```
|
||||
ollama pull qwen3-coder:480b-cloud
|
||||
ollama pull gpt-oss:120b-cloud
|
||||
```
|
||||
|
||||
Then sign in to Ollama:
|
||||
Next, install [Ollama's Python library](https://github.com/ollama/ollama-python):
|
||||
|
||||
```
|
||||
ollama signin
|
||||
pip install ollama
|
||||
```
|
||||
|
||||
Finally, access the model using the model name `qwen3-coder:480b-cloud` via Ollama's local API or tooling.
|
||||
Next, create and run a simple Python script:
|
||||
|
||||
```python
|
||||
from ollama import Client
|
||||
|
||||
client = Client()
|
||||
|
||||
messages = [
|
||||
{
|
||||
'role': 'user',
|
||||
'content': 'Why is the sky blue?',
|
||||
},
|
||||
]
|
||||
|
||||
for part in client.chat('gpt-oss:120b-cloud', messages=messages, stream=True):
|
||||
print(part['message']['content'], end='', flush=True)
|
||||
```
|
||||
|
||||
</Tab>
|
||||
<Tab title="JavaScript">
|
||||
|
||||
First, pull a cloud model so it can be accessed:
|
||||
|
||||
```
|
||||
ollama pull gpt-oss:120b-cloud
|
||||
```
|
||||
|
||||
Next, install [Ollama's JavaScript library](https://github.com/ollama/ollama-js):
|
||||
|
||||
```
|
||||
npm i ollama
|
||||
```
|
||||
|
||||
Then use the library to run a cloud model:
|
||||
|
||||
```typescript
|
||||
import { Ollama } from "ollama";
|
||||
|
||||
const ollama = new Ollama();
|
||||
|
||||
const response = await ollama.chat({
|
||||
model: "gpt-oss:120b-cloud",
|
||||
messages: [{ role: "user", content: "Explain quantum computing" }],
|
||||
stream: true,
|
||||
});
|
||||
|
||||
for await (const part of response) {
|
||||
process.stdout.write(part.message.content);
|
||||
}
|
||||
```
|
||||
|
||||
</Tab>
|
||||
<Tab title="cURL">
|
||||
|
||||
First, pull a cloud model so it can be accessed:
|
||||
|
||||
```
|
||||
ollama pull gpt-oss:120b-cloud
|
||||
```
|
||||
|
||||
Run the following cURL command to generate a response via Ollama's local API:
|
||||
|
||||
```
|
||||
curl http://localhost:11434/api/chat -d '{
|
||||
"model": "gpt-oss:120b-cloud",
|
||||
"messages": [{
|
||||
"role": "user",
|
||||
"content": "Why is the sky blue?"
|
||||
}],
|
||||
"stream": false
|
||||
}'
|
||||
```
|
||||
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
## Cloud API access
|
||||
|
||||
Cloud models can also be accessed directly on ollama.com's API. For more information, see the [docs](https://docs.ollama.com/cloud).
|
||||
Cloud models can also be accessed directly on ollama.com's API. In this mode, ollama.com acts as a remote Ollama host.
|
||||
|
||||
### Authentication
|
||||
|
||||
For direct access to ollama.com's API, first create an [API key](https://ollama.com/settings/keys).
|
||||
|
||||
Then, set the `OLLAMA_API_KEY` environment variable to your API key.
|
||||
|
||||
```
|
||||
export OLLAMA_API_KEY=your_api_key
|
||||
```
|
||||
|
||||
### Listing models
|
||||
|
||||
Models available directly via ollama.com's API can be listed with:
|
||||
|
||||
```
|
||||
curl https://ollama.com/api/tags
|
||||
```
|
||||
|
||||
### Generating a response
|
||||
|
||||
<Tabs>
|
||||
<Tab title="Python">
|
||||
|
||||
First, install [Ollama's Python library](https://github.com/ollama/ollama-python):
|
||||
|
||||
```
|
||||
pip install ollama
|
||||
```
|
||||
|
||||
Then make a request:
|
||||
|
||||
```python
|
||||
import os
|
||||
from ollama import Client
|
||||
|
||||
client = Client(
|
||||
host="https://ollama.com",
|
||||
headers={'Authorization': 'Bearer ' + os.environ.get('OLLAMA_API_KEY')}
|
||||
)
|
||||
|
||||
messages = [
|
||||
{
|
||||
'role': 'user',
|
||||
'content': 'Why is the sky blue?',
|
||||
},
|
||||
]
|
||||
|
||||
for part in client.chat('gpt-oss:120b', messages=messages, stream=True):
|
||||
print(part['message']['content'], end='', flush=True)
|
||||
```
|
||||
|
||||
</Tab>
|
||||
<Tab title="JavaScript">
|
||||
|
||||
First, install [Ollama's JavaScript library](https://github.com/ollama/ollama-js):
|
||||
|
||||
```
|
||||
npm i ollama
|
||||
```
|
||||
|
||||
Next, make a request to the model:
|
||||
|
||||
```typescript
|
||||
import { Ollama } from "ollama";
|
||||
|
||||
const ollama = new Ollama({
|
||||
host: "https://ollama.com",
|
||||
headers: {
|
||||
Authorization: "Bearer " + process.env.OLLAMA_API_KEY,
|
||||
},
|
||||
});
|
||||
|
||||
const response = await ollama.chat({
|
||||
model: "gpt-oss:120b",
|
||||
messages: [{ role: "user", content: "Explain quantum computing" }],
|
||||
stream: true,
|
||||
});
|
||||
|
||||
for await (const part of response) {
|
||||
process.stdout.write(part.message.content);
|
||||
}
|
||||
```
|
||||
|
||||
</Tab>
|
||||
<Tab title="cURL">
|
||||
|
||||
Generate a response via Ollama's chat API:
|
||||
|
||||
```
|
||||
curl https://ollama.com/api/chat \
|
||||
-H "Authorization: Bearer $OLLAMA_API_KEY" \
|
||||
-d '{
|
||||
"model": "gpt-oss:120b",
|
||||
"messages": [{
|
||||
"role": "user",
|
||||
"content": "Why is the sky blue?"
|
||||
}],
|
||||
"stream": false
|
||||
}'
|
||||
```
|
||||
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
38
docs/context-length.mdx
Normal file
@@ -0,0 +1,38 @@
|
||||
---
|
||||
title: Context length
|
||||
---
|
||||
|
||||
Context length is the maximum number of tokens that the model has access to in memory.
|
||||
|
||||
<Note>
|
||||
The default context length in Ollama is 4096 tokens.
|
||||
</Note>
|
||||
|
||||
Tasks that require a large context, such as web search, agents, and coding tools, work best with a context length of at least 32,000 tokens.
|
||||
|
||||
## Setting context length
|
||||
|
||||
Setting a larger context length will increase the amount of memory required to run a model. Ensure you have enough VRAM available to increase the context length.
|
||||
|
||||
Cloud models are set to their maximum context length by default.
|
||||
|
||||
### App
|
||||
|
||||
Change the slider in the Ollama app under settings to your desired context length.
|
||||

|
||||
|
||||
### CLI
|
||||
If the context length cannot be changed in the app, it can also be set when starting the Ollama server:
|
||||
```
|
||||
OLLAMA_CONTEXT_LENGTH=32000 ollama serve
|
||||
```
|
||||
|
||||
### Check allocated context length and model offloading
|
||||
For best performance, use the maximum context length for a model, and avoid offloading the model to CPU. Verify the split under `PROCESSOR` using `ollama ps`.
|
||||
```
|
||||
ollama ps
|
||||
```
|
||||
```
|
||||
NAME ID SIZE PROCESSOR CONTEXT UNTIL
|
||||
gemma3:latest a2af6cc3eb7f 6.6 GB 100% GPU 65536 2 minutes from now
|
||||
```
|
||||
@@ -1,21 +1,21 @@
|
||||
# Ollama Docker image
|
||||
|
||||
### CPU only
|
||||
## CPU only
|
||||
|
||||
```shell
|
||||
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
|
||||
```
|
||||
|
||||
### Nvidia GPU
|
||||
## Nvidia GPU
|
||||
|
||||
Install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html#installation).
|
||||
|
||||
#### Install with Apt
|
||||
### Install with Apt
|
||||
|
||||
1. Configure the repository
|
||||
|
||||
```shell
|
||||
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey \
|
||||
| sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
|
||||
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
|
||||
curl -fsSL https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
|
||||
| sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
|
||||
| sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
|
||||
sudo apt-get update
|
||||
@@ -27,11 +27,12 @@ Install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-
|
||||
sudo apt-get install -y nvidia-container-toolkit
|
||||
```
|
||||
|
||||
#### Install with Yum or Dnf
|
||||
### Install with Yum or Dnf
|
||||
|
||||
1. Configure the repository
|
||||
|
||||
```shell
|
||||
curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo \
|
||||
curl -fsSL https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo \
|
||||
| sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
|
||||
```
|
||||
|
||||
@@ -41,23 +42,25 @@ Install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-
|
||||
sudo yum install -y nvidia-container-toolkit
|
||||
```
|
||||
|
||||
#### Configure Docker to use Nvidia driver
|
||||
### Configure Docker to use Nvidia driver
|
||||
|
||||
```shell
|
||||
sudo nvidia-ctk runtime configure --runtime=docker
|
||||
sudo systemctl restart docker
|
||||
```
|
||||
|
||||
#### Start the container
|
||||
### Start the container
|
||||
|
||||
```shell
|
||||
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
|
||||
```
|
||||
|
||||
> [!NOTE]
|
||||
> If you're running on an NVIDIA JetPack system, Ollama can't automatically discover the correct JetPack version. Pass the environment variable JETSON_JETPACK=5 or JETSON_JETPACK=6 to the container to select version 5 or 6.
|
||||
<Note>
|
||||
If you're running on an NVIDIA JetPack system, Ollama can't automatically discover the correct JetPack version.
|
||||
Pass the environment variable `JETSON_JETPACK=5` or `JETSON_JETPACK=6` to the container to select version 5 or 6.
|
||||
</Note>
|
||||
|
||||
### AMD GPU
|
||||
## AMD GPU
|
||||
|
||||
To run Ollama using Docker with AMD GPUs, use the `rocm` tag and the following command:
|
||||
|
||||
@@ -65,7 +68,7 @@ To run Ollama using Docker with AMD GPUs, use the `rocm` tag and the following c
|
||||
docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm
|
||||
```
|
||||
|
||||
### Run model locally
|
||||
## Run model locally
|
||||
|
||||
Now you can run a model:
|
||||
|
||||
@@ -73,6 +76,6 @@ Now you can run a model:
|
||||
docker exec -it ollama ollama run llama3.2
|
||||
```
|
||||
|
||||
### Try different models
|
||||
## Try different models
|
||||
|
||||
More models can be found on the [Ollama library](https://ollama.com/library).
|
||||
|
||||
155
docs/docs.json
Normal file
@@ -0,0 +1,155 @@
|
||||
{
|
||||
"$schema": "https://mintlify.com/docs.json",
|
||||
"name": "Ollama",
|
||||
"colors": {
|
||||
"primary": "#000",
|
||||
"light": "#b5b5b5",
|
||||
"dark": "#000"
|
||||
},
|
||||
"favicon": "/images/favicon.png",
|
||||
"logo": {
|
||||
"light": "/images/logo.png",
|
||||
"dark": "/images/logo-dark.png",
|
||||
"href": "https://ollama.com"
|
||||
},
|
||||
"theme": "maple",
|
||||
"background": {
|
||||
"color": {
|
||||
"light": "#ffffff",
|
||||
"dark": "#000000"
|
||||
}
|
||||
},
|
||||
"fonts": {
|
||||
"family": "system-ui",
|
||||
"heading": {
|
||||
"family": "system-ui"
|
||||
},
|
||||
"body": {
|
||||
"family": "system-ui"
|
||||
}
|
||||
},
|
||||
"styling": {
|
||||
"codeblocks": "system"
|
||||
},
|
||||
"contextual": {
|
||||
"options": ["copy"]
|
||||
},
|
||||
"navbar": {
|
||||
"links": [
|
||||
{
|
||||
"label": "Sign in",
|
||||
"href": "https://ollama.com/signin"
|
||||
}
|
||||
],
|
||||
"primary": {
|
||||
"type": "button",
|
||||
"label": "Download",
|
||||
"href": "https://ollama.com/download"
|
||||
}
|
||||
},
|
||||
"api": {
|
||||
"playground": {
|
||||
"display": "simple"
|
||||
},
|
||||
"examples": {
|
||||
"languages": ["curl"]
|
||||
}
|
||||
},
|
||||
"redirects": [
|
||||
{
|
||||
"source": "/openai",
|
||||
"destination": "/api/openai"
|
||||
}
|
||||
],
|
||||
"navigation": {
|
||||
"tabs": [
|
||||
{
|
||||
"tab": "Documentation",
|
||||
"groups": [
|
||||
{
|
||||
"group": "Get started",
|
||||
"pages": [
|
||||
"index",
|
||||
"quickstart",
|
||||
"/cloud"
|
||||
]
|
||||
},
|
||||
{
|
||||
"group": "Capabilities",
|
||||
"pages": [
|
||||
"/capabilities/streaming",
|
||||
"/capabilities/thinking",
|
||||
"/capabilities/structured-outputs",
|
||||
"/capabilities/vision",
|
||||
"/capabilities/embeddings",
|
||||
"/capabilities/tool-calling",
|
||||
"/capabilities/web-search"
|
||||
]
|
||||
},
|
||||
{
|
||||
"group": "Integrations",
|
||||
"pages": [
|
||||
"/integrations/vscode",
|
||||
"/integrations/jetbrains",
|
||||
"/integrations/codex",
|
||||
"/integrations/cline",
|
||||
"/integrations/droid",
|
||||
"/integrations/goose",
|
||||
"/integrations/zed",
|
||||
"/integrations/roo-code",
|
||||
"/integrations/n8n",
|
||||
"/integrations/xcode"
|
||||
]
|
||||
},
|
||||
{
|
||||
"group": "More information",
|
||||
"pages": [
|
||||
"/cli",
|
||||
"/modelfile",
|
||||
"/context-length",
|
||||
"/linux",
|
||||
"/docker",
|
||||
"/faq",
|
||||
"/gpu",
|
||||
"/troubleshooting"
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"tab": "API Reference",
|
||||
"openapi": "/openapi.yaml",
|
||||
"groups": [
|
||||
{
|
||||
"group": "API Reference",
|
||||
"pages": [
|
||||
"/api/index",
|
||||
"/api/authentication",
|
||||
"/api/streaming",
|
||||
"/api/usage",
|
||||
"/api/errors",
|
||||
"/api/openai-compatibility"
|
||||
]
|
||||
},
|
||||
{
|
||||
"group": "Endpoints",
|
||||
"pages": [
|
||||
"POST /api/generate",
|
||||
"POST /api/chat",
|
||||
"POST /api/embed",
|
||||
"GET /api/tags",
|
||||
"GET /api/ps",
|
||||
"POST /api/show",
|
||||
"POST /api/create",
|
||||
"POST /api/copy",
|
||||
"POST /api/pull",
|
||||
"POST /api/push",
|
||||
"DELETE /api/delete",
|
||||
"GET /api/version"
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
102
docs/faq.mdx
@@ -1,4 +1,6 @@
|
||||
# FAQ
|
||||
---
|
||||
title: FAQ
|
||||
---
|
||||
|
||||
## How can I upgrade Ollama?
|
||||
|
||||
@@ -20,9 +22,9 @@ Please refer to the [GPU docs](./gpu.md).
|
||||
|
||||
## How can I specify the context window size?
|
||||
|
||||
By default, Ollama uses a context window size of 4096 tokens for most models. The `gpt-oss` model has a default context window size of 8192 tokens.
|
||||
By default, Ollama uses a context window size of 2048 tokens.
|
||||
|
||||
This can be overridden in Settings in the Windows and macOS App, or with the `OLLAMA_CONTEXT_LENGTH` environment variable. For example, to set the default context window to 8K, use:
|
||||
This can be overridden with the `OLLAMA_CONTEXT_LENGTH` environment variable. For example, to set the default context window to 8K, use:
|
||||
|
||||
```shell
|
||||
OLLAMA_CONTEXT_LENGTH=8192 ollama serve
|
||||
@@ -46,8 +48,6 @@ curl http://localhost:11434/api/generate -d '{
|
||||
}'
|
||||
```
|
||||
|
||||
Setting the context length higher may cause the model to not be able to fit onto the GPU which make the model run more slowly.
|
||||
|
||||
## How can I tell if my model was loaded onto the GPU?
|
||||
|
||||
Use the `ollama ps` command to see what models are currently loaded into memory.
|
||||
@@ -56,17 +56,16 @@ Use the `ollama ps` command to see what models are currently loaded into memory.
|
||||
ollama ps
|
||||
```
|
||||
|
||||
> **Output**:
|
||||
>
|
||||
> ```
|
||||
> NAME ID SIZE PROCESSOR CONTEXT UNTIL
|
||||
> gpt-oss:20b 05afbac4bad6 16 GB 100% GPU 8192 4 minutes from now
|
||||
> ```
|
||||
<Info>
|
||||
**Output**:

```
NAME          ID              SIZE     PROCESSOR    UNTIL
llama3:70b    bcfb190ca3a7    42 GB    100% GPU     4 minutes from now
```
|
||||
</Info>
|
||||
|
||||
The `Processor` column will show which memory the model was loaded in to:
|
||||
* `100% GPU` means the model was loaded entirely into the GPU
|
||||
* `100% CPU` means the model was loaded entirely in system memory
|
||||
* `48%/52% CPU/GPU` means the model was loaded partially onto both the GPU and into system memory
|
||||
|
||||
- `100% GPU` means the model was loaded entirely into the GPU
|
||||
- `100% CPU` means the model was loaded entirely in system memory
|
||||
- `48%/52% CPU/GPU` means the model was loaded partially onto both the GPU and into system memory
|
||||
|
||||
## How do I configure Ollama server?
|
||||
|
||||
@@ -126,8 +125,10 @@ On Windows, Ollama inherits your user and system environment variables.
|
||||
|
||||
Ollama pulls models from the Internet and may require a proxy server to access the models. Use `HTTPS_PROXY` to redirect outbound requests through the proxy. Ensure the proxy certificate is installed as a system certificate. Refer to the section above for how to use environment variables on your platform.
|
||||
|
||||
> [!NOTE]
|
||||
> Avoid setting `HTTP_PROXY`. Ollama does not use HTTP for model pulls, only HTTPS. Setting `HTTP_PROXY` may interrupt client connections to the server.
|
||||
<Note>
|
||||
Avoid setting `HTTP_PROXY`. Ollama does not use HTTP for model pulls, only
|
||||
HTTPS. Setting `HTTP_PROXY` may interrupt client connections to the server.
|
||||
</Note>
|
||||
|
||||
### How do I use Ollama behind a proxy in Docker?
|
||||
|
||||
@@ -150,11 +151,9 @@ docker build -t ollama-with-ca .
|
||||
docker run -d -e HTTPS_PROXY=https://my.proxy.example.com -p 11434:11434 ollama-with-ca
|
||||
```
|
||||
|
||||
## Does Ollama send my prompts and responses back to ollama.com?
|
||||
## Does Ollama send my prompts and answers back to ollama.com?
|
||||
|
||||
If you're running a model locally, your prompts and responses will always stay on your machine. Ollama Turbo in the App allows you to run your queries on Ollama's servers if you don't have a powerful enough GPU. Web search lets a model query the web, giving you more accurate and up-to-date information. Both Turbo and web search require sending your prompts and responses to Ollama.com. This data is neither logged nor stored.
|
||||
|
||||
If you don't want to see the Turbo and web search options in the app, you can disable them in Settings by turning on Airplane mode. In Airplane mode, all models will run locally, and your prompts and responses will stay on your machine.
|
||||
No. Ollama runs locally, and conversation data does not leave your machine.
|
||||
|
||||
## How can I expose Ollama on my network?
|
||||
|
||||
@@ -216,7 +215,9 @@ Refer to the section [above](#how-do-i-configure-ollama-server) for how to set e
|
||||
|
||||
If a different directory needs to be used, set the environment variable `OLLAMA_MODELS` to the chosen directory.
|
||||
|
||||
> Note: on Linux using the standard installer, the `ollama` user needs read and write access to the specified directory. To assign the directory to the `ollama` user run `sudo chown -R ollama:ollama <directory>`.
|
||||
<Note>
|
||||
On Linux using the standard installer, the `ollama` user needs read and write access to the specified directory. To assign the directory to the `ollama` user run `sudo chown -R ollama:ollama <directory>`.
|
||||
</Note>
|
||||
|
||||
Refer to the section [above](#how-do-i-configure-ollama-server) for how to set environment variables on your platform.
|
||||
|
||||
@@ -235,7 +236,7 @@ GPU acceleration is not available for Docker Desktop in macOS due to the lack of
|
||||
This can impact both installing Ollama, as well as downloading models.
|
||||
|
||||
Open `Control Panel > Networking and Internet > View network status and tasks` and click on `Change adapter settings` on the left panel. Find the `vEthernel (WSL)` adapter, right click and select `Properties`.
|
||||
Click on `Configure` and open the `Advanced` tab. Search through each of the properties until you find `Large Send Offload Version 2 (IPv4)` and `Large Send Offload Version 2 (IPv6)`. *Disable* both of these
|
||||
Click on `Configure` and open the `Advanced` tab. Search through each of the properties until you find `Large Send Offload Version 2 (IPv4)` and `Large Send Offload Version 2 (IPv6)`. _Disable_ both of these
|
||||
properties.
|
||||
|
||||
## How can I preload a model into Ollama to get faster response times?
|
||||
@@ -269,10 +270,11 @@ ollama stop llama3.2
|
||||
```
|
||||
|
||||
If you're using the API, use the `keep_alive` parameter with the `/api/generate` and `/api/chat` endpoints to set the amount of time that a model stays in memory. The `keep_alive` parameter can be set to:
|
||||
* a duration string (such as "10m" or "24h")
|
||||
* a number in seconds (such as 3600)
|
||||
* any negative number which will keep the model loaded in memory (e.g. -1 or "-1m")
|
||||
* '0' which will unload the model immediately after generating a response
|
||||
|
||||
- a duration string (such as "10m" or "24h")
|
||||
- a number in seconds (such as 3600)
|
||||
- any negative number which will keep the model loaded in memory (e.g. -1 or "-1m")
|
||||
- '0' which will unload the model immediately after generating a response
|
||||
|
||||
For example, to preload a model and leave it in memory use:
|
||||
|
||||
@@ -296,7 +298,7 @@ If too many requests are sent to the server, it will respond with a 503 error in
|
||||
|
||||
## How does Ollama handle concurrent requests?
|
||||
|
||||
Ollama supports two levels of concurrent processing. If your system has sufficient available memory (system memory when using CPU inference, or VRAM for GPU inference) then multiple models can be loaded at the same time. For a given model, if there is sufficient available memory when the model is loaded, it can be configured to allow parallel request processing.
|
||||
Ollama supports two levels of concurrent processing. If your system has sufficient available memory (system memory when using CPU inference, or VRAM for GPU inference) then multiple models can be loaded at the same time. For a given model, if there is sufficient available memory when the model is loaded, it is configured to allow parallel request processing.
|
||||
|
||||
If there is insufficient available memory to load a new model request while one or more models are already loaded, all new requests will be queued until the new model can be loaded. As prior models become idle, one or more will be unloaded to make room for the new model. Queued requests will be processed in order. When using GPU inference new models must be able to completely fit in VRAM to allow concurrent model loads.
|
||||
|
||||
@@ -304,8 +306,8 @@ Parallel request processing for a given model results in increasing the context
|
||||
|
||||
The following server settings may be used to adjust how Ollama handles concurrent requests on most platforms:
|
||||
|
||||
- `OLLAMA_MAX_LOADED_MODELS` - The maximum number of models that can be loaded concurrently provided they fit in available memory. The default is 3 * the number of GPUs or 3 for CPU inference.
|
||||
- `OLLAMA_NUM_PARALLEL` - The maximum number of parallel requests each model will process at the same time. The default is 1, and will handle 1 request per model at a time.
|
||||
- `OLLAMA_MAX_LOADED_MODELS` - The maximum number of models that can be loaded concurrently provided they fit in available memory. The default is 3 \* the number of GPUs or 3 for CPU inference.
|
||||
- `OLLAMA_NUM_PARALLEL` - The maximum number of parallel requests each model will process at the same time. The default will auto-select either 4 or 1 based on available memory.
|
||||
- `OLLAMA_MAX_QUEUE` - The maximum number of requests Ollama will queue when busy before rejecting additional requests. The default is 512
|
||||
|
||||
Note: Windows with Radeon GPUs currently default to 1 model maximum due to limitations in ROCm v5.7 for available VRAM reporting. Once ROCm v6.2 is available, Windows Radeon will follow the defaults above. You may enable concurrent model loads on Radeon on Windows, but ensure you don't load more models than will fit into your GPUs VRAM.
|
||||
@@ -326,7 +328,10 @@ To use quantized K/V cache with Ollama you can set the following environment var
|
||||
|
||||
- `OLLAMA_KV_CACHE_TYPE` - The quantization type for the K/V cache. Default is `f16`.
|
||||
|
||||
> Note: Currently this is a global option - meaning all models will run with the specified quantization type.
|
||||
<Note>
|
||||
Currently this is a global option - meaning all models will run with the
|
||||
specified quantization type.
|
||||
</Note>
|
||||
|
||||
The currently available K/V cache quantization types are:
|
||||
|
||||
@@ -338,15 +343,36 @@ How much the cache quantization impacts the model's response quality will depend
|
||||
|
||||
You may need to experiment with different quantization types to find the best balance between memory usage and quality.
|
||||
|
||||
## How can I stop Ollama from starting when I login to my computer
|
||||
## Where can I find my Ollama Public Key?
|
||||
|
||||
Ollama for Windows and macOS register as a login item during installation. You can disable this if you prefer not to have Ollama automatically start. Ollama will respect this setting across upgrades, unless you uninstall the application.
|
||||
Your **Ollama Public Key** is the public part of the key pair that lets your local Ollama instance talk to [ollama.com](https://ollama.com).
|
||||
|
||||
**Windows**
|
||||
- Remove `%APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup\Ollama.lnk`
|
||||
You'll need it to:
|
||||
* Push models to Ollama
|
||||
* Pull private models from Ollama to your machine
|
||||
* Run models hosted in [Ollama Cloud](https://ollama.com/cloud)
|
||||
|
||||
**MacOS Monterey (v12)**
|
||||
- Open `Settings` -> `Users & Groups` -> `Login Items` and find the `Ollama` entry, then click the `-` (minus) to remove
|
||||
### How to Add the Key
|
||||
|
||||
**MacOS Ventura (v13) and later**
|
||||
- Open `Settings` and search for "Login Items", find the `Ollama` entry under "Allow in the Background`, then click the slider to disable.
|
||||
* **Sign-in via the Settings page** in the **Mac** and **Windows App**
|
||||
|
||||
* **Sign‑in via CLI**
|
||||
|
||||
```shell
|
||||
ollama signin
|
||||
```
|
||||
|
||||
* **Manually copy & paste** the key on the **Ollama Keys** page:
|
||||
[https://ollama.com/settings/keys](https://ollama.com/settings/keys)
|
||||
|
||||
### Where the Ollama Public Key lives
|
||||
|
||||
| OS | Path to `id_ed25519.pub` |
|
||||
| :- | :- |
|
||||
| macOS | `~/.ollama/id_ed25519.pub` |
|
||||
| Linux | `/usr/share/ollama/.ollama/id_ed25519.pub` |
|
||||
| Windows | `C:\Users\<username>\.ollama\id_ed25519.pub` |
|
||||
|
||||
<Note>
|
||||
Replace `<username>` with your actual Windows user name.
|
||||
</Note>
|
||||
|
||||
3
docs/favicon-dark.svg
Normal file
|
After Width: | Height: | Size: 6.7 KiB |
3
docs/favicon.svg
Normal file
|
After Width: | Height: | Size: 6.5 KiB |
46
docs/gpu.mdx
@@ -1,28 +1,25 @@
|
||||
# GPU
|
||||
---
|
||||
title: Hardware support
|
||||
---
|
||||
|
||||
## Nvidia
|
||||
Ollama supports Nvidia GPUs with compute capability 5.0+ and driver version 531 and newer.
|
||||
|
||||
Ollama supports Nvidia GPUs with compute capability 5.0+.
|
||||
|
||||
Check your compute compatibility to see if your card is supported:
|
||||
[https://developer.nvidia.com/cuda-gpus](https://developer.nvidia.com/cuda-gpus)
|
||||
|
||||
| Compute Capability | Family | Cards |
|
||||
| ------------------ | ------------------- | ----------------------------------------------------------------------------------------------------------- |
|
||||
| 12.0 | GeForce RTX 50xx | `RTX 5060` `RTX 5060 Ti` `RTX 5070` `RTX 5070 Ti` `RTX 5080` `RTX 5090` |
|
||||
| | NVIDIA Professioal | `RTX PRO 4000 Blackwell` `RTX PRO 4500 Blackwell` `RTX PRO 5000 Blackwell` `RTX PRO 6000 Blackwell` |
|
||||
| 11.0 | Jetson | `T4000` `T5000` (Requires driver 580 or newer) |
|
||||
| 10.3 | NVIDIA Professioal | `B300` `GB300` (Requires driver 580 or newer) |
|
||||
| 10.0 | NVIDIA Professioal | `B200` `GB200` (Requires driver 580 or newer) |
|
||||
| 9.0 | NVIDIA | `H200` `H100` `GH200` |
|
||||
| ------------------ | ------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
|
||||
| 9.0 | NVIDIA | `H200` `H100` |
|
||||
| 8.9 | GeForce RTX 40xx | `RTX 4090` `RTX 4080 SUPER` `RTX 4080` `RTX 4070 Ti SUPER` `RTX 4070 Ti` `RTX 4070 SUPER` `RTX 4070` `RTX 4060 Ti` `RTX 4060` |
|
||||
| | NVIDIA Professional | `L4` `L40` `RTX 6000` |
|
||||
| 8.7 | Jetson | `Orin Nano` `Orin NX` `AGX Orin` |
|
||||
| 8.6 | GeForce RTX 30xx | `RTX 3090 Ti` `RTX 3090` `RTX 3080 Ti` `RTX 3080` `RTX 3070 Ti` `RTX 3070` `RTX 3060 Ti` `RTX 3060` `RTX 3050 Ti` `RTX 3050` |
|
||||
| | NVIDIA Professional | `A40` `RTX A6000` `RTX A5000` `RTX A4000` `RTX A3000` `RTX A2000` `A10` `A16` `A2` |
|
||||
| 8.0 | NVIDIA | `A100` `A30` |
|
||||
| 7.5 | GeForce GTX/RTX | `GTX 1650 Ti` `TITAN RTX` `RTX 2080 Ti` `RTX 2080` `RTX 2070` `RTX 2060` |
|
||||
| | NVIDIA Professional | `T4` `RTX 5000` `RTX 4000` `RTX 3000` `T2000` `T1200` `T1000` `T600` `T500` |
|
||||
| | Quadro | `RTX 8000` `RTX 6000` `RTX 5000` `RTX 4000` |
|
||||
| 7.2 | Jetson | `Xavier NX` `AGX Xavier` (Jetpack 5) |
|
||||
| 7.0 | NVIDIA | `TITAN V` `V100` `Quadro GV100` |
|
||||
| 6.1 | NVIDIA TITAN | `TITAN Xp` `TITAN X` |
|
||||
| | GeForce GTX | `GTX 1080 Ti` `GTX 1080` `GTX 1070 Ti` `GTX 1070` `GTX 1060` `GTX 1050 Ti` `GTX 1050` |
|
||||
@@ -53,28 +50,28 @@ driver bug by reloading the NVIDIA UVM driver with `sudo rmmod nvidia_uvm &&
|
||||
sudo modprobe nvidia_uvm`
|
||||
|
||||
## AMD Radeon
|
||||
|
||||
Ollama supports the following AMD GPUs:
|
||||
|
||||
### Linux Support
|
||||
| Family | Cards and accelerators |
|
||||
| -------------- | -------------------------------------------------------------------------------------------------------------------- |
|
||||
| AMD Radeon RX | `7900 XTX` `7900 XT` `7900 GRE` `7800 XT` `7700 XT` `7600 XT` `7600` `6950 XT` `6900 XTX` `6900XT` `6800 XT` `6800` |
|
||||
| AMD Radeon PRO | `W7900` `W7800` `W7700` `W7600` `W7500` `W6900X` `W6800X Duo` `W6800X` `W6800` `V620` `V420` `V340` `V320` |
|
||||
| AMD Instinct | `MI300X` `MI300A` `MI300` `MI250X` `MI250` `MI210` `MI200` `MI100` |
|
||||
|
||||
### Windows Support
|
||||
With ROCm v6.2, the following GPUs are supported on Windows.
|
||||
|
||||
| Family | Cards and accelerators |
|
||||
| -------------- | ---------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| AMD Radeon RX | `7900 XTX` `7900 XT` `7900 GRE` `7800 XT` `7700 XT` `7600 XT` `7600` `6950 XT` `6900 XTX` `6900XT` `6800 XT` `6800` `Vega 64` `Vega 56` |
|
||||
| AMD Radeon PRO | `W7900` `W7800` `W7700` `W7600` `W7500` `W6900X` `W6800X Duo` `W6800X` `W6800` `V620` `V420` `V340` `V320` `Vega II Duo` `Vega II` `VII` `SSG` |
|
||||
| AMD Instinct | `MI300X` `MI300A` `MI300` `MI250X` `MI250` `MI210` `MI200` `MI100` `MI60` `MI50` |
|
||||
|
||||
### Windows Support
|
||||
|
||||
With ROCm v6.1, the following GPUs are supported on Windows.
|
||||
|
||||
| Family | Cards and accelerators |
|
||||
| -------------- | ------------------------------------------------------------------------------------------------------------------- |
|
||||
| AMD Radeon RX | `7900 XTX` `7900 XT` `7900 GRE` `7800 XT` `7700 XT` `7600 XT` `7600` `6950 XT` `6900 XTX` `6900XT` `6800 XT` `6800` |
|
||||
| AMD Radeon PRO | `W7900` `W7800` `W7700` `W7600` `W7500` `W6900X` `W6800X Duo` `W6800X` `W6800` `V620` |
|
||||
|
||||
### Known Workarounds
|
||||
|
||||
- The RX Vega 56 requires `HSA_ENABLE_SDMA=0` to disable SDMA
|
||||
|
||||
### Overrides on Linux
|
||||
|
||||
Ollama leverages the AMD ROCm library, which does not support all AMD GPUs. In
|
||||
some cases you can force the system to try to use a similar LLVM target that is
|
||||
close. For example, the Radeon RX 5400 is `gfx1034` (also known as 10.3.4)
|
||||
@@ -93,6 +90,8 @@ At this time, the known supported GPU types on linux are the following LLVM Targ
|
||||
This table shows some example GPUs that map to these LLVM targets:
|
||||
| **LLVM Target** | **An Example GPU** |
|
||||
|-----------------|---------------------|
|
||||
| gfx900 | Radeon RX Vega 56 |
|
||||
| gfx906 | Radeon Instinct MI50 |
|
||||
| gfx908 | Radeon Instinct MI100 |
|
||||
| gfx90a | Radeon Instinct MI210 |
|
||||
| gfx940 | Radeon Instinct MI300 |
|
||||
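As a hedged illustration of the override described above (assuming Ollama's `HSA_OVERRIDE_GFX_VERSION` environment variable and the `gfx1030` target from the full supported list), forcing a `gfx1034` card onto the closest supported target might look like:

```shell
# Sketch: run a Radeon RX 5400 (gfx1034) against the gfx1030 LLVM target
HSA_OVERRIDE_GFX_VERSION="10.3.0" ollama serve
```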
@@ -124,4 +123,5 @@ accessing the AMD GPU devices. On the host system you can run
|
||||
`sudo setsebool container_use_devices=1` to allow containers to use devices.
|
||||
|
||||
### Metal (Apple GPUs)
|
||||
|
||||
Ollama supports GPU acceleration on Apple devices via the Metal API.
|
||||
|
||||
BIN
docs/images/cline-mcp.png
Normal file
|
After Width: | Height: | Size: 556 KiB |
BIN
docs/images/cline-settings.png
Normal file
|
After Width: | Height: | Size: 76 KiB |
BIN
docs/images/codex-mcp.png
Normal file
|
After Width: | Height: | Size: 948 KiB |
BIN
docs/images/favicon.png
Normal file
|
After Width: | Height: | Size: 890 B |
BIN
docs/images/goose-cli.png
Normal file
|
After Width: | Height: | Size: 160 KiB |
BIN
docs/images/goose-mcp-1.png
Normal file
|
After Width: | Height: | Size: 877 KiB |
BIN
docs/images/goose-mcp-2.png
Normal file
|
After Width: | Height: | Size: 911 KiB |
BIN
docs/images/goose-settings.png
Normal file
|
After Width: | Height: | Size: 109 KiB |
BIN
docs/images/intellij-chat-sidebar.png
Normal file
|
After Width: | Height: | Size: 69 KiB |
BIN
docs/images/intellij-current-model.png
Normal file
|
After Width: | Height: | Size: 106 KiB |
BIN
docs/images/intellij-local-models.png
Normal file
|
After Width: | Height: | Size: 79 KiB |
BIN
docs/images/logo-dark.png
Normal file
|
After Width: | Height: | Size: 3.3 KiB |
BIN
docs/images/logo.png
Normal file
|
After Width: | Height: | Size: 2.7 KiB |
BIN
docs/images/n8n-chat-model.png
Normal file
|
After Width: | Height: | Size: 87 KiB |
BIN
docs/images/n8n-chat-node.png
Normal file
|
After Width: | Height: | Size: 70 KiB |
BIN
docs/images/n8n-credential-creation.png
Normal file
|
After Width: | Height: | Size: 43 KiB |
BIN
docs/images/n8n-models.png
Normal file
|
After Width: | Height: | Size: 130 KiB |
BIN
docs/images/n8n-ollama-form.png
Normal file
|
After Width: | Height: | Size: 53 KiB |
BIN
docs/images/ollama-settings.png
Normal file
|
After Width: | Height: | Size: 3.6 MiB |
BIN
docs/images/vscode-model-options.png
Normal file
|
After Width: | Height: | Size: 77 KiB |
BIN
docs/images/vscode-models.png
Normal file
|
After Width: | Height: | Size: 56 KiB |
BIN
docs/images/vscode-sidebar.png
Normal file
|
After Width: | Height: | Size: 25 KiB |
BIN
docs/images/welcome.png
Normal file
|
After Width: | Height: | Size: 233 KiB |
BIN
docs/images/xcode-chat-icon.png
Normal file
|
After Width: | Height: | Size: 186 KiB |
BIN
docs/images/xcode-intelligence-window.png
Normal file
|
After Width: | Height: | Size: 182 KiB |
BIN
docs/images/xcode-locally-hosted.png
Normal file
|
After Width: | Height: | Size: 146 KiB |
BIN
docs/images/zed-ollama-dropdown.png
Normal file
|
After Width: | Height: | Size: 38 KiB |
BIN
docs/images/zed-settings.png
Normal file
|
After Width: | Height: | Size: 57 KiB |
@@ -1,11 +1,13 @@
|
||||
# Importing a model
|
||||
---
|
||||
title: Importing a Model
|
||||
---
|
||||
|
||||
## Table of Contents
|
||||
|
||||
* [Importing a Safetensors adapter](#Importing-a-fine-tuned-adapter-from-Safetensors-weights)
|
||||
* [Importing a Safetensors model](#Importing-a-model-from-Safetensors-weights)
|
||||
* [Importing a GGUF file](#Importing-a-GGUF-based-model-or-adapter)
|
||||
* [Sharing models on ollama.com](#Sharing-your-model-on-ollamacom)
|
||||
- [Importing a Safetensors adapter](#Importing-a-fine-tuned-adapter-from-Safetensors-weights)
|
||||
- [Importing a Safetensors model](#Importing-a-model-from-Safetensors-weights)
|
||||
- [Importing a GGUF file](#Importing-a-GGUF-based-model-or-adapter)
|
||||
- [Sharing models on ollama.com](#Sharing-your-model-on-ollamacom)
|
||||
|
||||
## Importing a fine tuned adapter from Safetensors weights
|
||||
|
||||
@@ -32,16 +34,15 @@ ollama run my-model
|
||||
|
||||
Ollama supports importing adapters based on several different model architectures including:
|
||||
|
||||
* Llama (including Llama 2, Llama 3, Llama 3.1, and Llama 3.2);
|
||||
* Mistral (including Mistral 1, Mistral 2, and Mixtral); and
|
||||
* Gemma (including Gemma 1 and Gemma 2)
|
||||
- Llama (including Llama 2, Llama 3, Llama 3.1, and Llama 3.2);
|
||||
- Mistral (including Mistral 1, Mistral 2, and Mixtral); and
|
||||
- Gemma (including Gemma 1 and Gemma 2)
|
||||
|
||||
You can create the adapter using a fine tuning framework or tool which can output adapters in the Safetensors format, such as:
|
||||
|
||||
* Hugging Face [fine tuning framework](https://huggingface.co/docs/transformers/en/training)
|
||||
* [Unsloth](https://github.com/unslothai/unsloth)
|
||||
* [MLX](https://github.com/ml-explore/mlx)
|
||||
|
||||
- Hugging Face [fine tuning framework](https://huggingface.co/docs/transformers/en/training)
|
||||
- [Unsloth](https://github.com/unslothai/unsloth)
|
||||
- [MLX](https://github.com/ml-explore/mlx)
|
||||
|
||||
## Importing a model from Safetensors weights
|
||||
|
||||
@@ -53,8 +54,6 @@ FROM /path/to/safetensors/directory
|
||||
|
||||
If you create the Modelfile in the same directory as the weights, you can use the command `FROM .`.
|
||||
|
||||
If you do not create the Modelfile, ollama will act as if there was a Modelfile with the command `FROM .`.
|
||||
|
||||
Now run the `ollama create` command from the directory where you created the `Modelfile`:
|
||||
|
||||
```shell
|
||||
@@ -69,19 +68,20 @@ ollama run my-model
|
||||
|
||||
Ollama supports importing models for several different architectures including:
|
||||
|
||||
* Llama (including Llama 2, Llama 3, Llama 3.1, and Llama 3.2);
|
||||
* Mistral (including Mistral 1, Mistral 2, and Mixtral);
|
||||
* Gemma (including Gemma 1 and Gemma 2); and
|
||||
* Phi3
|
||||
- Llama (including Llama 2, Llama 3, Llama 3.1, and Llama 3.2);
|
||||
- Mistral (including Mistral 1, Mistral 2, and Mixtral);
|
||||
- Gemma (including Gemma 1 and Gemma 2); and
|
||||
- Phi3
|
||||
|
||||
This includes importing foundation models as well as any fine tuned models which have been _fused_ with a foundation model.
|
||||
|
||||
## Importing a GGUF based model or adapter
|
||||
|
||||
If you have a GGUF based model or adapter it is possible to import it into Ollama. You can obtain a GGUF model or adapter by:
|
||||
|
||||
* converting a Safetensors model with the `convert_hf_to_gguf.py` from Llama.cpp;
|
||||
* converting a Safetensors adapter with the `convert_lora_to_gguf.py` from Llama.cpp; or
|
||||
* downloading a model or adapter from a place such as HuggingFace
|
||||
- converting a Safetensors model with the `convert_hf_to_gguf.py` from Llama.cpp;
|
||||
- converting a Safetensors adapter with the `convert_lora_to_gguf.py` from Llama.cpp; or
|
||||
- downloading a model or adapter from a place such as HuggingFace
|
||||
|
||||
To import a GGUF model, create a `Modelfile` containing:
|
||||
|
||||
@@ -98,9 +98,9 @@ ADAPTER /path/to/file.gguf
|
||||
|
||||
When importing a GGUF adapter, it's important to use the same base model that the adapter was created with. You can use:
|
||||
|
||||
* a model from Ollama
|
||||
* a GGUF file
|
||||
* a Safetensors based model
|
||||
- a model from Ollama
|
||||
- a GGUF file
|
||||
- a Safetensors based model
|
||||
|
||||
Once you have created your `Modelfile`, use the `ollama create` command to build the model.
|
||||
|
||||
@@ -134,13 +134,22 @@ success
|
||||
|
||||
### Supported Quantizations
|
||||
|
||||
- `q4_0`
|
||||
- `q4_1`
|
||||
- `q5_0`
|
||||
- `q5_1`
|
||||
- `q8_0`
|
||||
|
||||
#### K-means Quantizations
|
||||
|
||||
- `q3_K_S`
|
||||
- `q3_K_M`
|
||||
- `q3_K_L`
|
||||
- `q4_K_S`
|
||||
- `q4_K_M`
|
||||
|
||||
- `q5_K_S`
|
||||
- `q5_K_M`
|
||||
- `q6_K`
|
||||
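As a hedged illustration of how these quantization types are used (assuming the `--quantize` flag of `ollama create` and an FP16 source model described by a local `Modelfile`):

```shell
# Sketch: build a q4_K_M quantization of the model defined in ./Modelfile
ollama create my-model-q4 --quantize q4_K_M -f Modelfile
```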
|
||||
## Sharing your model on ollama.com
|
||||
|
||||
@@ -148,7 +157,7 @@ You can share any model you have created by pushing it to [ollama.com](https://o
|
||||
|
||||
First, use your browser to go to the [Ollama Sign-Up](https://ollama.com/signup) page. If you already have an account, you can skip this step.
|
||||
|
||||
<img src="images/signup.png" alt="Sign-Up" width="40%">
|
||||
<img src="images/signup.png" alt="Sign-Up" width="40%" />
|
||||
|
||||
The `Username` field will be used as part of your model's name (e.g. `jmorganca/mymodel`), so make sure you are comfortable with the username that you have selected.
|
||||
|
||||
@@ -156,7 +165,7 @@ Now that you have created an account and are signed-in, go to the [Ollama Keys S
|
||||
|
||||
Follow the directions on the page to determine where your Ollama Public Key is located.
|
||||
|
||||
<img src="images/ollama-keys.png" alt="Ollama Keys" width="80%">
|
||||
<img src="images/ollama-keys.png" alt="Ollama Keys" width="80%" />
|
||||
|
||||
Click on the `Add Ollama Public Key` button, and copy and paste the contents of your Ollama Public Key into the text field.
|
||||
|
||||
@@ -173,4 +182,3 @@ Once your model has been pushed, other users can pull and run it by using the co
|
||||
```shell
|
||||
ollama run myuser/mymodel
|
||||
```
|
||||
|
||||
|
||||
58
docs/index.mdx
Normal file
@@ -0,0 +1,58 @@
|
||||
---
|
||||
title: Ollama's documentation
|
||||
sidebarTitle: Welcome
|
||||
---
|
||||
|
||||
<img src="/images/welcome.png" noZoom className="rounded-3xl" />
|
||||
|
||||
[Ollama](https://ollama.com) is the easiest way to get up and running with large language models such as gpt-oss, Gemma 3, DeepSeek-R1, Qwen3 and more.
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card title="Quickstart" icon="rocket" href="/quickstart">
|
||||
Get up and running with your first model
|
||||
</Card>
|
||||
<Card
|
||||
title="Download Ollama"
|
||||
icon="download"
|
||||
href="https://ollama.com/download"
|
||||
>
|
||||
Download Ollama on macOS, Windows or Linux
|
||||
</Card>
|
||||
<Card title="Cloud" icon="cloud" href="/cloud">
|
||||
Ollama's cloud models offer larger models with better performance.
|
||||
</Card>
|
||||
<Card title="API reference" icon="terminal" href="/api">
|
||||
View Ollama's API reference
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
## Libraries
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card
|
||||
title="Ollama's Python Library"
|
||||
icon="python"
|
||||
href="https://github.com/ollama/ollama-python"
|
||||
>
|
||||
The official library for using Ollama with Python
|
||||
</Card>
|
||||
|
||||
<Card title="Ollama's JavaScript library" icon="js" href="https://github.com/ollama/ollama-js">
|
||||
The official library for using Ollama with JavaScript or TypeScript.
|
||||
</Card>
|
||||
<Card title="Community libraries" icon="github" href="https://github.com/ollama/ollama?tab=readme-ov-file#libraries-1">
|
||||
View a list of 20+ community-supported libraries for Ollama
|
||||
</Card>
|
||||
</CardGroup>
|
||||
|
||||
## Community
|
||||
|
||||
<CardGroup cols={2}>
|
||||
<Card title="Discord" icon="discord" href="https://discord.gg/ollama">
|
||||
Join our Discord community
|
||||
</Card>
|
||||
|
||||
<Card title="Reddit" icon="reddit" href="https://reddit.com/r/ollama">
|
||||
Join our Reddit community
|
||||
</Card>
|
||||
</CardGroup>
|
||||
38
docs/integrations/cline.mdx
Normal file
@@ -0,0 +1,38 @@
|
||||
---
|
||||
title: Cline
|
||||
---
|
||||
|
||||
## Install
|
||||
|
||||
Install [Cline](https://docs.cline.bot/getting-started/installing-cline) in your IDE.
|
||||
|
||||
|
||||
## Usage with Ollama
|
||||
|
||||
1. Open Cline settings > `API Configuration` and set `API Provider` to `Ollama`
|
||||
2. Select a model under `Model` or type one (e.g. `qwen3`)
|
||||
3. Update the context window to at least 32K tokens under `Context Window`
|
||||
|
||||
<Note>Coding tools require a larger context window. It is recommended to use a context window of at least 32K tokens. See [Context length](/context-length) for more information.</Note>
|
||||
|
||||
<div style={{ display: 'flex', justifyContent: 'center' }}>
|
||||
<img
|
||||
src="/images/cline-settings.png"
|
||||
alt="Cline settings configuration showing API Provider set to Ollama"
|
||||
width="50%"
|
||||
/>
|
||||
</div>
|
||||
|
||||
|
||||
|
||||
## Connecting to ollama.com
|
||||
1. Create an [API key](https://ollama.com/settings/keys) from ollama.com
|
||||
2. Click on `Use custom base URL` and set it to `https://ollama.com`
|
||||
3. Enter your **Ollama API Key**
|
||||
4. Select a model from the list
|
||||
|
||||
|
||||
### Recommended Models
|
||||
|
||||
- `qwen3-coder:480b`
|
||||
- `deepseek-v3.1:671b`
|
||||
56
docs/integrations/codex.mdx
Normal file
@@ -0,0 +1,56 @@
|
||||
---
|
||||
title: Codex
|
||||
---
|
||||
|
||||
|
||||
## Install
|
||||
|
||||
Install the [Codex CLI](https://developers.openai.com/codex/cli/):
|
||||
|
||||
```
|
||||
npm install -g @openai/codex
|
||||
```
|
||||
|
||||
## Usage with Ollama
|
||||
|
||||
<Note>Codex requires a larger context window. It is recommended to use a context window of at least 32K tokens.</Note>
|
||||
|
||||
To use `codex` with Ollama, use the `--oss` flag:
|
||||
|
||||
```
|
||||
codex --oss
|
||||
```
|
||||
|
||||
### Changing Models
|
||||
|
||||
By default, codex will use the local `gpt-oss:20b` model. However, you can specify a different model with the `-m` flag:
|
||||
|
||||
```
|
||||
codex --oss -m gpt-oss:120b
|
||||
```
|
||||
|
||||
### Cloud Models
|
||||
|
||||
```
|
||||
codex --oss -m gpt-oss:120b-cloud
|
||||
```
|
||||
|
||||
|
||||
## Connecting to ollama.com
|
||||
|
||||
|
||||
Create an [API key](https://ollama.com/settings/keys) from ollama.com and export it as `OLLAMA_API_KEY`.
|
||||
|
||||
To use ollama.com directly, edit your `~/.codex/config.toml` file to point to ollama.com.
|
||||
|
||||
```toml
|
||||
model = "gpt-oss:120b"
|
||||
model_provider = "ollama"
|
||||
|
||||
[model_providers.ollama]
|
||||
name = "Ollama"
|
||||
base_url = "https://ollama.com/v1"
|
||||
env_key = "OLLAMA_API_KEY"
|
||||
```
|
||||
|
||||
Run `codex` in a new terminal to load the new settings.
|
||||
76
docs/integrations/droid.mdx
Normal file
@@ -0,0 +1,76 @@
|
||||
---
|
||||
title: Droid
|
||||
---
|
||||
|
||||
|
||||
## Install
|
||||
|
||||
Install the [Droid CLI](https://factory.ai/):
|
||||
|
||||
```bash
|
||||
curl -fsSL https://app.factory.ai/cli | sh
|
||||
```
|
||||
|
||||
<Note>Droid requires a larger context window. It is recommended to use a context window of at least 32K tokens. See [Context length](/context-length) for more information.</Note>
|
||||
|
||||
## Usage with Ollama
|
||||
|
||||
Add a local configuration block to `~/.factory/config.json`:
|
||||
|
||||
```json
|
||||
{
|
||||
"custom_models": [
|
||||
{
|
||||
"model_display_name": "qwen3-coder [Ollama]",
|
||||
"model": "qwen3-coder",
|
||||
"base_url": "http://localhost:11434/v1/",
|
||||
"api_key": "not-needed",
|
||||
"provider": "generic-chat-completion-api",
|
||||
"max_tokens": 32000
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
## Cloud Models
|
||||
`qwen3-coder:480b-cloud` is the recommended model for use with Droid.
|
||||
|
||||
Add the cloud configuration block to `~/.factory/config.json`:
|
||||
|
||||
```json
|
||||
{
|
||||
"custom_models": [
|
||||
{
|
||||
"model_display_name": "qwen3-coder [Ollama Cloud]",
|
||||
"model": "qwen3-coder:480b-cloud",
|
||||
"base_url": "http://localhost:11434/v1/",
|
||||
"api_key": "not-needed",
|
||||
"provider": "generic-chat-completion-api",
|
||||
"max_tokens": 128000
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## Connecting to ollama.com
|
||||
|
||||
1. Create an [API key](https://ollama.com/settings/keys) from ollama.com and export it as `OLLAMA_API_KEY`.
|
||||
2. Add the cloud configuration block to `~/.factory/config.json`:
|
||||
|
||||
```json
|
||||
{
|
||||
"custom_models": [
|
||||
{
|
||||
"model_display_name": "qwen3-coder [Ollama Cloud]",
|
||||
"model": "qwen3-coder:480b",
|
||||
"base_url": "https://ollama.com/v1/",
|
||||
"api_key": "OLLAMA_API_KEY",
|
||||
"provider": "generic-chat-completion-api",
|
||||
"max_tokens": 128000
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
Run `droid` in a new terminal to load the new settings.
|
||||
49
docs/integrations/goose.mdx
Normal file
@@ -0,0 +1,49 @@
|
||||
---
|
||||
title: Goose
|
||||
---
|
||||
|
||||
## Goose Desktop
|
||||
|
||||
Install [Goose](https://block.github.io/goose/docs/getting-started/installation/) Desktop.
|
||||
|
||||
### Usage with Ollama
|
||||
1. In Goose, open **Settings** → **Configure Provider**.
|
||||
<div style={{ display: 'flex', justifyContent: 'center' }}>
|
||||
<img
|
||||
src="/images/goose-settings.png"
|
||||
alt="Goose settings Panel"
|
||||
width="75%"
|
||||
/>
|
||||
</div>
|
||||
2. Find **Ollama**, click **Configure**
|
||||
3. Confirm **API Host** is `http://localhost:11434` and click Submit
|
||||
|
||||
|
||||
### Connecting to ollama.com
|
||||
|
||||
1. Create an [API key](https://ollama.com/settings/keys) on ollama.com and save it in your `.env`
|
||||
2. In Goose, set **API Host** to `https://ollama.com`
|
||||
|
||||
|
||||
## Goose CLI
|
||||
|
||||
Install [Goose](https://block.github.io/goose/docs/getting-started/installation/) CLI
|
||||
|
||||
### Usage with Ollama
|
||||
1. Run `goose configure`
|
||||
2. Select **Configure Providers** and select **Ollama**
|
||||
<div style={{ display: 'flex', justifyContent: 'center' }}>
|
||||
<img
|
||||
src="/images/goose-cli.png"
|
||||
alt="Goose CLI"
|
||||
width="50%"
|
||||
/>
|
||||
</div>
|
||||
3. Enter a model name (e.g. `qwen3`)
|
||||
|
||||
### Connecting to ollama.com
|
||||
|
||||
1. Create an [API key](https://ollama.com/settings/keys) on ollama.com and save it in your `.env`
|
||||
2. Run `goose configure`
|
||||
3. Select **Configure Providers** and select **Ollama**
|
||||
4. Update **OLLAMA_HOST** to `https://ollama.com`
|
||||
47
docs/integrations/jetbrains.mdx
Normal file
@@ -0,0 +1,47 @@
|
||||
---
|
||||
title: JetBrains
|
||||
---
|
||||
|
||||
<Note>This example uses **IntelliJ**; same steps apply to other JetBrains IDEs (e.g., PyCharm).</Note>
|
||||
|
||||
## Install
|
||||
|
||||
Install [IntelliJ](https://www.jetbrains.com/idea/).
|
||||
|
||||
## Usage with Ollama
|
||||
|
||||
<Note>
|
||||
To use **Ollama**, you will need a [JetBrains AI Subscription](https://www.jetbrains.com/ai-ides/buy/?section=personal&billing=yearly).
|
||||
</Note>
|
||||
|
||||
1. In IntelliJ, click the **chat icon** located in the right sidebar
|
||||
|
||||
<div style={{ display: 'flex', justifyContent: 'center' }}>
|
||||
<img
|
||||
src="/images/intellij-chat-sidebar.png"
|
||||
alt="Intellij Sidebar Chat"
|
||||
width="50%"
|
||||
/>
|
||||
</div>
|
||||
|
||||
2. Select the **current model** in the sidebar, then click **Set up Local Models**
|
||||
|
||||
<div style={{ display: 'flex', justifyContent: 'center' }}>
|
||||
<img
|
||||
src="/images/intellij-current-model.png"
|
||||
alt="Intellij model bottom right corner"
|
||||
width="50%"
|
||||
/>
|
||||
</div>
|
||||
|
||||
3. Under **Third Party AI Providers**, choose **Ollama**
|
||||
4. Confirm the **Host URL** is `http://localhost:11434`, then click **Ok**
|
||||
5. Once connected, select a model under **Local models by Ollama**
|
||||
|
||||
<div style={{ display: 'flex', justifyContent: 'center' }}>
|
||||
<img
|
||||
src="/images/intellij-local-models.png"
|
||||
alt="Zed star icon in bottom right corner"
|
||||
width="50%"
|
||||
/>
|
||||
</div>
|
||||
53
docs/integrations/n8n.mdx
Normal file
@@ -0,0 +1,53 @@
|
||||
---
|
||||
title: n8n
|
||||
---
|
||||
|
||||
## Install
|
||||
|
||||
Install [n8n](https://docs.n8n.io/choose-n8n/).
|
||||
|
||||
## Using Ollama Locally
|
||||
|
||||
1. In the top right corner, click the dropdown and select **Create Credential**
|
||||
<div style={{ display: 'flex', justifyContent: 'center' }}>
|
||||
<img
|
||||
src="/images/n8n-credential-creation.png"
|
||||
alt="Create a n8n Credential"
|
||||
width="75%"
|
||||
/>
|
||||
</div>
|
||||
|
||||
2. Under **Add new credential** select **Ollama**
|
||||
<div style={{ display: 'flex', justifyContent: 'center' }}>
|
||||
<img
|
||||
src="/images/n8n-ollama-form.png"
|
||||
alt="Select Ollama under Credential"
|
||||
width="75%"
|
||||
/>
|
||||
</div>
|
||||
3. Confirm Base URL is set to `http://localhost:11434` and click **Save**
|
||||
<Note> If connecting to `http://localhost:11434` fails, use `http://127.0.0.1:11434`</Note>
|
||||
4. When creating a new workflow, select **Add a first step** and select an **Ollama node**
|
||||
<div style={{ display: 'flex', justifyContent: 'center' }}>
|
||||
<img
|
||||
src="/images/n8n-chat-node.png"
|
||||
alt="Add a first step with Ollama node"
|
||||
width="75%"
|
||||
/>
|
||||
</div>
|
||||
5. Select your model of choice (e.g. `qwen3-coder`)
|
||||
<div style={{ display: 'flex', justifyContent: 'center' }}>
|
||||
<img
|
||||
src="/images/n8n-models.png"
|
||||
alt="Set up Ollama credentials"
|
||||
width="75%"
|
||||
/>
|
||||
</div>
|
||||
|
||||
## Connecting to ollama.com
|
||||
1. Create an [API key](https://ollama.com/settings/keys) on **ollama.com**.
|
||||
2. In n8n, click **Create Credential** and select **Ollama**
|
||||
3. Set the **API URL** to `https://ollama.com`
|
||||
4. Enter your **API Key** and click **Save**
|
||||
|
||||
|
||||
30
docs/integrations/roo-code.mdx
Normal file
@@ -0,0 +1,30 @@
|
||||
---
|
||||
title: Roo Code
|
||||
---
|
||||
|
||||
|
||||
## Install
|
||||
|
||||
Install [Roo Code](https://marketplace.visualstudio.com/items?itemName=RooVeterinaryInc.roo-cline) from the VS Code Marketplace.
|
||||
|
||||
## Usage with Ollama
|
||||
|
||||
1. Open Roo Code in VS Code and click the **gear icon** on the top right corner of the Roo Code window to open **Provider Settings**
|
||||
2. Set `API Provider` to `Ollama`
|
||||
3. (Optional) Update `Base URL` if your Ollama instance is running remotely. The default is `http://localhost:11434`
|
||||
4. Enter a valid `Model ID` (for example `qwen3` or `qwen3-coder:480b-cloud`)
|
||||
5. Adjust the `Context Window` to at least 32K tokens for coding tasks
|
||||
|
||||
<Note>Coding tools require a larger context window. It is recommended to use a context window of at least 32K tokens. See [Context length](/context-length) for more information.</Note>
|
||||
|
||||
## Connecting to ollama.com
|
||||
|
||||
1. Create an [API key](https://ollama.com/settings/keys) from ollama.com
|
||||
2. Enable `Use custom base URL` and set it to `https://ollama.com`
|
||||
3. Enter your **Ollama API Key**
|
||||
4. Select a model from the list
|
||||
|
||||
### Recommended Models
|
||||
|
||||
- `qwen3-coder:480b`
|
||||
- `deepseek-v3.1:671b`
|
||||
34
docs/integrations/vscode.mdx
Normal file
@@ -0,0 +1,34 @@
|
||||
---
|
||||
title: VS Code
|
||||
---
|
||||
|
||||
## Install
|
||||
|
||||
Install [VSCode](https://code.visualstudio.com/download).
|
||||
|
||||
## Usage with Ollama
|
||||
|
||||
1. Open the Copilot sidebar, found at the top right of the window
|
||||
<div style={{ display: 'flex', justifyContent: 'center' }}>
|
||||
<img
|
||||
src="/images/vscode-sidebar.png"
|
||||
alt="VSCode chat Sidebar"
|
||||
width="75%"
|
||||
/>
|
||||
</div>
|
||||
2. Select the model dropdown > **Manage models**
|
||||
<div style={{ display: 'flex', justifyContent: 'center' }}>
|
||||
<img
|
||||
src="/images/vscode-models.png"
|
||||
alt="VSCode model picker"
|
||||
width="75%"
|
||||
/>
|
||||
</div>
|
||||
3. Select **Ollama** under the **Provider** dropdown and choose the desired models (e.g. `qwen3`, `qwen3-coder:480b-cloud`)
|
||||
<div style={{ display: 'flex', justifyContent: 'center' }}>
|
||||
<img
|
||||
src="/images/vscode-model-options.png"
|
||||
alt="VSCode model options dropdown"
|
||||
width="75%"
|
||||
/>
|
||||
</div>
|
||||
45
docs/integrations/xcode.mdx
Normal file
@@ -0,0 +1,45 @@
|
||||
---
|
||||
title: Xcode
|
||||
---
|
||||
|
||||
## Install
|
||||
|
||||
Install [Xcode](https://developer.apple.com/xcode/).
|
||||
|
||||
|
||||
## Usage with Ollama
|
||||
<Note>Ensure Apple Intelligence is set up and that Xcode is updated to the latest version (26.0 or later).</Note>
|
||||
|
||||
1. Click **Xcode** in the top left corner > **Settings**
|
||||
<div style={{ display: 'flex', justifyContent: 'center' }}>
|
||||
<img
|
||||
src="/images/xcode-intelligence-window.png"
|
||||
alt="Xcode Intelligence window"
|
||||
width="50%"
|
||||
/>
|
||||
</div>
|
||||
|
||||
2. Select **Locally Hosted**, enter port **11434** and click **Add**
|
||||
<div style={{ display: 'flex', justifyContent: 'center' }}>
|
||||
<img
|
||||
src="/images/xcode-locally-hosted.png"
|
||||
alt="Xcode settings"
|
||||
width="50%"
|
||||
/>
|
||||
</div>
|
||||
|
||||
3. Select the **star icon** on the top left corner and click the **dropdown**
|
||||
<div style={{ display: 'flex', justifyContent: 'center' }}>
|
||||
<img
|
||||
src="/images/xcode-chat-icon.png"
|
||||
alt="Xcode settings"
|
||||
width="50%"
|
||||
/>
|
||||
</div>
|
||||
4. Click **My Account** and select your desired model
|
||||
|
||||
|
||||
## Connecting to ollama.com directly
|
||||
1. Create an [API key](https://ollama.com/settings/keys) from ollama.com
|
||||
2. Select **Internet Hosted** and enter URL as `https://ollama.com`
|
||||
3. Enter your **Ollama API Key** and click **Add**
|
||||
38
docs/integrations/zed.mdx
Normal file
@@ -0,0 +1,38 @@
|
||||
---
|
||||
title: Zed
|
||||
---
|
||||
|
||||
## Install
|
||||
|
||||
Install [Zed](https://zed.dev/download).
|
||||
|
||||
## Usage with Ollama
|
||||
|
||||
1. In Zed, click the **star icon** in the bottom-right corner, then select **Configure**.
|
||||
|
||||
<div style={{ display: 'flex', justifyContent: 'center' }}>
|
||||
<img
|
||||
src="/images/zed-settings.png"
|
||||
alt="Zed star icon in bottom right corner"
|
||||
width="50%"
|
||||
/>
|
||||
</div>
|
||||
|
||||
2. Under **LLM Providers**, choose **Ollama**
|
||||
3. Confirm the **Host URL** is `http://localhost:11434`, then click **Connect**
|
||||
4. Once connected, select a model under **Ollama**
|
||||
|
||||
<div style={{ display: 'flex', justifyContent: 'center' }}>
|
||||
<img
|
||||
src="/images/zed-ollama-dropdown.png"
|
||||
alt="Zed star icon in bottom right corner"
|
||||
width="50%"
|
||||
/>
|
||||
</div>
|
||||
|
||||
## Connecting to ollama.com
|
||||
1. Create an [API key](https://ollama.com/settings/keys) on **ollama.com**
|
||||
2. In Zed, open the **star icon** → **Configure**
|
||||
3. Under **LLM Providers**, select **Ollama**
|
||||
4. Set the **API URL** to `https://ollama.com`
|
||||
|
||||
@@ -1,4 +1,6 @@
|
||||
# Linux
|
||||
---
|
||||
title: Linux
|
||||
---
|
||||
|
||||
## Install
|
||||
|
||||
@@ -10,15 +12,16 @@ curl -fsSL https://ollama.com/install.sh | sh
|
||||
|
||||
## Manual install
|
||||
|
||||
> [!NOTE]
|
||||
> If you are upgrading from a prior version, you **MUST** remove the old libraries with `sudo rm -rf /usr/lib/ollama` first.
|
||||
<Note>
|
||||
If you are upgrading from a prior version, you should remove the old libraries
|
||||
with `sudo rm -rf /usr/lib/ollama` first.
|
||||
</Note>
|
||||
|
||||
Download and extract the package:
|
||||
|
||||
```shell
|
||||
curl -LO https://ollama.com/download/ollama-linux-amd64.tgz
|
||||
sudo rm -rf /usr/lib/ollama
|
||||
sudo tar -C /usr -xzf ollama-linux-amd64.tgz
|
||||
curl -fsSL https://ollama.com/download/ollama-linux-amd64.tgz \
|
||||
| sudo tar zx -C /usr
|
||||
```
|
||||
|
||||
Start Ollama:
|
||||
@@ -35,15 +38,11 @@ ollama -v
|
||||
|
||||
### AMD GPU install
|
||||
|
||||
If you have an AMD GPU, **also** download and extract the additional ROCm package:
|
||||
|
||||
> [!IMPORTANT]
|
||||
> The ROCm tgz contains only AMD dependent libraries. You must extract **both** `ollama-linux-amd64.tgz` and `ollama-linux-amd64-rocm.tgz` into the same location.
|
||||
|
||||
If you have an AMD GPU, also download and extract the additional ROCm package:
|
||||
|
||||
```shell
|
||||
curl -L https://ollama.com/download/ollama-linux-amd64-rocm.tgz -o ollama-linux-amd64-rocm.tgz
|
||||
sudo tar -C /usr -xzf ollama-linux-amd64-rocm.tgz
|
||||
curl -fsSL https://ollama.com/download/ollama-linux-amd64-rocm.tgz \
|
||||
| sudo tar zx -C /usr
|
||||
```
|
||||
|
||||
### ARM64 install
|
||||
@@ -51,8 +50,8 @@ sudo tar -C /usr -xzf ollama-linux-amd64-rocm.tgz
|
||||
Download and extract the ARM64-specific package:
|
||||
|
||||
```shell
|
||||
curl -L https://ollama.com/download/ollama-linux-arm64.tgz -o ollama-linux-arm64.tgz
|
||||
sudo tar -C /usr -xzf ollama-linux-arm64.tgz
|
||||
curl -fsSL https://ollama.com/download/ollama-linux-arm64.tgz \
|
||||
| sudo tar zx -C /usr
|
||||
```
|
||||
|
||||
### Adding Ollama as a startup service (recommended)
|
||||
@@ -113,12 +112,13 @@ sudo systemctl start ollama
|
||||
sudo systemctl status ollama
|
||||
```
|
||||
|
||||
> [!NOTE]
|
||||
> While AMD has contributed the `amdgpu` driver upstream to the official linux
|
||||
> kernel source, the version is older and may not support all ROCm features. We
|
||||
> recommend you install the latest driver from
|
||||
> [AMD](https://www.amd.com/en/support/download/linux-drivers.html) for best support
|
||||
> of your Radeon GPU.
|
||||
<Note>
|
||||
While AMD has contributed the `amdgpu` driver upstream to the official linux
|
||||
kernel source, the version is older and may not support all ROCm features. We
|
||||
recommend you install the latest driver from
|
||||
https://www.amd.com/en/support/linux-drivers for best support of your Radeon
|
||||
GPU.
|
||||
</Note>
|
||||
|
||||
## Customizing
|
||||
|
||||
@@ -146,8 +146,8 @@ curl -fsSL https://ollama.com/install.sh | sh
|
||||
Or by re-downloading Ollama:
|
||||
|
||||
```shell
|
||||
curl -L https://ollama.com/download/ollama-linux-amd64.tgz -o ollama-linux-amd64.tgz
|
||||
sudo tar -C /usr -xzf ollama-linux-amd64.tgz
|
||||
curl -fsSL https://ollama.com/download/ollama-linux-amd64.tgz \
|
||||
| sudo tar zx -C /usr
|
||||
```
|
||||
|
||||
## Installing specific versions
|
||||
@@ -178,6 +178,12 @@ sudo systemctl disable ollama
|
||||
sudo rm /etc/systemd/system/ollama.service
|
||||
```
|
||||
|
||||
Remove ollama libraries from your lib directory (either `/usr/local/lib`, `/usr/lib`, or `/lib`):
|
||||
|
||||
```shell
|
||||
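# `tr 'bin' 'lib'` rewrites the binary path (e.g. /usr/bin/ollama) into the matching lib path (/usr/lib/ollama)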
sudo rm -r $(which ollama | tr 'bin' 'lib')
|
||||
```
|
||||
|
||||
Remove the ollama binary from your bin directory (either `/usr/local/bin`, `/usr/bin`, or `/bin`):
|
||||
|
||||
```shell
|
||||
@@ -187,13 +193,7 @@ sudo rm $(which ollama)
|
||||
Remove the downloaded models and Ollama service user and group:
|
||||
|
||||
```shell
|
||||
sudo rm -r /usr/share/ollama
|
||||
sudo userdel ollama
|
||||
sudo groupdel ollama
|
||||
```
|
||||
|
||||
Remove installed libraries:
|
||||
|
||||
```shell
|
||||
sudo rm -rf /usr/local/lib/ollama
|
||||
sudo rm -r /usr/share/ollama
|
||||
```
|
||||
|
||||
3
docs/logo.svg
Normal file
|
After Width: | Height: | Size: 6.7 KiB |
@@ -1,9 +1,8 @@
|
||||
# Ollama Model File
|
||||
---
|
||||
title: Modelfile Reference
|
||||
---
|
||||
|
||||
> [!NOTE]
|
||||
> `Modelfile` syntax is in development
|
||||
|
||||
A model file is the blueprint to create and share models with Ollama.
|
||||
A Modelfile is the blueprint to create and share customized models using Ollama.
|
||||
|
||||
## Table of Contents
|
||||
|
||||
@@ -73,26 +72,23 @@ To view the Modelfile of a given model, use the `ollama show --modelfile` comman
|
||||
ollama show --modelfile llama3.2
|
||||
```
|
||||
|
||||
> **Output**:
|
||||
>
|
||||
> ```
|
||||
> # Modelfile generated by "ollama show"
|
||||
> # To build a new Modelfile based on this one, replace the FROM line with:
|
||||
> # FROM llama3.2:latest
|
||||
> FROM /Users/pdevine/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29
|
||||
> TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
|
||||
>
|
||||
> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
|
||||
>
|
||||
> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
|
||||
>
|
||||
> {{ .Response }}<|eot_id|>"""
|
||||
> PARAMETER stop "<|start_header_id|>"
|
||||
> PARAMETER stop "<|end_header_id|>"
|
||||
> PARAMETER stop "<|eot_id|>"
|
||||
> PARAMETER stop "<|reserved_special_token"
|
||||
> ```
|
||||
```
|
||||
# Modelfile generated by "ollama show"
|
||||
# To build a new Modelfile based on this one, replace the FROM line with:
|
||||
# FROM llama3.2:latest
|
||||
FROM /Users/pdevine/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29
|
||||
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
|
||||
|
||||
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
|
||||
|
||||
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
|
||||
|
||||
{{ .Response }}<|eot_id|>"""
|
||||
PARAMETER stop "<|start_header_id|>"
|
||||
PARAMETER stop "<|end_header_id|>"
|
||||
PARAMETER stop "<|eot_id|>"
|
||||
PARAMETER stop "<|reserved_special_token"
|
||||
```
|
||||
|
||||
## Instructions
|
||||
|
||||
@@ -110,10 +106,13 @@ FROM <model name>:<tag>
|
||||
FROM llama3.2
|
||||
```
|
||||
|
||||
A list of available base models:
|
||||
<https://github.com/ollama/ollama#model-library>
|
||||
Additional models can be found at:
|
||||
<https://ollama.com/library>
|
||||
<Card title="Base Models" href="https://github.com/ollama/ollama#model-library">
|
||||
A list of available base models
|
||||
</Card>
|
||||
|
||||
<Card title="Base Models" href="https://ollama.com/library">
|
||||
Additional models can be found in the Ollama library
|
||||
</Card>
|
||||
|
||||
#### Build from a Safetensors model
|
||||
|
||||
@@ -124,10 +123,11 @@ FROM <model directory>
|
||||
The model directory should contain the Safetensors weights for a supported architecture.
|
||||
|
||||
Currently supported model architectures:
|
||||
* Llama (including Llama 2, Llama 3, Llama 3.1, and Llama 3.2)
|
||||
* Mistral (including Mistral 1, Mistral 2, and Mixtral)
|
||||
* Gemma (including Gemma 1 and Gemma 2)
|
||||
* Phi3
|
||||
|
||||
- Llama (including Llama 2, Llama 3, Llama 3.1, and Llama 3.2)
|
||||
- Mistral (including Mistral 1, Mistral 2, and Mixtral)
|
||||
- Gemma (including Gemma 1 and Gemma 2)
|
||||
- Phi3
|
||||
|
||||
#### Build from a GGUF file
|
||||
|
||||
@@ -137,7 +137,6 @@ FROM ./ollama-model.gguf
|
||||
|
||||
The GGUF file location should be specified as an absolute path or relative to the `Modelfile` location.
|
||||
|
||||
|
||||
### PARAMETER
|
||||
|
||||
The `PARAMETER` instruction defines a parameter that can be set when the model is run.
|
||||
@@ -149,8 +148,11 @@ PARAMETER <parameter> <parametervalue>
|
||||
#### Valid Parameters and Values
|
||||
|
||||
| Parameter | Description | Value Type | Example Usage |
|
||||
| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------- | -------------------- |
|
||||
| num_ctx | Sets the size of the context window used to generate the next token. (Default: 4096) | int | num_ctx 4096 |
|
||||
| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------- | -------------------- |
|
||||
| mirostat | Enable Mirostat sampling for controlling perplexity. (default: 0, 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0) | int | mirostat 0 |
|
||||
| mirostat_eta | Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1) | float | mirostat_eta 0.1 |
|
||||
| mirostat_tau | Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0) | float | mirostat_tau 5.0 |
|
||||
| num_ctx | Sets the size of the context window used to generate the next token. (Default: 2048) | int | num_ctx 4096 |
|
||||
| repeat_last_n | Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx) | int | repeat_last_n 64 |
|
||||
| repeat_penalty | Sets how strongly to penalize repetitions. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. (Default: 1.1) | float | repeat_penalty 1.1 |
|
||||
| temperature | The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8) | float | temperature 0.7 |
|
||||
@@ -159,7 +161,7 @@ PARAMETER <parameter> <parametervalue>
|
||||
| num_predict | Maximum number of tokens to predict when generating text. (Default: -1, infinite generation) | int | num_predict 42 |
|
||||
| top_k | Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40) | int | top_k 40 |
|
||||
| top_p | Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9) | float | top_p 0.9 |
|
||||
| min_p | Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter *p* represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with *p*=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0) | float | min_p 0.05 |
|
||||
| min_p | Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter _p_ represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with _p_=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. (Default: 0.0) | float | min_p 0.05 |
|
||||
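A minimal, hedged sketch of applying a few of these parameters (the model name and values are illustrative only):

```shell
# Sketch: build a variant of llama3.2 with a couple of PARAMETER overrides
cat > Modelfile <<'EOF'
FROM llama3.2
PARAMETER temperature 0.7
PARAMETER num_ctx 4096
PARAMETER top_p 0.9
EOF
ollama create my-tuned-model -f Modelfile
```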
|
||||
### TEMPLATE
|
||||
|
||||
@@ -201,9 +203,10 @@ ADAPTER <path to safetensor adapter>
|
||||
```
|
||||
|
||||
Currently supported Safetensor adapters:
|
||||
* Llama (including Llama 2, Llama 3, and Llama 3.1)
|
||||
* Mistral (including Mistral 1, Mistral 2, and Mixtral)
|
||||
* Gemma (including Gemma 1 and Gemma 2)
|
||||
|
||||
- Llama (including Llama 2, Llama 3, and Llama 3.1)
|
||||
- Mistral (including Mistral 1, Mistral 2, and Mixtral)
|
||||
- Gemma (including Gemma 1 and Gemma 2)
|
||||
|
||||
#### GGUF adapter
|
||||
|
||||
@@ -237,7 +240,6 @@ MESSAGE <role> <message>
|
||||
| user | An example message of what the user could have asked. |
|
||||
| assistant | An example message of how the model should respond. |
|
||||
|
||||
|
||||
#### Example conversation
|
||||
|
||||
```
|
||||
@@ -249,7 +251,6 @@ MESSAGE user Is Ontario in Canada?
|
||||
MESSAGE assistant yes
|
||||
```
|
||||
|
||||
|
||||
## Notes
|
||||
|
||||
- the **`Modelfile` is not case sensitive**. In the examples, uppercase instructions are used to make it easier to distinguish instructions from arguments.
|
||||
|
||||
3
docs/ollama-logo.svg
Normal file
|
After Width: | Height: | Size: 6.8 KiB |
BIN
docs/ollama.png
Normal file
|
After Width: | Height: | Size: 7.3 KiB |
1413
docs/openapi.yaml
Normal file
103
docs/quickstart.mdx
Normal file
@@ -0,0 +1,103 @@
|
||||
---
|
||||
title: Quickstart
|
||||
---
|
||||
|
||||
This quickstart will walk you through running your first model with Ollama. To get started, download Ollama on macOS, Windows or Linux.
|
||||
|
||||
<a
|
||||
href="https://ollama.com/download"
|
||||
target="_blank"
|
||||
className="inline-block px-6 py-2 bg-black rounded-full dark:bg-neutral-700 text-white font-normal border-none"
|
||||
>
|
||||
Download Ollama
|
||||
</a>
|
||||
|
||||
## Run a model
|
||||
|
||||
<Tabs>
|
||||
<Tab title="CLI">
|
||||
Open a terminal and run the command:
|
||||
|
||||
```
|
||||
ollama run gemma3
|
||||
```
|
||||
|
||||
</Tab>
|
||||
<Tab title="cURL">
|
||||
Start by downloading a model:

```
|
||||
ollama pull gemma3
|
||||
```
|
||||
|
||||
Lastly, chat with the model:
|
||||
|
||||
```shell
|
||||
curl http://localhost:11434/api/chat -d '{
|
||||
"model": "gemma3",
|
||||
"messages": [{
|
||||
"role": "user",
|
||||
"content": "Hello there!"
|
||||
}],
|
||||
"stream": false
|
||||
}'
|
||||
```
|
||||
|
||||
</Tab>
|
||||
<Tab title="Python">
|
||||
Start by downloading a model:
|
||||
|
||||
```
|
||||
ollama pull gemma3
|
||||
```
|
||||
|
||||
Then install Ollama's Python library:
|
||||
|
||||
```
|
||||
pip install ollama
|
||||
```
|
||||
|
||||
Lastly, chat with the model:
|
||||
|
||||
```python
|
||||
from ollama import chat
|
||||
from ollama import ChatResponse
|
||||
|
||||
response: ChatResponse = chat(model='gemma3', messages=[
|
||||
{
|
||||
'role': 'user',
|
||||
'content': 'Why is the sky blue?',
|
||||
},
|
||||
])
|
||||
print(response['message']['content'])
|
||||
# or access fields directly from the response object
|
||||
print(response.message.content)
|
||||
```
|
||||
|
||||
</Tab>
|
||||
<Tab title="JavaScript">
|
||||
Start by downloading a model:
|
||||
|
||||
```
|
||||
ollama pull gemma3
|
||||
```
|
||||
|
||||
Then install the Ollama JavaScript library:
|
||||
```
|
||||
npm i ollama
|
||||
```
|
||||
|
||||
Lastly, chat with the model:
|
||||
|
||||
```javascript
|
||||
import ollama from 'ollama'
|
||||
|
||||
const response = await ollama.chat({
|
||||
model: 'gemma3',
|
||||
messages: [{ role: 'user', content: 'Why is the sky blue?' }],
|
||||
})
|
||||
console.log(response.message.content)
|
||||
```
|
||||
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
See a full list of available models [here](https://ollama.com/models).
|
||||
16
docs/styling.css
Normal file
@@ -0,0 +1,16 @@
|
||||
body {
|
||||
  font-family: ui-sans-serif, system-ui, sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol, Noto Color Emoji;
|
||||
}
|
||||
|
||||
pre, code, .font-mono {
|
||||
  font-family: ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, monospace;
|
||||
}
|
||||
|
||||
.nav-logo {
|
||||
height: 44px;
|
||||
}
|
||||
|
||||
.eyebrow {
|
||||
color: #666;
|
||||
font-weight: 400;
|
||||
}
|
||||
@@ -1,4 +1,6 @@
|
||||
# Template
|
||||
---
|
||||
title: Template
|
||||
---
|
||||
|
||||
Ollama provides a powerful templating engine backed by Go's built-in templating engine to construct prompts for your large language model. This feature is a valuable tool to get the most out of your models.
|
||||
|
||||
@@ -6,13 +8,13 @@ Ollama provides a powerful templating engine backed by Go's built-in templating
|
||||
|
||||
A basic Go template consists of three main parts:
|
||||
|
||||
* **Layout**: The overall structure of the template.
|
||||
* **Variables**: Placeholders for dynamic data that will be replaced with actual values when the template is rendered.
|
||||
* **Functions**: Custom functions or logic that can be used to manipulate the template's content.
|
||||
- **Layout**: The overall structure of the template.
|
||||
- **Variables**: Placeholders for dynamic data that will be replaced with actual values when the template is rendered.
|
||||
- **Functions**: Custom functions or logic that can be used to manipulate the template's content.
|
||||
|
||||
Here's an example of a simple chat template:
|
||||
|
||||
```go
|
||||
```gotmpl
|
||||
{{- range .Messages }}
|
||||
{{ .Role }}: {{ .Content }}
|
||||
{{- end }}
|
||||
@@ -20,9 +22,9 @@ Here's an example of a simple chat template:
|
||||
|
||||
In this example, we have:
|
||||
|
||||
* A basic messages structure (layout)
|
||||
* Three variables: `Messages`, `Role`, and `Content` (variables)
|
||||
* A custom function (action) that iterates over an array of items (`range .Messages`) and displays each item
|
||||
- A basic messages structure (layout)
|
||||
- Three variables: `Messages`, `Role`, and `Content` (variables)
|
||||
- A custom function (action) that iterates over an array of items (`range .Messages`) and displays each item
|
||||
|
||||
## Adding templates to your model
|
||||
|
||||
@@ -99,9 +101,9 @@ TEMPLATE """{{- if .System }}<|start_header_id|>system<|end_header_id|>
|
||||
|
||||
Keep the following tips and best practices in mind when working with Go templates:
|
||||
|
||||
* **Be mindful of dot**: Control flow structures like `range` and `with` changes the value `.`
|
||||
* **Out-of-scope variables**: Use `$.` to reference variables not currently in scope, starting from the root
|
||||
* **Whitespace control**: Use `-` to trim leading (`{{-`) and trailing (`-}}`) whitespace
|
||||
- **Be mindful of dot**: Control flow structures like `range` and `with` change the value of `.`
|
||||
- **Out-of-scope variables**: Use `$.` to reference variables not currently in scope, starting from the root
|
||||
- **Whitespace control**: Use `-` to trim leading (`{{-`) and trailing (`-}}`) whitespace
|
||||
|
||||
## Examples
|
||||
|
||||
@@ -155,13 +157,14 @@ CodeLlama [7B](https://ollama.com/library/codellama:7b-code) and [13B](https://o
|
||||
<PRE> {{ .Prompt }} <SUF>{{ .Suffix }} <MID>
|
||||
```
|
||||
|
||||
> [!NOTE]
|
||||
> CodeLlama 34B and 70B code completion and all instruct and Python fine-tuned models do not support fill-in-middle.
|
||||
<Note>
|
||||
CodeLlama 34B and 70B code completion and all instruct and Python fine-tuned models do not support fill-in-middle.
|
||||
</Note>
|
||||
|
||||
#### Codestral
|
||||
|
||||
Codestral [22B](https://ollama.com/library/codestral:22b) supports fill-in-middle.
|
||||
|
||||
```go
|
||||
```gotmpl
|
||||
[SUFFIX]{{ .Suffix }}[PREFIX] {{ .Prompt }}
|
||||
```
|
||||
|
||||
@@ -1,4 +1,7 @@
|
||||
# How to troubleshoot issues
|
||||
---
|
||||
title: Troubleshooting
|
||||
description: How to troubleshoot issues encountered with Ollama
|
||||
---
|
||||
|
||||
Sometimes Ollama may not perform as expected. One of the best ways to figure out what happened is to take a look at the logs. Find the logs on **Mac** by running the command:
|
||||
|
||||
@@ -23,9 +26,11 @@ docker logs <container-name>
|
||||
If manually running `ollama serve` in a terminal, the logs will be on that terminal.
|
||||
|
||||
When you run Ollama on **Windows**, there are a few different locations. You can view them in the explorer window by hitting `<Win>+R` and typing in:
|
||||
|
||||
- `explorer %LOCALAPPDATA%\Ollama` to view logs. The most recent server logs will be in `server.log` and older logs will be in `server-#.log`
|
||||
- `explorer %LOCALAPPDATA%\Programs\Ollama` to browse the binaries (The installer adds this to your user PATH)
|
||||
- `explorer %HOMEPATH%\.ollama` to browse where models and configuration are stored
|
||||
- `explorer %TEMP%` where temporary executable files are stored in one or more `ollama*` directories
|
||||
|
||||
To enable additional debug logging to help troubleshoot problems, first **Quit the running app from the tray menu** then in a powershell terminal
|
||||
|
||||
@@ -38,14 +43,26 @@ Join the [Discord](https://discord.gg/ollama) for help interpreting the logs.
|
||||
|
||||
## LLM libraries
|
||||
|
||||
Ollama includes multiple LLM libraries compiled for different GPU libraries and versions. Ollama tries to pick the best one based on the capabilities of your system. If this autodetection has problems, or you run into other problems (e.g. crashes in your GPU) you can workaround this by forcing a specific LLM library.
|
||||
Ollama includes multiple LLM libraries compiled for different GPUs and CPU vector features. Ollama tries to pick the best one based on the capabilities of your system. If this autodetection has problems, or you run into other problems (e.g. crashes in your GPU), you can work around this by forcing a specific LLM library. `cpu_avx2` will perform the best, followed by `cpu_avx`; the slowest but most compatible is `cpu`. Rosetta emulation under macOS will work with the `cpu` library.
|
||||
|
||||
In the server log, you will see a message that looks something like this (varies from release to release):
|
||||
|
||||
```
|
||||
Dynamic LLM libraries [rocm_v6 cpu cpu_avx cpu_avx2 cuda_v11 rocm_v5]
|
||||
```
|
||||
|
||||
**Experimental LLM Library Override**
|
||||
|
||||
You can set OLLAMA_LLM_LIBRARY to any of the available LLM libraries to limit autodetection, so for example, if you have both CUDA and AMD GPUs, but want to force the CUDA v13 only, use:
|
||||
You can set OLLAMA_LLM_LIBRARY to any of the available LLM libraries to bypass autodetection, so for example, if you have a CUDA card, but want to force the CPU LLM library with AVX2 vector support, use:
|
||||
|
||||
```shell
|
||||
OLLAMA_LLM_LIBRARY="cuda_v13" ollama serve
|
||||
OLLAMA_LLM_LIBRARY="cpu_avx2" ollama serve
|
||||
```
|
||||
|
||||
You can see what features your CPU has with the following.
|
||||
|
||||
```shell
|
||||
cat /proc/cpuinfo | grep flags | head -1
|
||||
```
|
||||
|
||||
## Installing older or pre-release versions on Linux
|
||||
@@ -56,6 +73,10 @@ If you run into problems on Linux and want to install an older version, or you'd
|
||||
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.5.7 sh
|
||||
```
|
||||
|
||||
## Linux tmp noexec
|
||||
|
||||
If your system is configured with the "noexec" flag where Ollama stores its temporary executable files, you can specify an alternate location by setting `OLLAMA_TMPDIR` to a location writable by the user Ollama runs as, for example `OLLAMA_TMPDIR=/usr/share/ollama/`.
|
||||
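For instance, a minimal sketch when launching the server manually (the path is the example given above):

```shell
# Point temporary executable files at a writable, exec-permitted directory
OLLAMA_TMPDIR=/usr/share/ollama/ ollama serve
```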
|
||||
## Linux docker
|
||||
|
||||
If Ollama initially works on the GPU in a docker container, but then switches to running on CPU after some period of time with errors in the server log reporting GPU discovery failures, this can be resolved by disabling systemd cgroup management in Docker. Edit `/etc/docker/daemon.json` on the host and add `"exec-opts": ["native.cgroupdriver=cgroupfs"]` to the docker configuration.
|
||||
@@ -77,13 +98,10 @@ Sometimes the Ollama can have difficulties initializing the GPU. When you check
|
||||
- Make sure you're running the latest nvidia drivers
|
||||
|
||||
If none of those resolve the problem, gather additional information and file an issue:
|
||||
|
||||
- Set `CUDA_ERROR_LEVEL=50` and try again to get more diagnostic logs
|
||||
- Check dmesg for any errors `sudo dmesg | grep -i nvrm` and `sudo dmesg | grep -i nvidia`
|
||||
|
||||
You may get more details for initialization failures by enabling debug prints in the uvm driver. You should only use this temporarily while troubleshooting:
|
||||
- `sudo rmmod nvidia_uvm` then `sudo modprobe nvidia_uvm uvm_debug_prints=1`
|
||||
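A hedged sketch of gathering the additional diagnostics above when running the server manually:

```shell
# Reproduce the failure with more verbose CUDA error logging
CUDA_ERROR_LEVEL=50 ollama serve
# Then check the kernel logs for related errors
sudo dmesg | grep -i nvrm
sudo dmesg | grep -i nvidia
```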
|
||||
|
||||
## AMD GPU Discovery
|
||||
|
||||
On linux, AMD GPU access typically requires `video` and/or `render` group membership to access the `/dev/kfd` device. If permissions are not set up correctly, Ollama will detect this and report an error in the server log.
|
||||
@@ -91,6 +109,7 @@ On linux, AMD GPU access typically requires `video` and/or `render` group member
|
||||
When running in a container, in some Linux distributions and container runtimes, the ollama process may be unable to access the GPU. Use `ls -lnd /dev/kfd /dev/dri /dev/dri/*` on the host system to determine the **numeric** group IDs on your system, and pass additional `--group-add ...` arguments to the container so it can access the required devices. For example, in the following output `crw-rw---- 1 0 44 226, 0 Sep 16 16:55 /dev/dri/card0` the group ID column is `44`
|
||||
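A hedged sketch of that container setup (the `ollama/ollama:rocm` image is assumed, and the group IDs `44` and `993` are examples; substitute the IDs reported on your host):

```shell
# Find the numeric group IDs that own the GPU device nodes on the host
ls -lnd /dev/kfd /dev/dri /dev/dri/*
# Pass the devices and those group IDs through to the container
docker run -d --device /dev/kfd --device /dev/dri \
  --group-add 44 --group-add 993 \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:rocm
```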
|
||||
If you are experiencing problems getting Ollama to correctly discover or use your GPU for inference, the following may help isolate the failure.
|
||||
|
||||
- `AMD_LOG_LEVEL=3` Enable info log levels in the AMD HIP/ROCm libraries. This can help show more detailed error codes that can help troubleshoot problems
|
||||
- `OLLAMA_DEBUG=1` During GPU discovery additional information will be reported
|
||||
- Check dmesg for any errors from amdgpu or kfd drivers `sudo dmesg | grep -i amdgpu` and `sudo dmesg | grep -i kfd`
|
||||
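For example, a minimal sketch combining the two variables above when launching the server manually:

```shell
# Reproduce the problem with verbose AMD/ROCm logging and Ollama debug output
AMD_LOG_LEVEL=3 OLLAMA_DEBUG=1 ollama serve
```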
|
||||
@@ -1,4 +1,6 @@
|
||||
# Ollama Windows
|
||||
---
|
||||
title: Windows
|
||||
---
|
||||
|
||||
Welcome to Ollama for Windows.
|
||||
|
||||
@@ -7,14 +9,14 @@ No more WSL required!
|
||||
Ollama now runs as a native Windows application, including NVIDIA and AMD Radeon GPU support.
|
||||
After installing Ollama for Windows, Ollama will run in the background and
|
||||
the `ollama` command line is available in `cmd`, `powershell` or your favorite
|
||||
terminal application. As usual the Ollama [api](./api.md) will be served on
|
||||
terminal application. As usual the Ollama [API](/api) will be served on
|
||||
`http://localhost:11434`.
|
||||
|
||||
## System Requirements
|
||||
|
||||
* Windows 10 22H2 or newer, Home or Pro
|
||||
* NVIDIA 452.39 or newer Drivers if you have an NVIDIA card
|
||||
* AMD Radeon Driver https://www.amd.com/en/support if you have a Radeon card
|
||||
- Windows 10 22H2 or newer, Home or Pro
|
||||
- NVIDIA 452.39 or newer Drivers if you have an NVIDIA card
|
||||
- AMD Radeon Driver https://www.amd.com/en/support if you have a Radeon card
|
||||
|
||||
Ollama uses unicode characters for progress indication, which may render as unknown squares in some older terminal fonts in Windows 10. If you see this, try changing your terminal font settings.
|
||||
|
||||
@@ -30,6 +32,20 @@ To install the Ollama application in a location different than your home directo
|
||||
OllamaSetup.exe /DIR="d:\some\location"
|
||||
```
|
||||
|
||||
### Changing Model Location
|
||||
|
||||
To change where Ollama stores the downloaded models instead of using your home directory, set the environment variable `OLLAMA_MODELS` in your user account.
|
||||
|
||||
1. Start the Settings (Windows 11) or Control Panel (Windows 10) application and search for _environment variables_.
|
||||
|
||||
2. Click on _Edit environment variables for your account_.
|
||||
|
||||
3. Edit or create a new variable for your user account for `OLLAMA_MODELS` where you want the models stored
|
||||
|
||||
4. Click OK/Apply to save.
|
||||
|
||||
If Ollama is already running, quit the tray application and relaunch it from the Start menu, or from a new terminal started after you saved the environment variables.
|
||||
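If you prefer the command line, a hedged sketch of the same change (the path is only an example; run it from `cmd` or `powershell`, then restart Ollama so it picks up the new value):

```
setx OLLAMA_MODELS "D:\ollama\models"
```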
|
||||
## API Access
|
||||
|
||||
Here's a quick example showing API access from `powershell`
|
||||
@@ -42,20 +58,22 @@ Here's a quick example showing API access from `powershell`
|
||||
|
||||
Ollama on Windows stores files in a few different locations. You can view them in
|
||||
the explorer window by hitting `<Win>+R` and typing in:
|
||||
|
||||
- `explorer %LOCALAPPDATA%\Ollama` contains logs, and downloaded updates
|
||||
- *app.log* contains most resent logs from the GUI application
|
||||
- *server.log* contains the most recent server logs
|
||||
- *upgrade.log* contains log output for upgrades
|
||||
- _app.log_ contains the most recent logs from the GUI application
|
||||
- _server.log_ contains the most recent server logs
|
||||
- _upgrade.log_ contains log output for upgrades
|
||||
- `explorer %LOCALAPPDATA%\Programs\Ollama` contains the binaries (The installer adds this to your user PATH)
|
||||
- `explorer %HOMEPATH%\.ollama` contains models and configuration
|
||||
- `explorer %TEMP%` contains temporary executable files in one or more `ollama*` directories
|
||||
|
||||
## Uninstall
|
||||
|
||||
The Ollama Windows installer registers an Uninstaller application. Under `Add or remove programs` in Windows Settings, you can uninstall Ollama.
|
||||
|
||||
> [!NOTE]
|
||||
> If you have [changed the OLLAMA_MODELS location](#changing-model-location), the installer will not remove your downloaded models
|
||||
|
||||
<Note>
|
||||
If you have [changed the OLLAMA_MODELS location](#changing-model-location), the installer will not remove your downloaded models
|
||||
</Note>
|
||||
|
||||
## Standalone CLI
|
||||
|
||||
@@ -68,9 +86,10 @@ If you'd like to install or integrate Ollama as a service, a standalone
|
||||
`ollama-windows-amd64.zip` zip file is available containing only the Ollama CLI
|
||||
and GPU library dependencies for Nvidia. If you have an AMD GPU, also download
|
||||
and extract the additional ROCm package `ollama-windows-amd64-rocm.zip` into the
|
||||
same directory. Both zip files are necessary for a complete AMD installation.
|
||||
This allows for embedding Ollama in existing applications, or running it as a
|
||||
system service via `ollama serve` with tools such as [NSSM](https://nssm.cc/).
|
||||
same directory. This allows for embedding Ollama in existing applications, or
|
||||
running it as a system service via `ollama serve` with tools such as
|
||||
[NSSM](https://nssm.cc/).
|
||||
|
||||
> [!NOTE]
|
||||
> If you are upgrading from a prior version, you should remove the old directories first.
|
||||
<Note>
|
||||
If you are upgrading from a prior version, you should remove the old directories first.
|
||||
</Note>
|
||||
|
||||