---
title: Droid
---

## Install

Install the [Droid CLI](https://factory.ai/):

```bash
curl -fsSL https://app.factory.ai/cli | sh
```

<Note>Droid requires a larger context window; at least 32K tokens is recommended. See [Context length](/context-length) for more information.</Note>
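
One way to raise the context window is to set it when starting the Ollama server. A minimal sketch, assuming a recent Ollama release that supports the `OLLAMA_CONTEXT_LENGTH` environment variable (see [Context length](/context-length) for other options):

```bash
# Serve models with a 32K-token context window instead of the default
OLLAMA_CONTEXT_LENGTH=32000 ollama serve
```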

## Usage with Ollama

Add a local configuration block to `~/.factory/config.json`:

```json
{
  "custom_models": [
    {
      "model_display_name": "qwen3-coder [Ollama]",
      "model": "qwen3-coder",
      "base_url": "http://localhost:11434/v1/",
      "api_key": "not-needed",
      "provider": "generic-chat-completion-api",
      "max_tokens": 32000
    }
  ]
}
```
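
Before launching Droid, you can check that the model referenced above is available locally and that Ollama's OpenAI-compatible endpoint is reachable:

```bash
# Download the model named in the config above
ollama pull qwen3-coder

# List models through the OpenAI-compatible API that Droid will use
curl http://localhost:11434/v1/models
```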

## Cloud Models

`qwen3-coder:480b-cloud` is the recommended model for use with Droid.

Add the cloud configuration block to `~/.factory/config.json`:

```json
{
  "custom_models": [
    {
      "model_display_name": "qwen3-coder [Ollama Cloud]",
      "model": "qwen3-coder:480b-cloud",
      "base_url": "http://localhost:11434/v1/",
      "api_key": "not-needed",
      "provider": "generic-chat-completion-api",
      "max_tokens": 128000
    }
  ]
}
```
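
Cloud models run on Ollama's hosted infrastructure, so the local Ollama install may need to be linked to your ollama.com account first. A minimal sketch, assuming your Ollama version provides the `ollama signin` command:

```bash
# Link the local Ollama install to your ollama.com account (skip if already signed in)
ollama signin

# Fetch the cloud model referenced in the config above
ollama pull qwen3-coder:480b-cloud
```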

## Connecting to ollama.com

1. Create an [API key](https://ollama.com/settings/keys) from ollama.com and export it as `OLLAMA_API_KEY` (see the example after the configuration block).
2. Add the cloud configuration block to `~/.factory/config.json`:

```json
{
  "custom_models": [
    {
      "model_display_name": "qwen3-coder [Ollama Cloud]",
      "model": "qwen3-coder:480b",
      "base_url": "https://ollama.com/v1/",
      "api_key": "OLLAMA_API_KEY",
      "provider": "generic-chat-completion-api",
      "max_tokens": 128000
    }
  ]
}
```
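
After creating the key, export it in the shell you will run Droid from. The curl check is optional and assumes the hosted endpoint exposes the standard OpenAI-style model-listing route:

```bash
# Replace the placeholder with the key created at https://ollama.com/settings/keys
export OLLAMA_API_KEY="your-api-key"

# Optional: confirm the key is accepted by ollama.com's API
curl -H "Authorization: Bearer $OLLAMA_API_KEY" https://ollama.com/v1/models
```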

Run `droid` in a new terminal to load the new settings.