---
title: Quickstart
---

This quickstart will walk you through running your first model with Ollama.

To get started, download Ollama on macOS, Windows, or Linux:

[Download Ollama](https://ollama.com/download)

## Run a model

### Command line

Open a terminal and run the command:

```
ollama run gemma3
```

### cURL

Start by downloading a model:

```
ollama pull gemma3
```

Lastly, chat with the model:

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "gemma3",
  "messages": [{
    "role": "user",
    "content": "Hello there!"
  }],
  "stream": false
}'
```

### Python

Start by downloading a model:

```
ollama pull gemma3
```

Then install Ollama's Python library:

```
pip install ollama
```

Lastly, chat with the model:

```python
from ollama import chat
from ollama import ChatResponse

response: ChatResponse = chat(model='gemma3', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])
print(response['message']['content'])
# or access fields directly from the response object
print(response.message.content)
```

### JavaScript

Start by downloading a model:

```
ollama pull gemma3
```

Then install the Ollama JavaScript library:

```
npm i ollama
```

Lastly, chat with the model:

```javascript
import ollama from 'ollama'

const response = await ollama.chat({
  model: 'gemma3',
  messages: [{ role: 'user', content: 'Why is the sky blue?' }],
})
console.log(response.message.content)
```

See a full list of available models [here](https://ollama.com/models).
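Because the cURL example above sets `"stream": false`, the `/api/chat` endpoint replies with a single JSON object rather than a stream of chunks. A trimmed sketch of the response shape is shown below; the field values are illustrative, some timing and statistics fields are omitted, and exact fields can vary by Ollama version:

```json
{
  "model": "gemma3",
  "created_at": "2024-01-01T00:00:00Z",
  "message": {
    "role": "assistant",
    "content": "Hello! How can I help you today?"
  },
  "done": true
}
```

The Python and JavaScript examples above read the same `message.content` field from this response.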