# Desktop
This app builds upon Ollama to provide a desktop experience for running models.
## Developing
First, build the `ollama` binary:

```
cd ..
go build .
```
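If the build succeeds, an `ollama` binary appears at the repository root. As an optional sanity check (not part of the original steps; it assumes the standard help flag of the CLI), you can run it before moving on:

```
# Optional: confirm the binary was built (assumes the CLI's --help flag)
./ollama --help
```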
Then run the desktop app with `npm start`:

```
cd app
npm install
npm start
```
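For convenience, the two steps above can be combined into a single sequence. This is just a sketch of the same workflow, assuming you start in the `app` directory (which the `cd ..` / `cd app` steps above imply):

```
# From the app directory: build the ollama binary one level up
cd .. && go build .

# Return to the app directory, install dependencies, and launch the desktop app
cd app && npm install && npm start
```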