highperfocused/ollama
mirror of https://github.com/ollama/ollama.git, synced 2025-07-29 18:24:42 +02:00
ollama/llm at commit 1ae84bc2a2ca4544e08d6ac65aa98a024f190313

Latest commit 1ae84bc2a2 (Bruce MacDonald, 2023-11-09 13:16:16 -08:00): skip gpu if less than 2GB VRAM are available (#1059)
..
llama.cpp       restore building runner with AVX on by default (#900)  (2023-10-27 12:13:44 -07:00)
falcon.go       …
ggml.go         ggufv3  (2023-10-23 09:35:49 -07:00)
gguf.go         instead of static number of parameters for each model family, get the real number from the tensors (#1022)  (2023-11-08 17:55:46 -08:00)
llama.go        skip gpu if less than 2GB VRAM are available (#1059)  (2023-11-09 13:16:16 -08:00)
llm.go          default rope params to 0 for new models (#968)  (2023-11-02 08:41:30 -07:00)
starcoder.go    …
utils.go        …
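The head commit (#1059), also listed above for llama.go, skips the GPU runner when less than 2 GB of VRAM is free. A minimal sketch of that kind of gate, assuming a hypothetical checkVRAM probe and a fixed 2 GiB threshold rather than ollama's actual accelerator code:

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

// checkVRAM is a stand-in for a free-VRAM probe; it is a stub here,
// not ollama's real accelerator query.
func checkVRAM() (int64, error) {
	return 1 << 30, nil // pretend 1 GiB is free
}

// Threshold from the commit message: skip the GPU below 2 GiB of free VRAM.
const minVRAM int64 = 2 << 30

var errNoGPU = errors.New("less than 2GB VRAM available, falling back to CPU")

// pickRunner chooses a GPU runner only when enough VRAM is reported free;
// any probe failure also falls back to the CPU runner.
func pickRunner() (string, error) {
	free, err := checkVRAM()
	if err != nil || free < minVRAM {
		return "cpu", errNoGPU
	}
	return "gpu", nil
}

func main() {
	runner, err := pickRunner()
	if err != nil {
		log.Println(err)
	}
	fmt.Println("selected runner:", runner)
}
```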
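The gguf.go change (#1022) replaces a static per-family parameter count with one computed from the model's tensors. The sketch below shows the general idea of summing tensor element counts; the tensor type is a stand-in for illustration, not the GGUF structures used by ollama:

```go
package main

import "fmt"

// tensor is a stand-in for a tensor header; only the dimensions matter here.
type tensor struct {
	name string
	dims []uint64
}

// numParameters sums the element counts of all tensors (the product of each
// tensor's dimensions) instead of relying on a static per-family number.
func numParameters(tensors []tensor) uint64 {
	var total uint64
	for _, t := range tensors {
		elems := uint64(1)
		for _, d := range t.dims {
			elems *= d
		}
		total += elems
	}
	return total
}

func main() {
	tensors := []tensor{
		{name: "tok_embeddings.weight", dims: []uint64{4096, 32000}},
		{name: "output.weight", dims: []uint64{4096, 32000}},
	}
	fmt.Printf("%.2fB parameters\n", float64(numParameters(tensors))/1e9)
}
```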