highperfocused / ollama
Mirror of https://github.com/ollama/ollama.git, synced 2025-03-20 23:02:48 +01:00
ollama / llm

Latest commit: a3fcecf943, "only set main_gpu if value > 0 is provided" (Jeffrey Morgan, 2023-11-20 19:54:04 -05:00)
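The latest commit message describes a guard on GPU selection. As a hedged illustration only (the struct, field, and flag names below are assumptions for the sketch, not the actual llama.go code), a minimal Go example of passing a main-GPU argument only when a positive value is provided:

```go
package main

import (
	"fmt"
	"strconv"
)

// Options is a hypothetical subset of request options for this sketch.
type Options struct {
	MainGPU int // index of the primary GPU; 0 or negative means "use the default"
}

// serverArgs builds hypothetical llama.cpp server arguments, appending
// --main-gpu only when a positive value is provided, so the zero value
// of the field does not override the server's default GPU choice.
func serverArgs(opts Options) []string {
	args := []string{}
	if opts.MainGPU > 0 {
		args = append(args, "--main-gpu", strconv.Itoa(opts.MainGPU))
	}
	return args
}

func main() {
	fmt.Println(serverArgs(Options{MainGPU: 0})) // []
	fmt.Println(serverArgs(Options{MainGPU: 1})) // [--main-gpu 1]
}
```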
..
llama.cpp     enable cpu instructions on intel macs                               2023-11-19 23:20:26 -05:00
falcon.go     starcoder                                                           2023-10-02 19:56:51 -07:00
ggml.go       ggufv3                                                              2023-10-23 09:35:49 -07:00
gguf.go       instead of static number of parameters for each model family, get the real number from the tensors (#1022)  2023-11-08 17:55:46 -08:00
llama.go      only set main_gpu if value > 0 is provided                          2023-11-20 19:54:04 -05:00
llm.go        recent llama.cpp update added kernels for fp32, q5_0, and q5_1      2023-11-20 13:44:31 -08:00
starcoder.go  starcoder                                                           2023-10-02 19:56:51 -07:00
utils.go      partial decode ggml bin for more info                               2023-08-10 09:23:10 -07:00
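The gguf.go entry above notes switching from a static per-family parameter count to one derived from the tensors themselves. A minimal sketch of that idea, assuming a hypothetical tensor descriptor rather than the real GGUF parsing structures:

```go
package main

import "fmt"

// tensor is a hypothetical descriptor for one GGUF tensor entry;
// the real gguf.go types differ.
type tensor struct {
	name  string
	shape []uint64
}

// numParameters derives the model's parameter count by summing the
// element counts of all tensors, instead of using a static value
// per model family.
func numParameters(tensors []tensor) uint64 {
	var total uint64
	for _, t := range tensors {
		n := uint64(1)
		for _, d := range t.shape {
			n *= d
		}
		total += n
	}
	return total
}

func main() {
	ts := []tensor{
		{name: "tok_embeddings.weight", shape: []uint64{4096, 32000}},
		{name: "output.weight", shape: []uint64{4096, 32000}},
	}
	fmt.Println(numParameters(ts)) // 262144000
}
```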