highperfocused/ollama
Mirror of https://github.com/ollama/ollama.git, synced 2025-04-04 01:50:26 +02:00
ollama/llm/llama.cpp
Latest commit: 6b5bdfa6c9 by Jeffrey Morgan, "update runner submodule", 2023-12-18 17:33:46 -05:00
Entry                       Last commit                                                                    Date
..
ggml @ 9e232f0234           subprocess llama.cpp server (#401)                                             2023-08-30 16:35:03 -04:00
gguf @ a7aee47b98           update runner submodule                                                        2023-12-18 17:33:46 -05:00
patches                     update llama.cpp                                                               2023-11-21 09:50:02 -08:00
generate_darwin_amd64.go    add back f16c instructions on intel mac                                        2023-11-26 15:59:49 -05:00
generate_darwin_arm64.go    update llama.cpp                                                               2023-11-21 09:50:02 -08:00
generate_linux.go           Disable CUDA peer access as a workaround for multi-gpu inference bug (#1261)   2023-11-24 14:05:57 -05:00
generate_windows.go         windows CUDA support (#1262)                                                   2023-11-24 17:16:36 -05:00
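The `ggml @ 9e232f0234` and `gguf @ a7aee47b98` entries are git submodules pinned to fixed commits: the parent tree records a commit hash, and initializing the submodule checks out exactly that commit. A minimal local sketch of that mechanism (the `dep`/`parent` repos below are hypothetical stand-ins, not part of this repository):

```shell
#!/bin/sh
# Sketch: how a "name @ commit" submodule entry pins a dependency.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for the dependency repository (e.g. the gguf runner).
git init -q dep
git -C dep -c user.email=a@b -c user.name=a commit -q --allow-empty -m "dep commit"
pinned=$(git -C dep rev-parse HEAD)

# The parent repo records the submodule at that exact commit.
git init -q parent
git -C parent -c protocol.file.allow=always submodule add -q "$tmp/dep" gguf
git -C parent -c user.email=a@b -c user.name=a commit -q -m "add gguf submodule"

# A fresh clone plus `submodule update --init` restores the pinned commit.
git clone -q "$tmp/parent" clone
git -C clone -c protocol.file.allow=always submodule update -q --init
test "$(git -C clone/gguf rev-parse HEAD)" = "$pinned" && echo "pinned commit restored"
```

This is why commits like "update runner submodule" appear in the listing: bumping a submodule means committing a new pinned hash in the parent tree. (`protocol.file.allow=always` is only needed because the sketch uses local file-path remotes, which recent git versions block by default.)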