highperfocused/ollama
Mirror of https://github.com/ollama/ollama.git, synced 2025-07-29 13:53:05 +02:00
Files
ollama/llm at commit 45cacbaf0568a4d38d74ebdd0957fe01bd06719d
Last commit: Daniel Hiltgen 17df6520c8 Remove mmap related output calc logic (2024-06-14 14:55:50 -07:00)
Name                     Last commit message                                                   Date
ext_server               Fix server.cpp for the new cuda build macros                          2024-06-14 14:51:40 -07:00
generate                 Add ability to skip oneapi generate                                   2024-06-07 08:32:49 -07:00
llama.cpp @ 5921b8f089   Update llama.cpp submodule to 5921b8f0 (#4731)                        2024-05-30 16:20:22 -07:00
patches                  llm: patch to fix qwen 2 temporarily on nvidia (#4897)                2024-06-06 23:14:33 -07:00
filetype.go              Add support for IQ1_S, IQ3_S, IQ2_S, IQ4_XS. IQ4_NL (#4322)           2024-05-23 13:21:49 -07:00
ggla.go                  simplify safetensors reading                                          2024-05-21 11:28:22 -07:00
ggml.go                  Improve multi-gpu handling at the limit                               2024-06-14 14:51:40 -07:00
gguf.go                  Revert "Merge pull request #4938 from ollama/mxyng/fix-byte-order"   2024-06-11 15:56:17 -07:00
llm_darwin_amd64.go      …
llm_darwin_arm64.go      …
llm_linux.go             …
llm_windows.go           …
llm.go                   revert tokenize ffi (#4761)                                           2024-05-31 18:54:21 -07:00
memory_test.go           review comments and coverage                                          2024-06-14 14:55:50 -07:00
memory.go                Remove mmap related output calc logic                                 2024-06-14 14:55:50 -07:00
payload.go               review comments and coverage                                          2024-06-14 14:55:50 -07:00
server.go                review comments and coverage                                          2024-06-14 14:55:50 -07:00
status.go                …