2a7553ce09
update llama.cpp submodule to c14f72d
v0.1.26
2024-02-21 09:03:14 -05:00
10af6070a9
Update big-AGI config file link (#2626)
Co-authored-by: bo.sun <bo.sun@cotticoffee.com>
2024-02-21 01:24:48 -05:00
92423b0600
add dist directory in build_windows.ps
2024-02-21 00:05:05 -05:00
b3eac61cac
update llama.cpp submodule to f0d1fafc029a056cd765bdae58dcaa12312e9879
2024-02-20 22:56:51 -05:00
287ba11500
better error message when calling /api/generate or /api/chat with embedding models
2024-02-20 21:53:45 -05:00
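A hypothetical Go sketch of what such a check can look like (the type and field names here are invented for illustration, not Ollama's actual handler code): reject generate/chat requests against embedding-only models early, with a descriptive message.

```go
package main

import "fmt"

// Model is a stand-in for server-side model metadata; the real
// structure in Ollama differs.
type Model struct {
	Name          string
	EmbeddingOnly bool // true for bert / nomic-bert style models
}

// checkGenerateSupported fails fast with a clear message instead of
// letting the request error out confusingly deeper in the pipeline.
func checkGenerateSupported(m Model) error {
	if m.EmbeddingOnly {
		return fmt.Errorf("%q is an embedding model and does not support generate or chat", m.Name)
	}
	return nil
}

func main() {
	if err := checkGenerateSupported(Model{Name: "nomic-embed-text", EmbeddingOnly: true}); err != nil {
		fmt.Println("error:", err)
	}
}
```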
63861f58cc
Support for bert and nomic-bert embedding models
2024-02-20 21:37:29 -05:00
f0425d3de9
Update faq.md
2024-02-20 20:44:45 -05:00
210b65268e
replace strings buffer with hasher (#2437)
the buffered value is going into the hasher eventually, so write directly to the hasher instead
2024-02-20 19:07:50 -05:00
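The idea behind this change, as a minimal Go sketch (illustrative, not the actual diff): hash.Hash implements io.Writer, so values can be streamed straight into the hasher instead of being accumulated in an intermediate buffer first.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
)

// digest streams each part directly into the hasher; hash.Hash
// implements io.Writer, so no intermediate strings/bytes buffer
// is needed.
func digest(parts []string) string {
	h := sha256.New()
	for _, p := range parts {
		io.WriteString(h, p) // write directly to the hasher
	}
	return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
	fmt.Println(digest([]string{"FROM llama2\n", "PARAMETER temperature 1\n"}))
}
```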
949d7b1c48
add gguf file types (#2532)
2024-02-20 19:06:29 -05:00
897b213468
use http.DefaultClient (#2530)
the default client already handles proxy settings
2024-02-20 18:34:47 -05:00
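Context for this one: http.DefaultClient uses http.DefaultTransport, whose Proxy field is http.ProxyFromEnvironment, so HTTP_PROXY, HTTPS_PROXY, and NO_PROXY are honored with no custom transport. A minimal sketch:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// No custom client needed: DefaultTransport already routes through
	// the proxy configured in the environment (HTTP_PROXY, HTTPS_PROXY,
	// NO_PROXY).
	resp, err := http.DefaultClient.Get("http://example.com/")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```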
4613a080e7
update llama.cpp submodule to 66c1968f7 (#2618)
2024-02-20 17:42:31 -05:00
ace2cdf1c6
Add Page Assist to the community integrations (#2447)
2024-02-20 14:03:58 -05:00
eed92bc19a
docs: add Msty app in readme (#1775)
* docs: add Msty app in readme
* docs: update msty url
2024-02-20 14:03:33 -05:00
e0a2f46466
Update README.md to include Elixir LangChain Library (#2180)
The Elixir LangChain Library now supports Ollama Chat with this [PR](https://github.com/brainlid/langchain/pull/70)
2024-02-20 14:03:02 -05:00
01ff2e14db
[nit] Remove unused msg local var. (#2511)
2024-02-20 14:02:34 -05:00
199e79ec0c
docs: add tenere to terminal clients (#2329)
2024-02-19 23:13:03 -05:00
8125ce4cb6
Update import.md
Add instructions to get the public key on Windows
2024-02-19 22:48:24 -05:00
636d6eea99
Add ShellOracle to community terminal integrations (#1767)
2024-02-19 22:18:05 -05:00
df56f1ee5e
Update faq.md
2024-02-19 22:16:42 -05:00
0b6c6c9092
feat: add Helm Chart link to Package managers list (#1673)
2024-02-19 22:05:14 -05:00
cb60389de7
NextJS web interface for Ollama (#2466)
2024-02-19 21:57:36 -05:00
ce0c95d097
[fix] /bye and /exit are now treated as prefixes (#2381)
* [fix] /bye and /exit are now treated as prefixes instead of entire lines, matching how the rest of the commands are handled
* Update cmd/interactive.go: fix whitespace
---------
Co-authored-by: Jeffrey Morgan <jmorganca@gmail.com>
2024-02-19 21:56:49 -05:00
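A simplified sketch of the prefix behavior described in the commit body (not the actual cmd/interactive.go code):

```go
package main

import (
	"fmt"
	"strings"
)

// isExitCommand matches on the prefix, like the other slash commands,
// so "/bye now" exits instead of requiring the line to equal "/bye".
func isExitCommand(line string) bool {
	line = strings.TrimSpace(line)
	return strings.HasPrefix(line, "/bye") || strings.HasPrefix(line, "/exit")
}

func main() {
	for _, in := range []string{"/bye", "/exit now", "hello"} {
		fmt.Printf("%q -> exit=%v\n", in, isExitCommand(in))
	}
}
```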
a9bc1e1c37
Add LangChain4J (#2164)
2024-02-19 21:17:32 -05:00
62c71f4cb1
add ollama-chat.nvim (#2188)
2024-02-19 21:14:29 -05:00
41aca5c2d0
Update faq.md
2024-02-19 21:11:01 -05:00
753724d867
Update api.md to include examples for reproducible outputs
2024-02-19 20:36:16 -05:00
e4576c2ee1
Update README.md
2024-02-19 20:15:24 -05:00
9a7a4b9533
add faqs for memory pre-loading and the keep_alive setting (#2601)
2024-02-19 14:45:25 -08:00
2653191222
Merge pull request #2600 from dhiltgen/refined_win_docs
Document setting server vars for Windows
2024-02-19 13:46:37 -08:00
b338c0635f
Document setting server vars for Windows
2024-02-19 13:30:46 -08:00
4fcbf1cde6
Merge pull request #2599 from dhiltgen/fix_avx
Explicitly disable AVX2 on GPU builds
2024-02-19 13:13:05 -08:00
9220b4fa91
Merge pull request #2585 from dhiltgen/cuda_leaks
Fix CUDA leaks
2024-02-19 12:48:00 -08:00
fc39a6cd7a
Fix CUDA leaks
This should resolve the problem where we don't fully unload from the GPU
when we go idle.
2024-02-18 18:37:20 -08:00
1e23e82324
Update Web UI link to new project name (#2563)
Ollama WebUI is now known as Open WebUI.
2024-02-17 20:05:20 -08:00
f9fd08040b
Merge pull request #2552 from dhiltgen/dup_update_menus
Fix duplicate menus on update and exit on signals
2024-02-16 17:23:37 -08:00
4318e35ee3
Merge pull request #2553 from dhiltgen/amdgpu_version
Harden AMD driver lookup logic
2024-02-16 17:23:12 -08:00
9754c6d9d8
Harden AMD driver lookup logic
It looks like the version file doesn't exist on older(?) drivers
2024-02-16 16:20:16 -08:00
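What "harden" plausibly means here, as a Go sketch (the path below is illustrative, not the real lookup location): treat a missing version file as non-fatal and carry on with an unknown driver version.

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"strings"
)

// amdDriverVersion reads a driver version file if present. Since older
// drivers may not ship the file at all, a missing file is reported as
// "unknown" rather than treated as a fatal error.
func amdDriverVersion(path string) (string, bool) {
	data, err := os.ReadFile(path)
	if errors.Is(err, os.ErrNotExist) {
		return "", false // older driver: no version file
	}
	if err != nil {
		return "", false
	}
	return strings.TrimSpace(string(data)), true
}

func main() {
	if v, ok := amdDriverVersion("/opt/amdgpu/version"); ok { // hypothetical path
		fmt.Println("driver version:", v)
	} else {
		fmt.Println("driver version unknown; continuing")
	}
}
```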
a497235a55
Fix view logs menu
2024-02-16 15:42:53 -08:00
df6dc4fd96
Fix duplicate menus on update and exit on signals
Also fixes a few fit-and-finish items for better developer experience
2024-02-16 15:33:16 -08:00
88622847c6
fix: chat system prompting overrides (#2542)
2024-02-16 14:42:43 -05:00
9774663013
Update faq.md with the location of models on Windows (#2545)
2024-02-16 11:04:19 -08:00
a468ae0459
Merge pull request #2499 from ollama/windows-preview
Windows Preview
2024-02-15 16:06:32 -08:00
c3e62ba38a
Merge pull request #2516 from dhiltgen/single_tray_app
Fix a couple duplicate instance bugs
2024-02-15 15:52:43 -08:00
117369aa73
Exit if we detect another copy of Ollama running
2024-02-15 14:58:29 -08:00
1ba734de67
typo
2024-02-15 14:56:55 -08:00
5208cf09b1
clean up some logging
2024-02-15 14:56:55 -08:00
bb9de6037c
Prevent multiple installers running concurrently
2024-02-15 14:56:55 -08:00
272e53a1f5
Prepare to distribute standalone windows executable
This will be useful for our automated test rigging, and may be useful for advanced users who want to "roll their own" system service.
2024-02-15 14:56:55 -08:00
db2a9ad1fe
Explicitly disable AVX2 on GPU builds
Even though we weren't setting it to on, somewhere in the cmake config
it was getting toggled on. By explicitly setting it to off, we get `/arch:AVX`
as intended.
2024-02-15 14:50:11 -08:00
c9ab1aead3
Merge pull request #2526 from dhiltgen/harden_for_quotes
Harden the OLLAMA_HOST lookup for quotes
2024-02-15 14:13:40 -08:00
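A minimal sketch of one way to harden the lookup (the real implementation may differ): strip surrounding quote characters from the OLLAMA_HOST value before using it, since quotes often sneak in when the variable is set through Windows dialogs or shell profiles.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// hostFromEnv reads OLLAMA_HOST and trims surrounding quotes, e.g. a
// value set as OLLAMA_HOST="0.0.0.0:11434" with the quotes included.
func hostFromEnv() string {
	host := strings.TrimSpace(os.Getenv("OLLAMA_HOST"))
	host = strings.Trim(host, `"'`)
	if host == "" {
		host = "127.0.0.1:11434" // assumed default
	}
	return host
}

func main() {
	fmt.Println("using host:", hostFromEnv())
}
```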