Commit Graph

429 Commits

df06812494 Update api.md 2023-12-20 08:47:53 -05:00
1b991d0ba9 Refine build to support CPU only
If someone checks out the ollama repo and doesn't install the CUDA
library, this will ensure they can build a CPU-only version
2023-12-19 09:05:46 -08:00
811b1f03c8 deprecate ggml
- remove ggml runner
- automatically pull gguf models when ggml detected
- tell users to update to gguf in case the automatic pull fails

Co-Authored-By: Jeffrey Morgan <jmorganca@gmail.com>
2023-12-19 09:05:46 -08:00
6e16098a60 remove sample_count from docs (#1527)
this info has not been returned from these endpoints in some time
2023-12-14 17:49:00 -05:00
fedba24a63 Docs for multimodal support (#1485)
* add multimodal docs

* add chat api docs

* consistency between `/api/generate` and `/api/chat`

* simplify docs
2023-12-13 13:59:33 -05:00
e3b090dbc5 Added message format for chat api (#1488) 2023-12-13 11:21:23 -05:00
0a9d348023 Fix issues with /set template and /set system (#1486) 2023-12-12 14:43:19 -05:00
910e9401d0 Multimodal support (#1216)
---------

Co-authored-by: Matt Apperson <mattapperson@Matts-MacBook-Pro.local>
2023-12-11 13:56:22 -08:00
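The multimodal support commit above added image inputs to the API. A minimal sketch of what such a request body looks like — the model name, prompt, and image bytes here are placeholder assumptions, not taken from the commit itself; images are sent as base64-encoded strings:

```python
import base64
import json

# Stand-in for real image file contents (hypothetical, for illustration).
fake_image_bytes = b"\x89PNG..."

# Sketch of a multimodal /api/generate request body: the prompt is
# accompanied by a list of base64-encoded images.
payload = {
    "model": "llava",  # placeholder multimodal model name
    "prompt": "What is in this picture?",
    "images": [base64.b64encode(fake_image_bytes).decode("ascii")],
}
print(json.dumps(payload))
```

The later "consistency between `/api/generate` and `/api/chat`" docs commit suggests the same image convention applies to both endpoints.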
5d4d2e2c60 update docs with chat completion api 2023-12-10 13:53:36 -05:00
32064a0646 fix empty response when receiving runner error 2023-12-10 10:53:38 -05:00
b74580c913 Update api.md 2023-12-08 16:02:07 -08:00
2a2289fb6b Update api.md 2023-12-08 09:36:45 -08:00
ba264e9da8 add future version note to chat api docs 2023-12-07 09:42:15 -08:00
f9b7d65e2b docs/tutorials: add bit on how to use Fly GPUs on-demand with Ollama (#1406)
Signed-off-by: Xe Iaso <xe@camellia.finch-kitefin.ts.net>
2023-12-06 14:14:02 -08:00
13524b5e72 List "Send chat messages" in table of contents (#1399)
Thank you @calderonsamuel
2023-12-06 12:34:27 -08:00
97c5696945 fix base urls in chat examples 2023-12-06 12:10:20 -08:00
195e3d9dbd chat api endpoint (#1392) 2023-12-05 14:57:33 -05:00
00d06619a1 Revert "chat api (#991)" while context variable is fixed
This reverts commit 7a0899d62d.
2023-12-04 21:16:27 -08:00
f1ef3f9947 remove mention of gpt-neox in import (#1381)
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-12-04 20:58:10 -08:00
7a0899d62d chat api (#991)
- update chat docs
- add messages chat endpoint
- remove deprecated context and template generate parameters from docs
- context and template are still supported for the time being and will continue to work as expected
- add partial response to chat history
2023-12-04 18:01:06 -05:00
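The chat API commits above introduce a messages-based endpoint in place of the deprecated `context`/`template` generate parameters. A minimal sketch of a `/api/chat` request body, assuming the role/content message shape the commit summary describes (the model name is a placeholder):

```python
import json

# Sketch of a request body for the messages-based /api/chat endpoint.
# Each message carries a role ("user", "assistant", or "system") and content.
payload = {
    "model": "llama2",  # placeholder model name
    "messages": [
        {"role": "user", "content": "Why is the sky blue?"},
    ],
}
print(json.dumps(payload, indent=2))
```

Per the commit notes, partial responses are appended to the chat history, so follow-up turns extend the `messages` list rather than passing a `context` blob.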
7eda3d0c55 Corrected transposed 129 to 192 for OLLAMA_ORIGINS example (#1325) 2023-11-29 22:44:17 -05:00
91897a606f Add OllamaEmbeddings to python LangChain example (#994)
* Add OllamaEmbeddings to python LangChain example

* typo

---------

Co-authored-by: Alec Hammond <alechammond@fb.com>
2023-11-29 16:25:39 -05:00
63097607b2 Correct macOS host port example (#1301) 2023-11-29 11:44:03 -05:00
e1a69d44c9 Update faq.md (#1299)
Fix a typo in the CA update command
2023-11-28 09:54:42 -05:00
2eaa95b417 Update api.md 2023-11-21 15:32:05 -05:00
f24741ff39 Documenting how to view Modelfiles (#723)
* Documented viewing Modelfiles in ollama.ai/library

* Moved Modelfile in ollama.ai down per request
2023-11-20 15:24:29 -05:00
1657c6abc7 add note to specify JSON in the prompt when using JSON mode 2023-11-18 22:59:26 -05:00
c82ead4d01 faq: fix heading and add more details 2023-11-17 09:02:17 -08:00
90860b6a7e update faq (#1176) 2023-11-17 11:42:58 -05:00
81092147c4 remove unnecessary -X POST from example curl commands 2023-11-17 09:50:38 -05:00
92656a74b7 Use llama2 as the model in api.md 2023-11-17 07:17:51 -05:00
d8842b4d4b update faq 2023-11-16 17:07:36 -08:00
c13bde962d Update docs/faq.md
Co-authored-by: Jeffrey Morgan <jmorganca@gmail.com>
2023-11-16 16:48:38 -08:00
ee307937fd update faq 2023-11-16 16:46:43 -08:00
b5f158f046 add faq for proxies (#1147) 2023-11-16 11:43:37 -05:00
77954bea0e Merge pull request #898 from jmorganca/mxyng/build-context
create remote models
2023-11-15 16:41:12 -08:00
54f92f01cb update docs 2023-11-15 15:28:15 -08:00
ecd71347ab Update faq.md 2023-11-15 18:17:13 -05:00
8ee4cbea0f Remove table of contents in faq.md 2023-11-15 18:16:27 -05:00
71d71d0988 update docs 2023-11-15 15:16:23 -08:00
cac11c9137 update api docs 2023-11-15 15:16:23 -08:00
f61f340279 FAQ: answer a few faq questions (#1128)
* faq: does ollama share my prompts

Signed-off-by: Matt Williams <m@technovangelist.com>

* faq: ollama and openai

Signed-off-by: Matt Williams <m@technovangelist.com>

* faq: vscode plugins

Signed-off-by: Matt Williams <m@technovangelist.com>

* faq: send a doc to Ollama

Signed-off-by: Matt Williams <m@technovangelist.com>

* extra spacing

Signed-off-by: Matt Williams <m@technovangelist.com>

* Update faq.md

* Update faq.md

---------

Signed-off-by: Matt Williams <m@technovangelist.com>
Co-authored-by: Michael <mchiang0610@users.noreply.github.com>
2023-11-15 18:05:13 -05:00
85951d25ef Created tutorial for running Ollama on NVIDIA Jetson devices (#1098) 2023-11-15 12:32:37 -05:00
df18486c35 Move /generate format to optional parameters (#1127)
This field is optional and should be under the `Advanced parameters` header
2023-11-14 16:12:30 -05:00
5cba29b9d6 JSON mode: add `"format": "json"` as an API parameter (#1051)
* add `"format": "json"` as an API parameter
---------
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
2023-11-09 16:44:02 -08:00
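The JSON mode commit above adds `"format": "json"` as a request parameter. A sketch of how it combines with a prompt — per the later docs note ("add note to specify JSON in the prompt"), the prompt itself should also ask for JSON; the model name and prompt text are illustrative:

```python
import json

# Sketch of a /api/generate request using JSON mode: "format": "json"
# constrains the output to valid JSON, and the prompt reinforces it.
payload = {
    "model": "llama2",  # placeholder model name
    "prompt": "List three primary colors. Respond using JSON.",
    "format": "json",
}
print(json.dumps(payload))
```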
5b39503bcd document specifying multiple stop params (#1061) 2023-11-09 13:16:26 -08:00
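The commit above documents passing multiple stop parameters. A sketch of what that looks like as a request body, assuming stop sequences are given as a list under `options` (the particular strings are illustrative):

```python
# Sketch of a generate request with multiple stop sequences: generation
# halts at whichever sequence is produced first.
payload = {
    "model": "llama2",  # placeholder model name
    "prompt": "Q: What is 2+2?\nA:",
    "options": {"stop": ["\n", "Q:"]},
}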
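The commit above documents specifying multiple stop parameters. A sketch of a request body with more than one stop sequence, assuming they are passed as a list under `options` (the particular strings are illustrative):

```python
# Sketch of a generate request with multiple stop sequences: generation
# halts at whichever stop sequence the model produces first.
payload = {
    "model": "llama2",  # placeholder model name
    "prompt": "Q: What is 2+2?\nA:",
    "options": {"stop": ["\n", "Q:"]},
}
```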
dd3dc47ddb Merge pull request #992 from aashish2057/aashish2057/langchainjs_doc_update 2023-11-09 05:08:31 -08:00
a49d6acc1e add a complete /generate options example (#1035) 2023-11-08 16:44:36 -08:00
ec2a31e9b3 support raw generation requests (#952)
- add the optional `raw` generate request parameter to bypass prompt formatting and response context
- add raw request to docs
2023-11-08 14:05:02 -08:00
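The raw generation commit above adds an optional `raw` parameter that bypasses prompt formatting and response context. A sketch of such a request, where the caller supplies a fully templated prompt themselves (the instruction-style template shown is an illustrative assumption):

```python
# Sketch of a raw generate request: with "raw": true the prompt is sent
# to the model verbatim, bypassing the Modelfile template, and no
# context is returned in the response.
payload = {
    "model": "llama2",  # placeholder model name
    "prompt": "[INST] why is the sky blue? [/INST]",  # caller-formatted prompt
    "raw": True,
}
```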
1d155caba3 docs: clarify where the models are stored in the faq
Signed-off-by: Matt Williams <m@technovangelist.com>
2023-11-06 14:38:49 -08:00