Commit Graph

470 Commits

SHA1 Message Date
1775647f76 continue conversation (feed responses back into the llm) 2023-07-13 17:13:00 -07:00
77dc1a6d74 Merge pull request #74 from jmorganca/timings (Timings) v0.0.5 2023-07-13 10:17:13 -07:00
05e08d2310 return more info in generate response 2023-07-13 09:37:32 -07:00
31590284a7 fix route 2023-07-12 19:21:49 -07:00
f2863cc7f8 Merge pull request #76 from jmorganca/fix-pull (fix pull race) 2023-07-12 19:21:13 -07:00
4dd296e155 build app in publish script 2023-07-12 19:16:39 -07:00
304f419429 update README.md API reference 2023-07-12 19:16:28 -07:00
2666d3c206 fix pull race 2023-07-12 19:07:23 -07:00
787d965331 web: disable signup button while submitting v0.0.4 2023-07-12 17:32:27 -07:00
e6eee0732c web: fix npm build 2023-07-12 17:28:00 -07:00
4c2b4589ac web: newsletter signup on download page 2023-07-12 17:26:20 -07:00
5571ed5248 Merge pull request #73 from jmorganca/generate-eof (fix eof error in generate) 2023-07-12 11:09:23 -07:00
0944b01e7d pull fixes 2023-07-12 09:55:07 -07:00
5028de2901 update vicuna model 2023-07-12 09:42:26 -07:00
e1f0a0dc74 fix eof error in generate 2023-07-12 09:36:16 -07:00
b227261f21 Merge pull request #71 from jmorganca/llama-errors (error checking new model) 2023-07-12 09:20:33 -07:00
c63f811909 return error if model fails to load 2023-07-11 20:32:26 -07:00
7c71c10d4f fix compilation issue in Dockerfile, remove from README.md until ready 2023-07-11 19:51:08 -07:00
c5f7eadd87 error checking new model 2023-07-11 17:07:41 -07:00
dcb6ba389a app: trim server lines before logging v0.0.3 2023-07-11 16:43:19 -07:00
ed6abba75a app: bundle real ggml-metal.metal instead of symlink 2023-07-11 16:36:39 -07:00
b52a400cdf use go build on publish 2023-07-11 16:17:45 -07:00
2ed26f0047 tweak logging 2023-07-11 16:16:38 -07:00
e64ef69e34 look for ggml-metal in the same directory as the binary 2023-07-11 15:58:56 -07:00
3d0a9b477b log to console as well as file 2023-07-11 15:52:22 -07:00
7226980fb6 Merge pull request #70 from jmorganca/offline-fixes (offline fixes) 2023-07-11 15:50:19 -07:00
a806b03f62 no errgroup 2023-07-11 14:58:10 -07:00
948323fa78 rename partial file 2023-07-11 13:50:26 -07:00
e243329e2e check api status 2023-07-11 13:42:05 -07:00
2a66a1164a common stream producer 2023-07-11 13:42:05 -07:00
62620914e9 Merge pull request #65 from jmorganca/bindings (call llama.cpp directly from go) 2023-07-11 12:01:03 -07:00
442dec1c6f vendor llama.cpp 2023-07-11 11:59:18 -07:00
fd4792ec56 call llama.cpp directly from go 2023-07-11 11:59:18 -07:00
abaf7d3bda Merge pull request #67 from jmorganca/log (writing logs to `./ollama/logs`) 2023-07-11 14:45:21 -04:00
7762584fb1 address comments 2023-07-11 14:38:38 -04:00
317615fd5c web: remove unused code 2023-07-11 11:05:45 -07:00
acc31427dd add logs to ~/.ollama/logs folder 2023-07-11 13:33:32 -04:00
a3ec1ec2a0 consistent error handling for pull and generate v0.0.2 2023-07-10 21:34:15 -07:00
407a5cabf4 when app is running, server restarts when it exits or disconnects 2023-07-10 17:14:25 -04:00
0859d50942 Merge pull request #58 from jmorganca/generate-errors (return error in generate response) 2023-07-10 14:03:47 -07:00
66bbf05918 start server in both dev and when packaged 2023-07-10 13:46:31 -07:00
edba935d67 return error in generate response 2023-07-10 13:30:10 -07:00
2d49197b3b increase default model size to 512 2023-07-10 21:24:41 +02:00
f5e2e150b8 allow overriding default generate options 2023-07-10 20:58:02 +02:00
268e362fa7 fix binding build 2023-07-10 11:33:43 -07:00
07a4c1e3fb take all args as one prompt 2023-07-10 06:05:09 -04:00
20dae6b38f add vercel.json to silence PR comments 2023-07-09 20:11:37 -07:00
a18e6b3a40 llama: remove unnecessary std::vector 2023-07-09 10:51:45 -04:00
5fb96255dc llama: remove unused helper functions 2023-07-09 10:25:07 -04:00
b43ddd84be update README.md instructions section 2023-07-08 19:19:31 -04:00