3b49315f97
retry on unauthorized chunk push
...
The token issued for authorized requests has a lifetime of 1h. If an
upload exceeds 1h, a chunk push will fail since the token is created on
a "start upload" request.
This replaces the Pipe with a SectionReader, which is simpler and
implements Seek, a requirement for makeRequestWithRetry. This is
slightly worse than using a Pipe since the progress update is tied
directly to the chunk size instead of being controlled separately.
2023-08-18 11:23:47 -07:00
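A rough sketch in Go of the approach this commit describes: an io.SectionReader over each chunk implements Seek, so the request body can be rewound and the chunk re-pushed with a fresh token after a 401. The helper names and wiring here (pushChunk, the token callback) are hypothetical, not the actual makeRequestWithRetry code.
```go
// Minimal sketch (hypothetical helper, not the actual ollama code) of a
// retryable chunk push: io.SectionReader implements Seek, so the chunk can
// be rewound and resent after the 1h token expires mid-upload.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// pushChunk uploads one chunk of blob and retries once with a fresh token
// if the registry answers 401 Unauthorized.
func pushChunk(url string, blob io.ReaderAt, off, size int64, token func() (string, error)) error {
	chunk := io.NewSectionReader(blob, off, size) // seekable view of one chunk

	for attempt := 0; attempt < 2; attempt++ {
		if _, err := chunk.Seek(0, io.SeekStart); err != nil {
			return err
		}
		req, err := http.NewRequest(http.MethodPatch, url, chunk)
		if err != nil {
			return err
		}
		t, err := token() // re-request the short-lived token on each attempt
		if err != nil {
			return err
		}
		req.Header.Set("Authorization", "Bearer "+t)
		req.ContentLength = size

		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			return err
		}
		resp.Body.Close()

		switch {
		case resp.StatusCode == http.StatusUnauthorized:
			continue // token expired: seek back and retry with a new token
		case resp.StatusCode >= 400:
			return fmt.Errorf("chunk push failed: %s", resp.Status)
		default:
			return nil
		}
	}
	return fmt.Errorf("chunk push still unauthorized after retry")
}

func main() {
	f, err := os.Open("blob.bin") // placeholder blob for the sketch
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	_ = pushChunk // wiring the full upload loop is beyond this sketch
}
```
Because the SectionReader covers exactly one chunk, progress is reported in chunk-sized steps, which is the trade-off versus the Pipe noted in the commit message.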
5ca05c2e88
fix ModelType()
2023-08-18 11:23:38 -07:00
7eda70f23b
copy metadata from source
2023-08-17 21:55:25 -07:00
3d79b414d3
app: package ggml-metal.metal from correct directory
2023-08-17 23:55:45 -04:00
c84bbf1dd6
Merge pull request #376 from jmorganca/mxyng/from-map-ignore-nil
...
ignore nil map values
v0.0.15
2023-08-17 15:57:12 -07:00
f723bf0879
ignore nil map values
2023-08-17 15:50:46 -07:00
cbf725a9ba
Merge pull request #375 from jmorganca/mxyng/fix-push
...
fix push manifest
2023-08-17 15:33:31 -07:00
086449b6c7
fmt
2023-08-17 15:32:31 -07:00
3cbc6a5c01
fix push manifest
2023-08-17 15:28:12 -07:00
54bb49a502
parse protocol for OLLAMA_HOST
2023-08-17 18:20:44 -04:00
cabaada956
Merge pull request #372 from jmorganca/mxyng/string-types
...
model and file type as strings
2023-08-17 15:10:59 -07:00
a894cc792d
model and file type as strings
2023-08-17 12:08:04 -07:00
519f4d98ef
add embed docs for modelfile
2023-08-17 13:37:42 -04:00
b963a83559
Merge pull request #364 from jmorganca/chunked-uploads
...
reimplement chunked uploads
2023-08-17 09:58:51 -07:00
bf6688abe6
Merge pull request #360 from jmorganca/fix-request-copies
...
Fix request copies
2023-08-17 09:58:42 -07:00
6005b157c2
retry download on network errors
2023-08-17 10:31:45 -04:00
14220d9833
set the scopes correctly (#368)
2023-08-16 21:42:02 -07:00
8ca50f24f3
fix nous-hermes model file size listing in readme (#367)
...
fix nous-hermes model file size listing in readme
2023-08-16 23:42:00 -04:00
c149fc3143
Update README.md
2023-08-16 22:54:55 -04:00
afbc763dac
adding link to models directly available on ollama (#366)
...
- adding link to models directly available on ollama
- ability to push your own models to the library will come in the future
2023-08-16 22:53:27 -04:00
5dfe91be8b
reimplement chunked uploads
2023-08-16 14:50:24 -07:00
9f944c00f1
push: retry on unauthorized
2023-08-16 11:35:33 -07:00
56e87cecb1
images: remove body copies
2023-08-16 10:30:41 -07:00
5ee6116420
set default OLLAMA_HOST
to
http://localhost:11434
2023-08-16 12:22:59 -04:00
5d9a4cd251
Merge pull request #348 from jmorganca/cross-repo-mount
...
cross repo blob mount
2023-08-16 09:20:36 -07:00
0ebec07569
Merge pull request #345 from jmorganca/exit-non-zero
...
set non-zero error code on error
2023-08-16 09:20:28 -07:00
08265515b3
Merge pull request #303 from jmorganca/matt/dockerit
...
DockerIt example
2023-08-16 08:04:34 -07:00
67e593e355
cmd: support OLLAMA_CLIENT_HOST environment variable (#262)
...
* cmd: support OLLAMA_HOST environment variable
This commit adds support for the OLLAMA_HOST environment
variable. This variable can be used to specify the host to which
the client should connect. This is useful when the client is
running somewhere other than the host where the server is running.
The new api.FromEnv function is used to configure clients from the
environment. Clients that want to honor the environment variable,
consistent with the Ollama CLI, can use this new function.
* Update api/client.go
Co-authored-by: Jeffrey Morgan <jmorganca@gmail.com>
* Update api/client.go
Co-authored-by: Jeffrey Morgan <jmorganca@gmail.com>
---------
Co-authored-by: Jeffrey Morgan <jmorganca@gmail.com>
2023-08-16 11:03:48 -04:00
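As a hedged sketch of the behavior these commits describe (the real api.FromEnv may differ in details), a client can read OLLAMA_HOST, fall back to the default http://localhost:11434, and add a protocol when none is given; the hostFromEnv helper below is hypothetical.
```go
// Hypothetical illustration of honoring OLLAMA_HOST; not the actual
// api.FromEnv implementation.
package main

import (
	"fmt"
	"net/url"
	"os"
	"strings"
)

// hostFromEnv resolves the server address from the environment.
func hostFromEnv() (*url.URL, error) {
	host := os.Getenv("OLLAMA_HOST")
	if host == "" {
		host = "http://localhost:11434" // default host and port
	}
	if !strings.Contains(host, "://") {
		host = "http://" + host // assume http when no protocol is specified
	}
	return url.Parse(host)
}

func main() {
	u, err := hostFromEnv()
	if err != nil {
		fmt.Fprintln(os.Stderr, "invalid OLLAMA_HOST:", err)
		os.Exit(1)
	}
	fmt.Println("client will connect to", u.String())
}
```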
d15c7622b9
Update orca
to orca-mini
in README.md
2023-08-15 21:10:28 -04:00
1deb35ca64
use loaded llm for generating model file embeddings
2023-08-15 16:12:02 -03:00
e2de886831
do not regenerate embeddings
2023-08-15 16:10:22 -03:00
f0d7c2f5ea
retry download on network errors
2023-08-15 15:07:19 -03:00
12052a7624
always remove from in progress map on download
2023-08-15 13:20:32 -03:00
23e1da778d
Add context to api docs
2023-08-15 11:43:22 -03:00
326de48930
use loaded llm for embeddings
2023-08-15 10:50:54 -03:00
18f2cb0472
don't log fatal
2023-08-15 10:39:59 -03:00
53bc36d207
Update modelfile.md
2023-08-15 09:23:36 -03:00
4dcf5c3e0b
Merge pull request #349 from jmorganca/close-files
...
close open files
2023-08-14 16:15:58 -07:00
d1b2f532b9
Merge pull request #350 from jmorganca/update-llama-cpp
...
update llama.cpp
2023-08-14 16:15:51 -07:00
e26085b921
close open files
2023-08-14 16:08:06 -07:00
f7b613332c
update llama.cpp
2023-08-14 15:47:00 -07:00
f594c8eb91
cross repo mount
2023-08-14 15:07:35 -07:00
76b85bc0e9
set non-zero error code on error
2023-08-14 14:09:58 -07:00
af98a1773f
update python example
2023-08-14 16:38:44 -03:00
9ae9a89883
Update modelfile.md
2023-08-14 16:26:53 -03:00
648f0974c6
python example
2023-08-14 15:27:13 -03:00
fc5230dffa
Add context to api docs
2023-08-14 15:23:24 -03:00
2ab20095b3
log embedding eval timing
2023-08-14 12:15:55 -04:00
f020e1d519
always remove from in progress map on download
2023-08-14 13:09:20 -03:00
4b2d366c37
Update llama.go
2023-08-14 12:55:50 -03:00