mirror of https://github.com/ollama/ollama.git
The encoder cache needs to know the position of images in the input stream so that it knows when to delete them. Previously images didn't have a position, so we implied one by breaking batches before an image and then assuming the image was in the first position. However, multimodal objects are now given explicit positions in the input stream, so we can use that instead.

Breaking batches was also a way to simulate a cross attention mask for mllama. However, given that it only supports a single sequence and a single image, this mask doesn't serve any real purpose. Removing the batch break does not appear to affect the quality of the output.

Most of this is simply moving the input data structures to a new package to avoid import cycles.
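As a rough sketch of the idea (not code from this change), the explicit position of an image can be read straight out of the batch through the types defined in the file below; the import path and the eviction helper are assumptions used only for illustration:

package sketch

import "github.com/ollama/ollama/model/input" // assumed import path for the new package

// imagePosition returns the explicit stream position of a multimodal
// element, found by following its Index into the Positions slice.
func imagePosition(opts input.Options, mm input.MultimodalIndex) int32 {
	return opts.Positions[mm.Index]
}

// canEvict reports whether an image stored for position pos is no longer
// needed once everything before keepFrom has been processed; with explicit
// positions this decision no longer requires breaking the batch.
func canEvict(pos, keepFrom int32) bool {
	return pos < keepFrom
}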
package input

// Input represents one token in the input stream
type Input struct {
	// Token is a single element of text.
	Token int32

	// Multimodal is opaque data representing a non-text
	// element such as an image (or part of one if the image
	// can be processed in pieces). It may be either together
	// with Token or on its own.
	Multimodal any

	// MultimodalHash is a unique representation of the data
	// stored in Multimodal, used for caching and comparing
	// equality.
	MultimodalHash uint64
}

// MultimodalIndex is a multimodal element (such as an image)
// together with an index into the slice of Inputs with the
// corresponding token. Note that the index is not the same
// as the position - to find that use the index with the
// Positions slice.
type MultimodalIndex struct {
	Index      int
	Multimodal any
}

// Options contains the inputs for a model forward pass
type Options struct {
	Inputs     []int32
	Multimodal []MultimodalIndex
	Positions  []int32
	Sequences  []int
	Outputs    []int32
}
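A hedged usage example follows; the token IDs, the embedding value, and the import path are assumptions, not taken from the repository. It shows a batch where an image shares slot 1 with a token, and how that image's position is recovered through Index rather than through a batch break:

package main

import (
	"fmt"

	"github.com/ollama/ollama/model/input" // assumed import path
)

func main() {
	// Hypothetical image data occupying the same slot as token 11.
	var imageEmbedding any = []float32{0.1, 0.2, 0.3}

	opts := input.Options{
		Inputs:     []int32{10, 11, 12},
		Multimodal: []input.MultimodalIndex{{Index: 1, Multimodal: imageEmbedding}},
		Positions:  []int32{7, 8, 9}, // absolute positions within the sequence
		Sequences:  []int{0, 0, 0},
		Outputs:    []int32{2}, // only the last slot produces output
	}

	for _, mm := range opts.Multimodal {
		// Index selects the slot; Positions gives the actual position.
		fmt.Println("image at position", opts.Positions[mm.Index])
	}
}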