Fix embeddings memory corruption (#6467)

* Fix embeddings memory corruption

The patch was leading to a buffer overrun corruption.  Once removed, though, parallelism
in server.cpp led to hitting an assert because slot/seq IDs could be >= the token count.  To
work around this, only slot 0 is used for embeddings.  (A short sketch of the underlying
constraint follows after the commit notes.)

* Fix embed integration test assumption

The token eval count has changed with recent llama.cpp bumps (0.3.5+).
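
For context, here is a rough illustration of the constraint the commit message describes. This is not llama.cpp's actual code; the buffer layout and the function name write_pooled_row are hypothetical, assuming (as the commit implies) that a pooled-embedding scratch buffer holds one row per token in the batch but is indexed by the sequence/slot id, so a slot id at or above the token count writes past the end of the buffer.

// Illustrative sketch only (not llama.cpp source): why a pooled-embedding pass
// breaks when the sequence/slot id reaches the batch token count. The buffer is
// assumed to hold n_tokens rows, but the row is selected by seq_id, so
// seq_id >= n_tokens overruns it (the corruption the removed patch was masking,
// and what the assert now catches).
#include <cassert>
#include <cstdint>
#include <vector>

void write_pooled_row(std::vector<float> &pooled,   // assumed size: n_tokens * n_embd
                      int32_t n_tokens, int32_t n_embd,
                      int32_t seq_id, const float *src_row) {
    // Mirrors the invariant the server must respect: slot/seq id < token count.
    assert(seq_id < n_tokens && "seq_id (slot id) must be < n_tokens for pooled embeddings");
    for (int32_t i = 0; i < n_embd; ++i) {
        pooled[(size_t) seq_id * n_embd + i] = src_row[i];
    }
}

Pinning embedding tasks to slot 0 keeps the slot id at 0, which trivially satisfies this bound for any non-empty prompt.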
Daniel Hiltgen
2024-08-22 14:51:42 -07:00
committed by GitHub
parent 6bd8a4b0a1
commit 90ca84172c
4 changed files with 16 additions and 65 deletions

@@ -1429,7 +1429,13 @@ struct llama_server_context
         switch (task.type)
         {
             case TASK_TYPE_COMPLETION: {
-                server_slot *slot = prefix_slot(task.data["prompt"]);
+                server_slot *slot = nullptr;
+                if (task.embedding_mode) {
+                    // Embedding seq_id (aka slot id) must always be <= token length, so always use slot 0
+                    slot = slots[0].available() ? &slots[0] : nullptr;
+                } else {
+                    slot = prefix_slot(task.data["prompt"]);
+                }
                 if (slot == nullptr)
                 {
                     // if no slot is available, we defer this task for processing later