At least with the ROCm libraries, it's possible for the library to be present while zero GPUs are detected. This fix avoids a divide-by-zero bug in llm.go when we try to calculate GPU memory with zero GPUs.
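A minimal sketch of the kind of guard this implies, assuming a hypothetical helper that splits total VRAM across detected GPUs (the real function and variable names in llm.go may differ):

```go
package main

import "fmt"

// avgVRAMPerGPU is a hypothetical helper illustrating the guard: when the
// ROCm library is present but reports zero GPUs, dividing total VRAM by the
// GPU count would panic, so bail out with 0 instead of dividing.
func avgVRAMPerGPU(totalVRAM uint64, gpuCount int) uint64 {
	if gpuCount == 0 {
		return 0
	}
	return totalVRAM / uint64(gpuCount)
}

func main() {
	// ROCm installed but no GPUs attached: the guard avoids a divide by zero.
	fmt.Println(avgVRAMPerGPU(0, 0)) // 0
	// Two GPUs sharing 16 GiB of reported VRAM.
	fmt.Println(avgVRAMPerGPU(16<<30, 2))
}
```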