1) Ollama natively supports frequency_penalty, so this is not necessary.
2) repeat_penalty is being added to Open WebUI in PR #10016, allowing Ollama users to pick whichever penalty method they want.
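A minimal sketch of the resulting pass-through, assuming this commit removes an old frequency_penalty → repeat_penalty remapping (function and dictionary names here are illustrative, not the actual Open WebUI code):

```python
def pass_through_penalties(openai_options: dict, ollama_options: dict) -> None:
    # Ollama's API accepts frequency_penalty (and presence_penalty) directly,
    # so the values can be copied as-is instead of being rewritten into
    # repeat_penalty.
    for key in ("frequency_penalty", "presence_penalty"):
        if key in openai_options:
            ollama_options[key] = openai_options[key]
```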
1) Ollama supports sending the system prompt as a parameter, not as an option. (See https://github.com/ollama/ollama/blob/main/docs/api.md#request-8) However, it currently sits in the options dictionary and needs to be moved to the payload dictionary.
2) After moving the system parameter from ollama_options to ollama_payload, delete it from ollama_options. This prevents Ollama from throwing a warning about an invalid option.
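A minimal sketch of the intended handling, assuming the conversion builds ollama_payload and ollama_options dictionaries as described above (the helper name is illustrative):

```python
def promote_system_prompt(ollama_payload: dict, ollama_options: dict) -> None:
    # Ollama accepts "system" as a top-level request parameter, not as an
    # entry in "options", so move it up and drop the stale options entry to
    # avoid an "invalid option" warning from Ollama.
    if "system" in ollama_options:
        ollama_payload["system"] = ollama_options["system"]
        del ollama_options["system"]
```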
1) max_tokens was being looked up in openai_payload, but it is present in openai_payload['options'], so it was never found.
2) After copying the value of max_tokens to num_predict, delete max_tokens from the dictionary. This prevents Ollama from throwing a warning about an invalid option (max_tokens).
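A sketch of the corrected lookup and mapping as described above (the helper name is illustrative):

```python
def map_max_tokens_to_num_predict(openai_payload: dict) -> None:
    # max_tokens arrives inside openai_payload["options"], not at the top
    # level, so look it up there, copy it to Ollama's num_predict, and remove
    # the original key to avoid an "invalid option (max_tokens)" warning.
    options = openai_payload.get("options", {})
    if "max_tokens" in options:
        options["num_predict"] = options["max_tokens"]
        del options["max_tokens"]
```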
1) This may be legacy code?
2) All three of these parameters (temperature, top_p, and seed) are found in openai_payload["options"], not in openai_payload, so they no longer need to be remapped.
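For illustration only, a sketch of why the remapping is redundant: these keys already travel inside the options dictionary that is handed to Ollama, so there is nothing at the top level left to copy (names are illustrative):

```python
def sampling_options(openai_payload: dict) -> dict:
    # temperature, top_p and seed already live in openai_payload["options"],
    # so they reach Ollama with the rest of the options and need no special
    # top-level remapping.
    options = openai_payload.get("options", {})
    return {k: options[k] for k in ("temperature", "top_p", "seed") if k in options}
```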
1) max_completion_tokens is being looked up in openai_payload, but it would be located in openai_payload['options'], so it is never found. (This applies to the prior two commits as well.)
2) max_completion_tokens is not sent from the frontend, only max_tokens; it does not appear in AdvancedParams.svelte.
2b) openai.py does use max_completion_tokens, but only for o1/o3 models, converting it from max_tokens.
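For reference, a sketch of the kind of conversion described for openai.py (the exact model check there may differ; this is an approximation):

```python
def adapt_token_limit_for_o_series(payload: dict, model: str) -> None:
    # o1/o3 models expect max_completion_tokens rather than max_tokens, so the
    # value is carried over under the new key and the old key is dropped.
    if model.startswith(("o1", "o3")) and "max_tokens" in payload:
        payload["max_completion_tokens"] = payload.pop("max_tokens")
```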
1) Ollama natively supports frequency_penalty.
2) repeat_penalty was added to Open WebUI in PR #10016 but has not been merged to main yet. Once both changes go live, Ollama users can freely choose between frequency/presence penalty and repeat penalty.
This feature allows the authentication process to redirect to a route passed in the query string, giving the /auth route a way to bring the user to an expected route instead of the main page (root).
Adds support for the Gemini API as an image generation backend. By setting the API Base URL to something like 'https://generativelanguage.googleapis.com/v1beta' and providing an API key, users should be able to start generating images using models like 'imagen-3.0-generate-002'.
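A minimal sketch of the kind of request this enables, assuming the Imagen :predict action and base64 response shape documented for the Gemini API (URL, key, and model values are placeholders matching the description above; error handling omitted):

```python
import base64
import requests

API_BASE_URL = "https://generativelanguage.googleapis.com/v1beta"
API_KEY = "YOUR_GEMINI_API_KEY"  # placeholder
MODEL = "imagen-3.0-generate-002"

def generate_images(prompt: str, count: int = 1) -> list[bytes]:
    # Imagen models on the Gemini API are called via a :predict action; the
    # generated images come back base64-encoded under "predictions".
    response = requests.post(
        f"{API_BASE_URL}/models/{MODEL}:predict",
        headers={"x-goog-api-key": API_KEY, "Content-Type": "application/json"},
        json={"instances": [{"prompt": prompt}], "parameters": {"sampleCount": count}},
        timeout=60,
    )
    response.raise_for_status()
    return [
        base64.b64decode(p["bytesBase64Encoded"])
        for p in response.json().get("predictions", [])
    ]
```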