Open Source Gen-AI + Enterprise Search.
Onyx (formerly Danswer) is the AI platform connected to your company's docs, apps, and people. Onyx provides a feature-rich Chat interface and plugs into any LLM of your choice. Keep knowledge and access controls synced across over 40 connectors like Google Drive, Slack, Confluence, Salesforce, etc. Create custom AI agents with unique prompts, knowledge, and actions that the agents can take. Onyx can be deployed securely anywhere and at any scale - on a laptop, on-premises, or in the cloud.
Feature Highlights
Deep research over your team's knowledge:
Use Onyx as a secure AI Chat with any LLM:
Easily set up connectors to your apps:
Access Onyx where your team already works:
Deployment
To try it out for free and get started in seconds, check out Onyx Cloud.
Onyx can also be run locally (even on a laptop) or deployed on a virtual machine with a single `docker compose` command. Check out our docs to learn more.
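The single-command deployment described above can be sketched roughly as follows. The repository URL is the official one, but the directory layout and compose invocation are assumptions based on a typical Docker Compose setup; refer to the docs linked above for the exact, up-to-date commands.

```shell
# Clone the repository (the deployment directory path below is an
# assumption; see the official docs for the current instructions).
git clone https://github.com/onyx-dot-app/onyx.git
cd onyx/deployment/docker_compose

# Bring up the full stack in the background with a single command.
docker compose up -d

# Follow the logs to confirm the services start cleanly.
docker compose logs -f
```

Running with `-d` detaches the containers so the stack keeps serving after the terminal closes; `docker compose down` tears it back down.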
We also have built-in support for high-availability/scalable deployment on Kubernetes. References here.
🔍 Other Notable Benefits of Onyx
- Custom deep learning models used at indexing and inference time, available only through Onyx, which learn from user feedback.
- Flexible security features like SSO (OIDC/SAML/OAuth2), RBAC, encryption of credentials, etc.
- Knowledge curation features like document-sets, query history, usage analytics, etc.
- Scalable deployment options tested up to many tens of thousands of users and hundreds of millions of documents.
🚧 Roadmap
- New methods in information retrieval (StructRAG, LightGraphRAG, etc.)
- Personalized Search
- Organizational understanding and ability to locate and suggest experts from your team.
- Code Search
- SQL and Structured Query Languages
🔌 Connectors
Keep knowledge and access in sync across 40+ connectors:
- Google Drive
- Confluence
- Slack
- Gmail
- Salesforce
- Microsoft SharePoint
- GitHub
- Jira
- Zendesk
- Gong
- Microsoft Teams
- Dropbox
- Local Files
- Websites
- And more ...
See the full list here.
📚 Licensing
There are two editions of Onyx:
- Onyx Community Edition (CE) is available freely under the MIT Expat license. Simply follow the Deployment guide above.
- Onyx Enterprise Edition (EE) includes extra features that are primarily useful for larger organizations. For feature details, check out our website.
To try the Onyx Enterprise Edition:
- Check out Onyx Cloud.
- For self-hosting the Enterprise Edition, contact us at founders@onyx.app or book a call with us on our Cal.
💡 Contributing
Looking to contribute? Please check out the Contribution Guide for more details.