Add multi-provider LLM chat design document

Extends the LLM chat proposal with support for configurable API
providers. Design covers:

**Provider Support:**
- OpenAI (GPT-4, GPT-3.5)
- Anthropic (Claude 3 Opus/Sonnet/Haiku)
- OpenRouter (unified multi-provider access)
- Ollama (local models - Llama, Mistral, etc.)
- Custom OpenAI-compatible endpoints
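The five providers above could plausibly share one configuration shape; a minimal sketch, where the type and field names (`ProviderId`, `baseUrl`, etc.) are illustrative assumptions rather than the committed design:

```typescript
// Hypothetical config shape covering all five providers.
type ProviderId = "openai" | "anthropic" | "openrouter" | "ollama" | "custom";

interface ProviderConfig {
  id: ProviderId;
  apiKey?: string;  // optional: Ollama runs locally and needs no key
  baseUrl?: string; // override for custom OpenAI-compatible endpoints
  model: string;    // e.g. "gpt-4", "llama3" (example names)
}

// A local provider needs no API key; a cloud provider does.
const local: ProviderConfig = { id: "ollama", model: "llama3" };
const cloud: ProviderConfig = {
  id: "openai",
  apiKey: "placeholder-key", // illustrative placeholder only
  model: "gpt-4",
};
```

Keeping the key and base URL optional lets the same type describe both zero-config local models and custom endpoints.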

**Architecture:**
- Provider adapter pattern (similar to chat protocol adapters)
- Base LLMProviderAdapter interface
- Streaming abstraction across different APIs
- Unified message format conversion
- Cost tracking and token usage per provider
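The adapter interface, streaming abstraction, and usage reporting could be sketched as follows; all names here (`LLMProviderAdapter`, `streamChat`, `lastUsage`) are assumptions about the design, and the mock adapter exists only to illustrate the contract:

```typescript
// Unified message format shared by every provider adapter.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface TokenUsage {
  inputTokens: number;
  outputTokens: number;
}

// Base adapter interface: each provider translates its native wire
// format (SSE, NDJSON, ...) into a plain async stream of text chunks.
interface LLMProviderAdapter {
  readonly name: string;
  listModels(): Promise<string[]>;
  streamChat(model: string, messages: ChatMessage[]): AsyncIterable<string>;
  // Token usage from the last completed request, for cost tracking.
  lastUsage(): TokenUsage | null;
}

// Minimal in-memory adapter used to demonstrate the contract.
class MockAdapter implements LLMProviderAdapter {
  readonly name = "mock";
  private usage: TokenUsage | null = null;
  async listModels(): Promise<string[]> {
    return ["mock-small"];
  }
  async *streamChat(_model: string, messages: ChatMessage[]) {
    for (const chunk of ["Hello, ", "world!"]) yield chunk;
    this.usage = { inputTokens: messages.length * 4, outputTokens: 2 };
  }
  lastUsage(): TokenUsage | null {
    return this.usage;
  }
}

// Consumers iterate chunks without knowing the provider's API shape.
async function demo(): Promise<string> {
  const adapter = new MockAdapter();
  let text = "";
  const messages: ChatMessage[] = [{ role: "user", content: "Hi" }];
  for (const _ of []) void _; // no-op; keeps demo linear
  for await (const chunk of adapter.streamChat("mock-small", messages)) {
    text += chunk;
  }
  return text;
}
```

Because the UI only ever consumes `AsyncIterable<string>`, swapping providers never touches the rendering code.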

**Features:**
- Provider selection dropdown in config panel
- API key management with basic encryption
- Model list fetching per provider
- Connection testing with visual feedback
- Cost calculation and display (provider-specific pricing)
- Base URL override for custom endpoints
- Local model support (Ollama - zero cost, privacy)
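Provider-specific cost calculation could look like the sketch below. The per-million-token rates are illustrative placeholders, not real provider prices, and the function names are assumptions:

```typescript
// Illustrative pricing table -- NOT real provider rates.
const PRICING: Record<string, { inPerM: number; outPerM: number }> = {
  "gpt-4": { inPerM: 30, outPerM: 60 },   // example USD per 1M tokens
  "ollama": { inPerM: 0, outPerM: 0 },    // local models cost nothing
};

function estimateCostUSD(
  model: string,
  inputTokens: number,
  outputTokens: number
): number {
  const p = PRICING[model];
  if (!p) return 0; // unknown model: report zero rather than guess
  return (inputTokens * p.inPerM + outputTokens * p.outPerM) / 1_000_000;
}

function formatCost(usd: number): string {
  return usd === 0 ? "free" : `$${usd.toFixed(4)}`;
}
```

Showing "free" for Ollama makes the zero-cost/privacy trade-off of local models visible right in the UI.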

**Implementation:**
- Complete TypeScript interfaces and types
- Full provider implementations (OpenAI, Anthropic, Ollama)
- Provider registry for dynamic loading
- Enhanced config panel component
- API key secure storage utilities
- Cost tracking and formatting helpers
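The provider registry for dynamic loading could be sketched like this; the registration keys and factory signature are assumptions, not the committed API:

```typescript
// Factory producing an adapter instance from user config.
type AdapterFactory = (config: { apiKey?: string; baseUrl?: string }) => object;

class ProviderRegistry {
  private factories = new Map<string, AdapterFactory>();

  register(id: string, factory: AdapterFactory): void {
    this.factories.set(id, factory);
  }

  create(id: string, config: { apiKey?: string; baseUrl?: string }): object {
    const factory = this.factories.get(id);
    if (!factory) throw new Error(`Unknown provider: ${id}`);
    return factory(config);
  }

  // Drives the provider selection dropdown in the config panel.
  available(): string[] {
    return [...this.factories.keys()];
  }
}

const registry = new ProviderRegistry();
registry.register("ollama", (cfg) => ({ kind: "ollama", ...cfg }));
registry.register("openai", (cfg) => ({ kind: "openai", ...cfg }));
```

Registering factories rather than instances defers construction until the user has supplied a key and base URL, which also keeps connection testing cheap.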

This makes the LLM chat considerably more flexible than a
single-provider implementation: users can switch between cloud
providers or run local models, depending on their cost and privacy
needs.
Author: Claude
Date:   2026-01-15 21:56:04 +00:00
parent 1cb6fdaae6
commit b4df37574b
