Extends the LLM chat proposal with support for configurable API
providers. The design covers:

**Provider Support:**
- OpenAI (GPT-4, GPT-3.5)
- Anthropic (Claude 3 Opus/Sonnet/Haiku)
- OpenRouter (unified multi-provider access)
- Ollama (local models - Llama, Mistral, etc.)
- Custom OpenAI-compatible endpoints

**Architecture:**
- Provider adapter pattern (similar to chat protocol adapters)
- Base LLMProviderAdapter interface
- Streaming abstraction across different APIs
- Unified message format conversion
- Cost tracking and token usage per provider
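
The adapter pattern above can be sketched as a base interface plus a format-conversion helper. All names here (`LLMProviderAdapter`, `ChatMessage`, `StreamChunk`, `toAnthropicRequest`) are illustrative assumptions, not final API:

```typescript
/** Unified message format shared across providers (assumed shape). */
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface TokenUsage {
  inputTokens: number;
  outputTokens: number;
}

/** One streamed chunk, normalized across provider APIs. */
interface StreamChunk {
  delta: string;      // incremental text
  usage?: TokenUsage; // typically present only on the final chunk
}

/** Base adapter each provider would implement. */
interface LLMProviderAdapter {
  readonly id: string; // e.g. "openai", "anthropic", "ollama"
  listModels(): Promise<string[]>;
  testConnection(): Promise<boolean>;
  streamChat(
    model: string,
    messages: ChatMessage[],
  ): AsyncIterable<StreamChunk>;
}

/**
 * Example of unified-format conversion: Anthropic's API takes the
 * system prompt separately from the conversation turns, so the
 * adapter splits the unified message list accordingly.
 */
function toAnthropicRequest(messages: ChatMessage[]) {
  const system = messages
    .filter((m) => m.role === "system")
    .map((m) => m.content)
    .join("\n");
  const turns = messages
    .filter((m) => m.role !== "system")
    .map((m) => ({ role: m.role, content: m.content }));
  return { system, messages: turns };
}
```

Each provider's raw stream gets wrapped into the same `AsyncIterable<StreamChunk>` shape, so the chat view never branches on provider.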

**Features:**
- Provider selection dropdown in config panel
- API key management with basic encryption
- Model list fetching per provider
- Connection testing with visual feedback
- Cost calculation and display (provider-specific pricing)
- Base URL override for custom endpoints
- Local model support (Ollama - zero cost, privacy)
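
The cost calculation and display could look like the following sketch. The per-model rates are placeholders, not current pricing; a real implementation would keep a maintained, per-model pricing table:

```typescript
interface TokenUsage {
  inputTokens: number;
  outputTokens: number;
}

/** USD per 1M tokens, keyed by model. Illustrative numbers only. */
const PRICING: Record<string, { inUSD: number; outUSD: number }> = {
  "gpt-4": { inUSD: 30, outUSD: 60 },
  "claude-3-haiku": { inUSD: 0.25, outUSD: 1.25 },
  "ollama/llama3": { inUSD: 0, outUSD: 0 }, // local models: zero cost
};

function estimateCostUSD(model: string, usage: TokenUsage): number {
  const p = PRICING[model];
  if (!p) return 0; // unknown model: no estimate rather than a wrong one
  return (
    (usage.inputTokens * p.inUSD + usage.outputTokens * p.outUSD) / 1_000_000
  );
}

/** Format for display in the config panel. */
function formatCost(usd: number): string {
  return usd === 0 ? "free" : `$${usd.toFixed(4)}`;
}
```

Keeping the table keyed by model rather than by provider lets OpenRouter (which fronts many models) reuse the same lookup.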

**Implementation:**
- Complete TypeScript interfaces and types
- Full provider implementations (OpenAI, Anthropic, Ollama)
- Provider registry for dynamic loading
- Enhanced config panel component
- Secure API key storage utilities
- Cost tracking and formatting helpers
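
A minimal sketch of the provider registry for dynamic loading; the adapter shape and names are assumptions based on the design above:

```typescript
/** Minimal adapter surface the registry needs (assumed shape). */
interface ProviderAdapter {
  readonly id: string;
  listModels(): Promise<string[]>;
}

class ProviderRegistry {
  private adapters = new Map<string, ProviderAdapter>();

  register(adapter: ProviderAdapter): void {
    if (this.adapters.has(adapter.id)) {
      throw new Error(`provider already registered: ${adapter.id}`);
    }
    this.adapters.set(adapter.id, adapter);
  }

  get(id: string): ProviderAdapter {
    const adapter = this.adapters.get(id);
    if (!adapter) throw new Error(`unknown provider: ${id}`);
    return adapter;
  }

  /** Registered provider ids, e.g. to feed the selection dropdown. */
  ids(): string[] {
    return [...this.adapters.keys()];
  }
}
```

Adapters register themselves at startup, so adding a new provider touches only its own module plus one registration call.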

This makes the LLM chat far more flexible than a single-provider
implementation: users can switch between cloud providers or run
local models, depending on their needs.

Created a comprehensive technical proposal for implementing LLM chat
functionality in Grimoire. The proposal covers:
- Architecture design (separate from Nostr chat)
- Reusable components from existing ChatViewer
- Type system for LLM messages and conversations
- Streaming implementation with Anthropic SDK
- Component breakdown and implementation phases
- Benefits of Option B approach (separate but strategic extraction)

This addresses the review of the chat interface's genericness and
proposes a pragmatic path forward for supporting LLM conversations
alongside Nostr chat protocols.
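
The streaming design above can be illustrated with a small consumer that works for any provider stream. The real Anthropic adapter would wrap the SDK's message stream in this same `AsyncIterable` shape; `mockStream` and `accumulate` are hypothetical names for illustration:

```typescript
interface StreamChunk {
  delta: string; // incremental text
}

/** Stand-in for a provider stream, used here instead of a live API. */
async function* mockStream(parts: string[]): AsyncIterable<StreamChunk> {
  for (const part of parts) {
    yield { delta: part };
  }
}

/**
 * Consume any provider stream, invoking onDelta with the text so far
 * after each chunk (for incremental rendering) and resolving with the
 * full response. The same path serves cloud and local models.
 */
async function accumulate(
  stream: AsyncIterable<StreamChunk>,
  onDelta?: (textSoFar: string) => void,
): Promise<string> {
  let full = "";
  for await (const chunk of stream) {
    full += chunk.delta;
    onDelta?.(full);
  }
  return full;
}
```

Because the chat view only depends on `AsyncIterable<StreamChunk>`, swapping Anthropic for Ollama (or a custom OpenAI-compatible endpoint) requires no UI changes.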