# llm
Multi-provider LLM client layer supporting OpenRouter, Azure OpenAI, and direct OpenAI, Google Gemini, and Anthropic Claude chat APIs, with automatic failover between providers. Includes model catalog management, pricing metadata, and preference-based model selection.
Used by cmd/worker, cmd/ui, cmd/run, cmd/seed-registry.
## Usage
```go
import "cruvero/internal/llm"
```
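A minimal usage sketch follows; the constructor name, method signature, and field names here are assumptions based on the type descriptions below (the real setup lives in client.go and may take provider credentials and options differently):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"cruvero/internal/llm"
)

func main() {
	// NewMultiClient is a hypothetical constructor; see client.go for how a
	// MultiClient is actually configured with provider credentials.
	var client llm.Client = llm.NewMultiClient()

	// Chat sends a conversation and returns the reply plus token usage.
	result, err := client.Chat(context.Background(), []llm.Message{
		{Role: "user", Content: "Summarize this document."},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result.Content, result.Usage.TotalTokens)
}
```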
## Key Types / Interfaces
| Type | Source | Description |
|---|---|---|
| Client | client.go | Interface: Chat and ChatWithModel methods |
| MultiClient | client.go | Multi-provider client routing to OpenRouter, Azure, OpenAI, Google, or Anthropic |
| Message | openrouter.go | Chat message with role and content |
| ChatResult | openrouter.go | LLM response with content and usage metrics |
| Usage | openrouter.go | Token counts (prompt, completion, total) and cost |
| FailoverChain | failover.go | Provider failover chain with health tracking and recovery |
| FailoverEvent | failover.go | Event emitted during provider failover |
| FailoverOptions | failover.go | Failover configuration (threshold, recovery interval, latency) |
| ModelInfo | models.go | Model metadata: architecture, pricing, context size |
| ModelStore | models.go | Interface for persisting model information |
| PostgresModelStore | models.go | PostgreSQL-backed model store |
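The table above only names the types; one plausible shape for the Client interface and its supporting structs, inferred from those descriptions (field names and signatures are illustrative, not the actual definitions), is:

```go
package llm

import "context"

// Message is a single chat turn with a role and content.
type Message struct {
	Role    string // e.g. "system", "user", "assistant"
	Content string
}

// Usage carries the token counts and cost described above.
type Usage struct {
	PromptTokens     int
	CompletionTokens int
	TotalTokens      int
	Cost             float64
}

// ChatResult is the model's reply plus usage metrics.
type ChatResult struct {
	Content string
	Usage   Usage
}

// Client is implemented by MultiClient. ChatWithModel presumably pins a
// specific model rather than relying on preference-based selection.
type Client interface {
	Chat(ctx context.Context, messages []Message) (*ChatResult, error)
	ChatWithModel(ctx context.Context, model string, messages []Message) (*ChatResult, error)
}
```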
## Key Files
| File | Purpose |
|---|---|
| client.go | Client interface and MultiClient (provider routing) |
| openrouter.go | OpenRouter API client |
| azure.go | Azure OpenAI API client |
| openai_chat.go | Direct OpenAI Chat Completions API client |
| google.go | Direct Google Gemini GenerateContent API client |
| anthropic.go | Direct Anthropic Claude Messages API client |
| failover.go | Failover chain with circuit breaker and recovery |
| models.go | Model catalog and preference management |
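failover.go is described as a failover chain with a circuit breaker and recovery. The package's actual implementation is not reproduced here; the standalone sketch below only illustrates that general pattern (a failure threshold and a recovery interval), with assumed names that are not part of this package:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// breaker is a minimal circuit breaker of the kind failover.go is described
// as using: after threshold consecutive failures a provider is skipped, and
// it becomes eligible again once recoveryInterval has elapsed.
type breaker struct {
	failures         int
	threshold        int
	lastFailure      time.Time
	recoveryInterval time.Duration
}

func (b *breaker) available() bool {
	if b.failures < b.threshold {
		return true
	}
	return time.Since(b.lastFailure) >= b.recoveryInterval
}

func (b *breaker) record(err error) {
	if err == nil {
		b.failures = 0
		return
	}
	b.failures++
	b.lastFailure = time.Now()
}

func main() {
	b := &breaker{threshold: 3, recoveryInterval: 2 * time.Minute}
	call := func() error { return errors.New("provider unavailable") }

	for i := 0; i < 4; i++ {
		if !b.available() {
			fmt.Println("circuit open: skipping provider, trying next in chain")
			continue
		}
		b.record(call())
	}
}
```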