- Add proper Anthropic model aliases (claude-opus-4-1, claude-sonnet-4-0, etc.)
- Deduplicate models when same ID appears in both models.dev and OpenRouter
- models.dev takes priority over OpenRouter for duplicate IDs
- Fix test to use correct claude-3-5-haiku-latest alias
- Reduce Anthropic models from 11 to 10 (duplicate removed)
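
A minimal sketch of the dedup pass described above; the helper inputs and the trimmed-down `Model` shape are assumptions, not the actual generator code:

```typescript
// Sketch only: assumes both sources were normalized to a shared Model shape
// with a stable `id`; the real generator's shape may differ.
interface Model {
  id: string;
  provider: string;
  name: string;
}

function mergeModels(modelsDev: Model[], openRouter: Model[]): Model[] {
  const byId = new Map<string, Model>();
  // Insert OpenRouter entries first so models.dev can overwrite them.
  for (const m of openRouter) byId.set(m.id, m);
  // models.dev wins on duplicate IDs, matching the priority rule above.
  for (const m of modelsDev) byId.set(m.id, m);
  return [...byId.values()];
}
```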
- Generated from OpenRouter API and models.dev
- Includes models from Google, OpenAI, Anthropic, xAI, Groq, Cerebras, OpenRouter
- Provides type-safe model selection with autocomplete
- Add Model interface to types.ts with normalized structure
- Create type-safe generic createLLM function with provider-specific model constraints
- Generate models from OpenRouter API and models.dev data
- Strip provider prefixes for direct providers (google, openai, anthropic, xai)
- Keep full model IDs for OpenRouter-proxied models
- Clean separation: types.ts (Model interface), models.ts (factory logic), models.generated.ts (data)
- Remove old model scripts and unused dependencies
- Rename GeminiLLM to GoogleLLM for consistency
- Add tests for new providers (xAI, Groq, Cerebras, OpenRouter)
- Support 181 tool-capable models across 7 providers with full type safety
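
The provider-specific model constraints plausibly look like this at the type level; the model ID unions below are tiny illustrative stand-ins for the generated data:

```typescript
// Sketch: each provider key maps to a union of its known model IDs, so
// createLLM("anthropic", ...) only autocompletes Anthropic models.
type ProviderModels = {
  anthropic: "claude-opus-4-1" | "claude-sonnet-4-0" | "claude-3-5-haiku-latest";
  openai: "gpt-4o" | "gpt-4o-mini";
  // OpenRouter-proxied models keep their full IDs, per the commit above.
  openrouter: "meta-llama/llama-3.1-70b-instruct";
};

interface LLM {
  complete(prompt: string): Promise<string>;
}

function createLLM<P extends keyof ProviderModels>(
  provider: P,
  model: ProviderModels[P],
): LLM {
  // Stand-in body; the real factory dispatches to a provider implementation.
  return { complete: async () => `${provider}:${model}` };
}

const llm = createLLM("anthropic", "claude-sonnet-4-0"); // typo in the ID = compile error
```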
- Switch from Node.js test runner to Vitest for better DX
- Add test suites for Grok, Groq, Cerebras, and OpenRouter providers
- Add Ollama test suite with automatic server lifecycle management
- Include thinking mode and multi-turn tests for all providers
- Remove example files (consolidated into test suite)
- Add VS Code test configuration
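
A sketch of what the automatic Ollama server lifecycle could look like in Vitest; spawning `ollama serve` and polling the default port are assumptions about the approach, not the actual suite:

```typescript
import { spawn, type ChildProcess } from "node:child_process";
import { afterAll, beforeAll, describe, expect, it } from "vitest";

let server: ChildProcess;

beforeAll(async () => {
  // Start a local Ollama server for the duration of the suite.
  server = spawn("ollama", ["serve"], { stdio: "ignore" });
  // Poll until the HTTP endpoint answers (default port 11434).
  for (let i = 0; i < 50; i++) {
    try {
      await fetch("http://localhost:11434");
      return;
    } catch {
      await new Promise((resolve) => setTimeout(resolve, 200));
    }
  }
  throw new Error("Ollama server did not become ready");
}, 30_000);

afterAll(() => {
  server.kill();
});

describe("ollama", () => {
  it("answers on the default port", async () => {
    const res = await fetch("http://localhost:11434");
    expect(res.ok).toBe(true);
  });
});
```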
- Generate models.generated.ts from models.json with proper types
- Categorize providers: OpenAI (Responses), OpenAI-compatible, Anthropic, Gemini
- Create createLLM() factory with TypeScript overloads for type safety
- Auto-detect base URLs and environment variables for providers
- Support 353 models across 39 providers with full autocompletion
- Exclude generated file from git (rebuilt on npm build)
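
The base URL and environment variable auto-detection plausibly reduces to a lookup table; the entries below reflect each provider's publicly documented defaults and may differ from the generated data:

```typescript
// Sketch: provider -> { baseURL, apiKeyEnv }.
const providerDefaults = {
  groq: { baseURL: "https://api.groq.com/openai/v1", apiKeyEnv: "GROQ_API_KEY" },
  cerebras: { baseURL: "https://api.cerebras.ai/v1", apiKeyEnv: "CEREBRAS_API_KEY" },
  openrouter: { baseURL: "https://openrouter.ai/api/v1", apiKeyEnv: "OPENROUTER_API_KEY" },
  xai: { baseURL: "https://api.x.ai/v1", apiKeyEnv: "XAI_API_KEY" },
} as const;

function resolveProvider(name: keyof typeof providerDefaults) {
  const { baseURL, apiKeyEnv } = providerDefaults[name];
  const apiKey = process.env[apiKeyEnv];
  if (!apiKey) throw new Error(`Missing ${apiKeyEnv}`);
  return { baseURL, apiKey };
}
```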
- Remove catch-all [key: string]: any from ModelInfo
- Make all required fields non-optional (attachment, reasoning, etc.)
- Add proper union types for modalities (text, image, audio, video, pdf)
- Mark only cost and knowledge fields as optional
- Export ModalityInput and ModalityOutput types
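
Reconstructing the tightened interface from the bullets above (field names beyond those explicitly mentioned are guesses):

```typescript
export type ModalityInput = "text" | "image" | "audio" | "video" | "pdf";
export type ModalityOutput = "text" | "image" | "audio"; // output members assumed

export interface ModelInfo {
  id: string;
  name: string;
  attachment: boolean; // required, per the commit
  reasoning: boolean;  // required, per the commit
  toolCall: boolean;   // assumed name for the tool-support flag
  modalities: { input: ModalityInput[]; output: ModalityOutput[] };
  cost?: { input: number; output: number }; // optional, per the commit
  knowledge?: string;                       // optional knowledge cutoff
  // The catch-all [key: string]: any is gone.
}
```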
- Add models script to download latest model information
- Create models.ts module to query model capabilities
- Include models.json in package distribution
- Export utilities to check model features (reasoning, tools)
- Update build process to copy models.json to dist
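
A sketch of the capability helpers, assuming models.json follows the models.dev layout (providers keyed by ID, snake_case flags); the actual layout may differ:

```typescript
import models from "./models.json";

type AnyRecord = Record<string, any>;

export function getModel(provider: string, id: string): AnyRecord | undefined {
  return (models as AnyRecord)[provider]?.models?.[id];
}

export function supportsReasoning(provider: string, id: string): boolean {
  return getModel(provider, id)?.reasoning === true;
}

export function supportsTools(provider: string, id: string): boolean {
  return getModel(provider, id)?.tool_call === true;
}
```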
- Add examples for Cerebras, Groq, Ollama, and OpenRouter
- Update OpenAI Completions provider to handle base URL properly
- Simplify README formatting
- All examples use the same OpenAICompletionsLLM provider with different base URLs
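
Since every example shares one provider class, they reduce to the same pattern with a different base URL; the constructor options here are assumptions about the shape, not the exact signature:

```typescript
import { OpenAICompletionsLLM } from "@mariozechner/ai";

// Groq's OpenAI-compatible endpoint.
const groq = new OpenAICompletionsLLM({
  baseURL: "https://api.groq.com/openai/v1",
  apiKey: process.env.GROQ_API_KEY!,
  model: "llama-3.3-70b-versatile",
});

// Ollama speaks the same wire format locally; it ignores the API key,
// but OpenAI-style clients require one to be set.
const ollama = new OpenAICompletionsLLM({
  baseURL: "http://localhost:11434/v1",
  apiKey: "ollama",
  model: "llama3.2",
});
```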
- Replace planned features with actual working code examples
- Add clear provider comparison table
- Show real imports and usage patterns
- Include streaming, thinking, and tool calling examples
- Update supported models to match current implementation
- Add multi-turn test to verify thinking and tool calling work together
- Test thinkingSignature handling for proper multi-turn context
- Fix Gemini provider to generate base64 thinkingSignature when needed
- Handle multiple rounds of tool calls in tests (Gemini behavior)
- Make thinking tests more robust for model-dependent behavior
- All 18 tests passing across 4 providers
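
The gist of the multi-turn check, with the API surface assumed from the surrounding commits (`complete()`, `stopReason`, `thinkingSignature`); the tool definition and loose typing are for illustration only:

```typescript
import { expect, it } from "vitest";
import { createLLM } from "@mariozechner/ai"; // assumed entry point

const calculatorTool = {
  name: "calculator",
  description: "Evaluate an arithmetic expression",
  parameters: {
    type: "object",
    properties: { expression: { type: "string" } },
    required: ["expression"],
  },
};

it("carries thinking context across a tool call", async () => {
  const llm = createLLM("anthropic", "claude-sonnet-4-0");
  const context: any = {
    messages: [{ role: "user", content: "Use the calculator to compute 6 * 7." }],
    tools: [calculatorTool],
  };

  const first = await llm.complete(context);
  expect(first.stopReason).toBe("toolUse");
  // The signature must be echoed back for the provider to accept the next turn.
  expect(first.thinkingSignature).toBeDefined();

  context.messages.push(first, {
    role: "toolResult",
    toolCallId: first.toolCalls[0].id,
    content: "42",
  });

  const second = await llm.complete(context);
  expect(second.stopReason).toBe("stop");
});
```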
- Add thinkingConfig with includeThoughts and thinkingBudget support
- Use part.thought boolean flag to detect thinking content per API docs
- Capture and preserve thought signatures for multi-turn function calling
- Add supportsThinking() check for Gemini 2.5 series models
- Update example to demonstrate thinking configuration
- Handle SDK type limitations with proper type assertions
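
Roughly the request shape involved, as a sketch against the GoogleGenerativeAI SDK; the exact placement of thinkingConfig and its field names are assumptions, and the assertion stands in for the SDK types that lag the API:

```typescript
import { GoogleGenerativeAI, type GenerationConfig } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genAI.getGenerativeModel({ model: "gemini-2.5-flash" });

const result = await model.generateContentStream({
  contents: [{ role: "user", parts: [{ text: "Why is the sky blue?" }] }],
  generationConfig: {
    maxOutputTokens: 2048,
    thinkingConfig: {
      includeThoughts: true, // stream thought parts back
      thinkingBudget: 1024,  // cap thinking tokens
    },
  } as GenerationConfig, // assertion papers over the missing SDK types
});

for await (const chunk of result.stream) {
  for (const part of chunk.candidates?.[0]?.content?.parts ?? []) {
    const p = part as any; // `thought` is not in the SDK types either
    if (p.thought) process.stdout.write(`[thinking] ${p.text}`);
    else if (p.text) process.stdout.write(p.text);
  }
}
```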
- Add GeminiLLM provider implementation using the GoogleGenerativeAI SDK
- Support streaming with text/thinking content and completion signals
- Handle Gemini's parts-based content system (text, thought, functionCall)
- Implement tool/function calling with proper format conversion
- Map between unified types and Gemini-specific formats (model vs. assistant role)
- Add test example matching other provider patterns
- Fix typo in AssistantMessage type (stopResaon -> stopReason) across all providers
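
The role and parts mapping boils down to something like this sketch (not the provider's actual code; note that Gemini matches function responses by function name, which is simplified here as an assumed `toolName` field):

```typescript
// Gemini uses role "model" where the unified types say "assistant", and
// message bodies are arrays of typed parts rather than a string.
type UnifiedMessage =
  | { role: "user" | "assistant"; content: string }
  | { role: "toolResult"; toolName: string; content: string };

function toGeminiContent(msg: UnifiedMessage) {
  if (msg.role === "toolResult") {
    return {
      role: "user" as const, // tool results travel in a user turn
      parts: [{ functionResponse: { name: msg.toolName, response: { content: msg.content } } }],
    };
  }
  return {
    role: msg.role === "assistant" ? ("model" as const) : ("user" as const),
    parts: [{ text: msg.content }],
  };
}
```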
- Update LLMOptions interface to include completion boolean parameter
- Modify all providers to signal when text/thinking blocks are complete
- Update examples to handle the completion parameter
- Move documentation files to docs/ directory
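
The callback shape after this change, sketched with assumed parameter names:

```typescript
// The second argument flags the end of a text or thinking block.
interface LLMOptions {
  onText?: (delta: string, complete: boolean) => void;
  onThinking?: (delta: string, complete: boolean) => void;
}

// Usage: buffer deltas and flush once the provider signals completion.
const render = (block: string) => console.log(block);
let buffer = "";
const options: LLMOptions = {
  onText: (delta, complete) => {
    buffer += delta;
    if (complete) {
      render(buffer);
      buffer = "";
    }
  },
};
```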
- Implement OpenAICompletionsLLM for Chat Completions API with streaming
- Implement OpenAIResponsesLLM for Responses API with reasoning support
- Update types to use LLM/Context instead of AI/Request
- Add support for reasoning tokens, tool calls, and streaming
- Create test examples for both OpenAI providers
- Update Anthropic provider to match new interface
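
For reference, a minimal sketch of the Responses-side streaming that OpenAIResponsesLLM wraps, using the openai SDK directly (model choice and effort level are arbitrary):

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// The Responses API exposes reasoning effort and streams typed events.
const stream = await client.responses.create({
  model: "o4-mini",
  reasoning: { effort: "medium" },
  input: "What is 17 * 24?",
  stream: true,
});

for await (const event of stream) {
  if (event.type === "response.output_text.delta") {
    process.stdout.write(event.delta);
  }
}
```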
- Define clean API with complete() method and callbacks for streaming
- Add comprehensive type system for messages, tools, and usage
- Implement AnthropicAI provider with full feature support:
  - Thinking/reasoning with signatures
  - Tool calling with parallel execution
  - Streaming via callbacks (onText, onThinking)
  - Proper error handling and stop reasons
  - Cache tracking for input/output tokens
- Add working test/example demonstrating tool execution flow
- Support system prompts, temperature, and max tokens
- Use proper message role types: user, assistant, and toolResult
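
Putting the bullets above together, the public surface looks roughly like this; it is a sketch, and any name not mentioned in the commits (the usage fields, the Tool shape) is an assumption:

```typescript
interface Tool {
  name: string;
  description: string;
  parameters: Record<string, unknown>; // JSON Schema
}

type Message =
  | { role: "user"; content: string }
  | AssistantMessage
  | { role: "toolResult"; toolCallId: string; content: string };

interface Context {
  systemPrompt?: string;
  messages: Message[];
  tools?: Tool[];
}

interface AssistantMessage {
  role: "assistant";
  content: string;
  thinking?: string;
  toolCalls?: { id: string; name: string; arguments: unknown }[];
  stopReason: "stop" | "maxTokens" | "toolUse" | "error";
  usage: { input: number; output: number; cacheRead?: number; cacheWrite?: number };
}

interface LLMOptions {
  temperature?: number;
  maxTokens?: number;
  onText?: (delta: string) => void;
  onThinking?: (delta: string) => void;
}

interface LLM {
  complete(context: Context, options?: LLMOptions): Promise<AssistantMessage>;
}
```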
- Set up @mariozechner/ai package structure following monorepo patterns
- Install OpenAI, Anthropic, and Google Gemini SDK dependencies
- Document comprehensive API investigation for all three providers
- Design minimal unified API with streaming-first architecture
- Add models.dev integration for pricing and capabilities
- Implement automatic caching strategy for all providers
- Update project documentation with package creation guide