- Create Pi Reader browser extension for Chrome/Firefox
- Chrome uses Side Panel API, Firefox uses Sidebar Action API
- Supports both browsers with separate manifests and unified codebase
- Built with mini-lit components and Tailwind CSS v4
- Features model selection dialog with Ollama support
- Hot reload development server watches both browser builds
- Add useDefineForClassFields: false to tsconfig to fix LitElement reactivity
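
A minimal tsconfig excerpt showing the fix (the surrounding options are illustrative): with ES2022 `define` semantics, class fields become own properties that shadow LitElement's reactive accessors, so `@property()` changes stop triggering re-renders.

```jsonc
// tsconfig.json (illustrative excerpt)
{
  "compilerOptions": {
    "target": "ES2022",
    "experimentalDecorators": true,
    // With `define` semantics, class fields shadow Lit's reactive accessors;
    // disabling the flag restores assignment semantics and reactivity.
    "useDefineForClassFields": false
  }
}
```
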
- Added partial-json package for parsing incomplete JSON during streaming
- Tool call arguments now contain partially parsed JSON during toolcall_delta events
- Enables progressive UI updates (e.g., showing file paths before content is complete); see the sketch below
- Arguments are always valid objects (at minimum an empty {}), never undefined
- Full validation still occurs at toolcall_end when arguments are complete
- Updated all providers (Anthropic, OpenAI Completions/Responses) to use parseStreamingJson
- Added comprehensive documentation and examples in README
- Added test to verify arguments are always defined during streaming
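
A minimal consumer-side sketch of the behavior described above; the event payload shape and field names are assumptions (only the `toolcall_delta`/`toolcall_end` event names come from these notes):

```typescript
// Hypothetical event shapes; only the event type names are from the notes.
type ToolCallEvent =
  | { type: "toolcall_delta"; arguments: Record<string, unknown> }
  | { type: "toolcall_end"; arguments: Record<string, unknown> };

async function renderToolCalls(events: AsyncIterable<ToolCallEvent>) {
  for await (const event of events) {
    if (event.type === "toolcall_delta") {
      // Partially parsed: always a valid object (at minimum {}), never
      // undefined, so a file path can be shown before the call completes.
      const path = event.arguments["path"];
      if (typeof path === "string") console.log(`Writing ${path}…`);
    } else {
      // toolcall_end: arguments are complete and fully validated.
      console.log("Final arguments:", event.arguments);
    }
  }
}
```
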
- Replace JSON Schema with Zod schemas for tool parameter definitions (example below)
- Add runtime validation for all tool calls at the provider level
- Create shared validation module with detailed error formatting
- Update Agent API with comprehensive event system
- Add agent tests with calculator tool for multi-turn execution
- Add abort test to verify proper handling of aborted requests
- Update documentation with detailed event flow examples
- Rename generate.ts to stream.ts for clarity
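
A sketch of what a Zod-backed tool definition and the provider-level check might look like; the surrounding shapes are assumptions, only the move to Zod with detailed error formatting comes from these notes:

```typescript
import { z } from "zod";

// Hypothetical calculator tool parameters (the agent tests use a calculator).
const calculatorParams = z.object({
  op: z.enum(["add", "sub", "mul", "div"]),
  a: z.number(),
  b: z.number(),
});

// Provider-level validation with detailed error formatting (sketch).
function validateToolCall(rawArgs: unknown) {
  const result = calculatorParams.safeParse(rawArgs);
  if (!result.success) {
    const details = result.error.issues
      .map((i) => `${i.path.join(".")}: ${i.message}`)
      .join("; ");
    throw new Error(`Invalid tool arguments: ${details}`);
  }
  return result.data; // typed as { op: "add" | ...; a: number; b: number }
}
```
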
- Test handling of empty content arrays
- Test handling of empty string content
- Test handling of whitespace-only content
- All providers handle these edge cases gracefully
- Added contentSignature tracking for assistant messages
- Fixed message format in convertToResponsesFormat (output_text instead of input_text; illustrated below)
- Properly preserve message IDs for multi-turn conversations
- Ensured converted messages satisfy the ResponseOutputMessage type
- Updated tests to cover more providers and multi-turn scenarios
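
A simplified sketch of the role-dependent content types behind the `output_text`/`input_text` fix; the exact shapes are assumptions:

```typescript
// In the Responses format, user input parts and assistant output parts use
// different type tags; sending input_text for assistant history was the bug.
function toResponsesContent(role: "user" | "assistant", text: string) {
  return role === "assistant"
    ? { type: "output_text" as const, text, annotations: [] } // assistant turns
    : { type: "input_text" as const, text };                  // user turns
}
```
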
- Moved completed AI package implementation task to done folder
- Task successfully implemented the unified AI API (@mariozechner/pi-ai)
- Package renamed, documentation improved, and published as v0.5.12
- Added note that the library only includes tool-calling-capable models
- Added Model Discovery section showing how to enumerate models
- Added examples for finding models with specific capabilities (sketch below)
- Added cache read/write costs to model capabilities display
- Clarified that models are auto-fetched from APIs at build time
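
A sketch of what the Model Discovery examples cover; `getModels()` and the capability/cost field names are assumptions, not the published API:

```typescript
import { getModels } from "@mariozechner/pi-ai"; // hypothetical helper name

const models = getModels(); // every entry is tool-calling capable
const reasoners = models.filter((m) => m.reasoning);
for (const m of reasoners) {
  // Cost display now includes cache read/write rates alongside input/output.
  console.log(m.id, `$${m.cost.input}/MTok in`, `$${m.cost.cacheRead}/MTok cache read`);
}
```
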
- Changed package name from @mariozechner/ai to @mariozechner/pi-ai
- Fixed generate-models.ts to fetch from the models.dev API instead of a local file
- Completely rewrote README with practical examples:
- Image input with base64 encoding
- Proper tool calling with context management
- Streaming with completion indicators
- Abort signal usage (see the streaming sketch below)
- Provider-specific options (reasoning/thinking)
- Custom model definitions for local/self-hosted LLMs
- Environment variables explanation
- Bumped version to 0.5.9 and published
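
A sketch combining two of the README topics, streaming with completion indicators and abort signals; `createLLM` appears in these notes, but the `stream()` signature, option names, and event types are assumptions:

```typescript
import { createLLM } from "@mariozechner/pi-ai";

const llm = createLLM("anthropic", "claude-sonnet-4-0");
const controller = new AbortController();
setTimeout(() => controller.abort(), 5_000); // cancel after 5 s

try {
  const messages = [{ role: "user" as const, content: "Tell me a long story." }];
  for await (const event of llm.stream(messages, { signal: controller.signal })) {
    // "text_delta" and "done" are hypothetical event names.
    if (event.type === "text_delta") process.stdout.write(event.text);
    if (event.type === "done") console.log("\n[complete]");
  }
} catch (err) {
  if (controller.signal.aborted) console.log("\n[aborted]");
  else throw err;
}
```
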
- Added image tests to OpenAI Completions (gpt-4o-mini)
- Added image tests to Anthropic (claude-sonnet-4-0)
- Added image tests to Google (gemini-2.5-flash)
- Tests verify models can process and describe the red circle test image
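
A sketch of how such an image test might feed the test image; the file path and message/content shapes are assumptions about the unified API:

```typescript
import { readFileSync } from "node:fs";

const imageBase64 = readFileSync("test/data/red-circle.png").toString("base64"); // hypothetical path
const messages = [
  {
    role: "user" as const,
    content: [
      { type: "text" as const, text: "What shape and color do you see?" },
      { type: "image" as const, data: imageBase64, mimeType: "image/png" },
    ],
  },
];
// The test then asserts the reply mentions a red circle.
```
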
- Add Model interface to types.ts with normalized structure
- Create type-safe generic createLLM function with provider-specific model constraints (usage example below)
- Generate models from OpenRouter API and models.dev data
- Strip provider prefixes for direct providers (google, openai, anthropic, xai)
- Keep full model IDs for OpenRouter-proxied models
- Clean separation: types.ts (Model interface), models.ts (factory logic), models.generated.ts (data)
- Remove old model scripts and unused dependencies
- Rename GeminiLLM to GoogleLLM for consistency
- Add tests for new providers (xAI, Groq, Cerebras, OpenRouter)
- Support 181 tool-capable models across 7 providers with full type safety
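
Usage sketch of the provider-constrained factory described above; the exact generics are assumptions, but the prefix handling follows these notes:

```typescript
import { createLLM } from "@mariozechner/pi-ai";

// Direct providers take bare model IDs (prefix stripped)…
const claude = createLLM("anthropic", "claude-sonnet-4-0");
// …while OpenRouter-proxied models keep the full prefixed ID.
const proxied = createLLM("openrouter", "anthropic/claude-sonnet-4-0");

// A model ID from the wrong provider fails to typecheck:
// createLLM("google", "gpt-4o-mini"); // compile-time error
```
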
- Switch from Node.js test runner to Vitest for better DX
- Add test suites for Grok, Groq, Cerebras, and OpenRouter providers
- Add Ollama test suite with automatic server lifecycle management (see the lifecycle sketch below)
- Include thinking mode and multi-turn tests for all providers
- Remove example files (consolidated into test suite)
- Add VS Code test configuration
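
A sketch of the Ollama lifecycle pattern; the port is Ollama's default, while the startup polling and timeouts are assumptions about the suite:

```typescript
import { spawn, type ChildProcess } from "node:child_process";
import { afterAll, beforeAll } from "vitest";

let server: ChildProcess;

beforeAll(async () => {
  server = spawn("ollama", ["serve"], { stdio: "ignore" });
  // Poll until the server answers on its default port (11434).
  for (let i = 0; i < 50; i++) {
    try {
      if ((await fetch("http://localhost:11434")).ok) return;
    } catch {
      // not up yet
    }
    await new Promise((r) => setTimeout(r, 200));
  }
  throw new Error("Ollama did not start in time");
}, 20_000);

afterAll(() => server.kill());
```
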
- Generate models.generated.ts from models.json with proper types
- Categorize providers: OpenAI (Responses), OpenAI-compatible, Anthropic, Gemini
- Create createLLM() factory with TypeScript overloads for type safety (overload sketch below)
- Auto-detect base URLs and environment variables for providers
- Support 353 models across 39 providers with full autocompletion
- Exclude generated file from git (rebuilt on npm build)
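
The overload pattern behind this kind of factory, sketched with stand-in model unions (the real per-provider lists live in models.generated.ts):

```typescript
// Stand-in unions; the generated file provides the real per-provider lists.
type AnthropicModel = "claude-sonnet-4-0" | "claude-3-5-haiku-latest";
type OpenAIModel = "gpt-4o" | "gpt-4o-mini";
interface LLM {} // stand-in for the real interface

function createLLM(provider: "anthropic", model: AnthropicModel): LLM;
function createLLM(provider: "openai", model: OpenAIModel): LLM;
function createLLM(provider: string, model: string): LLM {
  // Here the factory would auto-detect the base URL and API-key env var.
  return {};
}
```
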
- Add models script to download latest model information
- Create models.ts module to query model capabilities
- Include models.json in package distribution
- Export utilities to check model features (reasoning, tools)
- Update build process to copy models.json to dist
- Add multi-turn test to verify thinking and tool calling work together
- Test thinkingSignature handling for proper multi-turn context
- Fix Gemini provider to generate base64 thinkingSignature when needed
- Handle multiple rounds of tool calls in tests (Gemini behavior)
- Make thinking tests more robust for model-dependent behavior
- All 18 tests passing across 4 providers
- Added GeminiLLM provider implementation with GoogleGenerativeAI SDK
- Supports streaming with text/thinking content and completion signals
- Handles Gemini's parts-based content system (text, thought, functionCall)
- Implements tool/function calling with proper format conversion
- Maps between unified types and Gemini-specific formats (Gemini's model role vs. the unified assistant role; see the mapping sketch below)
- Added test example matching other provider patterns
- Fixed typo in AssistantMessage type (stopResaon -> stopReason) across all providers
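
A simplified sketch of the role mapping the provider performs; the unified message shape is an assumption:

```typescript
// Gemini has no "assistant" role; the provider maps it to "model" and packs
// content into parts ({ text }, { functionCall }, …). Simplified to text only.
type UnifiedMessage = { role: "user" | "assistant"; content: string };

function toGeminiContent(msg: UnifiedMessage) {
  return {
    role: msg.role === "assistant" ? ("model" as const) : ("user" as const),
    parts: [{ text: msg.content }],
  };
}
```
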