- Export UserMessageWithAttachments type from web-ui index
- Remove unused i18n import from extract-document tool
- Update models.generated.ts (model order changes from API)
This fixes type errors when sitegeist uses a custom user message renderer
The negated approach broke browser_script, which runs in page context.
browser_script injects into web pages (http://, https://), where
chrome.runtime.sendMessage doesn't work. The original whitelist
approach correctly identifies extension contexts only (sketched below).
The real issue with artifacts in sandboxed iframes needs a different fix.
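A minimal sketch of such a whitelist check, with illustrative names (the extension's actual code may differ):

```typescript
// Illustrative only: the real check may use different signals.
declare const chrome: any;

// chrome.runtime.sendMessage exists only in extension contexts
// (chrome-extension:// pages such as the side panel or popup),
// never in scripts injected into http:// or https:// pages.
function isExtensionContext(): boolean {
  return (
    typeof chrome !== "undefined" &&
    chrome.runtime?.id !== undefined &&
    location.protocol === "chrome-extension:"
  );
}
```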
- Update renderHeader and renderCollapsibleHeader in renderer-registry.ts to accept `text: string | TemplateResult`
- Remove duplicated renderCollapsibleHeaderWithPill helper in artifacts-tool-renderer.ts
- Update all artifact renderer calls to use renderHeaderWithPill() inline
- Remove all separate pill rendering below headers
This allows artifact pills to be rendered inline with the header text without code duplication; a sketch of the widened signature follows.
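A minimal sketch of the widened signature, assuming lit's html and TemplateResult (not the exact registry code):

```typescript
import { html, type TemplateResult } from "lit";

// Accepting TemplateResult lets callers compose rich content (e.g. a pill)
// into the header text instead of rendering it separately below the header.
export function renderHeader(text: string | TemplateResult): TemplateResult {
  return html`<div class="header">${text}</div>`;
}
```

renderCollapsibleHeader gets the same widening, so renderHeaderWithPill() output can be passed to either.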
- Move mini-lit from dependencies to peerDependencies
- Keep in devDependencies for development
- Prevents bundlers from including mini-lit when consuming pi-web-ui
- The consumer (sitegeist) provides mini-lit, and esbuild bundles it once
- Fixes the duplicate mini-lit bundling issue permanently (package.json excerpt below)
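An excerpt of the resulting layout, assuming the @mariozechner/mini-lit package name and an illustrative version range:

```jsonc
{
  // Consumers provide mini-lit themselves, so bundlers never include it twice:
  "peerDependencies": {
    "@mariozechner/mini-lit": "^1.0.0"
  },
  // Still installed locally for development and tests:
  "devDependencies": {
    "@mariozechner/mini-lit": "^1.0.0"
  }
}
```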
- Create Pi Reader browser extension for Chrome/Firefox
- Chrome uses Side Panel API, Firefox uses Sidebar Action API
- Supports both browsers with separate manifests and unified codebase
- Built with mini-lit components and Tailwind CSS v4
- Features model selection dialog with Ollama support
- Hot reload development server watches both browser builds
- Add useDefineForClassFields: false to fix LitElement reactivity
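The relevant tsconfig.json excerpt: with ES2022 class-field semantics, declared fields shadow the reactive accessors Lit generates, so Lit requires this flag to be off:

```jsonc
{
  "compilerOptions": {
    // Class fields with [[Define]] semantics shadow LitElement's reactive
    // property accessors; disabling them restores reactivity.
    "useDefineForClassFields": false
  }
}
```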
- Add 'aborted' as a distinct stop reason separate from 'error'
- Change AssistantMessage.error to errorMessage for clarity
- Update error event to include reason field ('error' | 'aborted')
- Map provider-specific safety/refusal reasons to 'error' stop reason
- Reorganize utility functions into utils/ directory
- Rename agent.ts to agent-loop.ts for better clarity
- Fix error handling in all providers to properly distinguish abort from error (type sketch below)
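A hedged sketch of the resulting shapes; union members other than 'error' and 'aborted' are assumptions:

```typescript
// Only 'error' and 'aborted' are confirmed by this change; the other
// members of the union are illustrative.
type StopReason = "stop" | "length" | "toolUse" | "error" | "aborted";

interface AssistantMessage {
  stopReason: StopReason;
  errorMessage?: string; // renamed from `error` for clarity
}

// The error event now carries why generation ended:
interface ErrorEvent {
  type: "error";
  reason: "error" | "aborted";
}
```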
- Added partial-json package for parsing incomplete JSON during streaming
- Tool call arguments now contain partially parsed JSON during toolcall_delta events
- Enables progressive UI updates (e.g., showing file paths before content is complete); see the sketch after this list
- Arguments are always valid objects (at minimum an empty {}), never undefined
- Full validation still occurs at toolcall_end when arguments are complete
- Updated all providers (Anthropic, OpenAI Completions/Responses) to use parseStreamingJson
- Added comprehensive documentation and examples in README
- Added test to verify arguments are always defined during streaming
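A minimal sketch of the streaming parse, assuming the partial-json package's parse() entry point:

```typescript
import { parse } from "partial-json";

// An incomplete tool-call argument string, as received mid-stream:
const chunk = '{"path": "/tmp/out.txt", "content": "Hel';

// parse() returns a best-effort object for the prefix seen so far, so a UI
// can already display the file path while the content is still streaming.
const args = parse(chunk);
console.log(args.path); // "/tmp/out.txt"
```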
- Replace JSON Schema with Zod schemas for tool parameter definitions
- Add runtime validation for all tool calls at provider level
- Create shared validation module with detailed error formatting (Zod sketch below)
- Update Agent API with comprehensive event system
- Add agent tests with calculator tool for multi-turn execution
- Add abort test to verify proper handling of aborted requests
- Update documentation with detailed event flow examples
- Rename generate.ts to stream.ts for clarity
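A hedged sketch of a Zod-backed tool definition, using the calculator from the tests; the library's actual tool shape may differ:

```typescript
import { z } from "zod";

// Illustrative tool shape; the field names are assumptions.
const calculator = {
  name: "calculator",
  description: "Evaluate a basic arithmetic operation",
  parameters: z.object({
    a: z.number(),
    b: z.number(),
    op: z.enum(["+", "-", "*", "/"]),
  }),
};

// Runtime validation at the provider level, with detailed errors:
const result = calculator.parameters.safeParse({ a: 1, b: 2, op: "+" });
if (!result.success) {
  console.error(result.error.issues); // formatted by the validation module
}
```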
- Add 'zai' as a KnownProvider type
- Add ZAI_API_KEY environment variable mapping
- Generate 4 zAI models (glm-4.5-air, glm-4.5v, etc.) using anthropic-messages API
- Add comprehensive test coverage for zAI provider in generate.test.ts and empty.test.ts
- Models support reasoning/thinking capabilities and tool calling
- The Anthropic API requires tool call IDs to match the pattern ^[a-zA-Z0-9_-]+$
- The OpenAI Responses API generates IDs with a pipe character (|), which breaks Anthropic
- Added sanitizeToolCallId() to replace invalid characters with underscores (sketch below)
- Fixes cross-provider handoffs from OpenAI Responses to Anthropic
- Added test to verify the fix works
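A minimal sketch of the sanitizer; the real implementation may differ:

```typescript
// Replace anything outside ^[a-zA-Z0-9_-]+$ with an underscore so IDs
// produced by the OpenAI Responses API are acceptable to Anthropic.
function sanitizeToolCallId(id: string): string {
  return id.replace(/[^a-zA-Z0-9_-]/g, "_");
}

sanitizeToolCallId("fc|abc123"); // => "fc_abc123"
```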
- Replace createLLM with getModel/getModels/getProviders functions (usage sketch below)
- Rename PROVIDERS to MODELS (internal only, not exposed)
- Add streamSimple/completeSimple for unified reasoning interface
- Update README with new API examples and comprehensive documentation
- Remove model registration (models are now fixed from build time)
- Add proper TypeScript typing for provider-specific options
- Document context serialization, cross-provider handoffs, and browser usage
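A hedged usage sketch of the new surface; the function names come from the bullets above, but their exact signatures are inferred:

```typescript
import { getModel, getModels, getProviders } from "@mariozechner/pi-ai";

// Models are fixed at build time; there is no registration step anymore.
const providers = getProviders();
const anthropicModels = getModels("anthropic");

// Type-safe lookup with model ID autocomplete:
const model = getModel("anthropic", "claude-3-5-haiku-latest");
```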
- Updated generate-models.ts to fetch these providers directly from models.dev API
- OpenRouter now only used for xAI and other third-party providers
- Fixed test model IDs to match new model names from models.dev
- Removed unused import from google.ts
- Add API changes section explaining v0.5.15+ changes
- Update Quick Start example to show content array usage
- Update Tool Calling example to filter tool calls from content blocks (sketch below)
- Update Streaming example to use new onEvent callback
- Fix model IDs in provider-specific examples
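A hedged sketch of the content-array handling; the block type names are assumptions based on the described change:

```typescript
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "toolCall"; id: string; name: string; arguments: unknown };

declare const message: { content: ContentBlock[] };

// Tool calls now live alongside text in the content array and are
// filtered out rather than read from a dedicated field:
const toolCalls = message.content.filter((b) => b.type === "toolCall");

const text = message.content
  .filter((b): b is Extract<ContentBlock, { type: "text" }> => b.type === "text")
  .map((b) => b.text)
  .join("");
```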
- Added contentSignature tracking for assistant messages
- Fixed message format in convertToResponsesFormat (output_text instead of input_text; sketch below)
- Properly preserve message IDs for multi-turn conversations
- Satisfy the ResponseOutputMessage type correctly
- Updated tests to cover more providers and multi-turn scenarios
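A minimal sketch of the corrected shape when replaying an assistant turn to the Responses API; assistant output parts use output_text, while user input parts use input_text:

```typescript
// Replaying a prior assistant message as Responses API input; previously
// these parts were emitted as "input_text", which is only valid for users.
const assistantItem = {
  role: "assistant" as const,
  content: [{ type: "output_text" as const, text: "Hello!" }],
};
```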
- Added note that library only includes tool-calling capable models
- Added Model Discovery section showing how to enumerate models
- Added examples for finding models with specific capabilities (sketch below)
- Added cache read/write costs to model capabilities display
- Clarified that models are auto-fetched from APIs at build time
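A hedged discovery sketch; the capability and cost field names are assumptions, not the library's actual API:

```typescript
// Local stand-ins so the sketch is self-contained; field names are guesses.
type ModelInfo = {
  id: string;
  reasoning?: boolean;
  cost?: { cacheRead?: number; cacheWrite?: number };
};

declare function getModels(provider: string): ModelInfo[];

const models = getModels("anthropic");

// Find models with a specific capability:
const reasoningModels = models.filter((m) => m.reasoning);

// Display cache read/write costs alongside each model:
for (const m of models) {
  console.log(m.id, m.cost?.cacheRead, m.cost?.cacheWrite);
}
```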
- Changed package name from @mariozechner/ai to @mariozechner/pi-ai
- Fixed generate-models.ts to fetch from models.dev API instead of local file
- Completely rewrote README with practical examples:
- Image input with base64 encoding (sketched after this list)
- Proper tool calling with context management
- Streaming with completion indicators
- Abort signal usage
- Provider-specific options (reasoning/thinking)
- Custom model definitions for local/self-hosted LLMs
- Environment variables explanation
- Bumped version to 0.5.9 and published
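A hedged sketch of the image-input example; the content block's field names are assumptions:

```typescript
import { readFileSync } from "node:fs";

// Encode a local image and attach it as a content block; the block shape
// (type, data, mimeType) is illustrative, not the library's exact API.
const data = readFileSync("chart.png").toString("base64");

const message = {
  role: "user" as const,
  content: [
    { type: "text", text: "Describe this chart." },
    { type: "image", data, mimeType: "image/png" },
  ],
};
```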
- LLM constructors now take Model objects instead of string IDs
- Added provider field to AssistantMessage interface
- Updated getModel function with type-safe model ID autocomplete
- Fixed Anthropic model ID mapping for proper API aliases
- Added baseUrl to the Model interface for provider-specific endpoints (interface sketch below)
- Updated all tests to use getModel for model instantiation
- Removed deprecated models.json in favor of generated models
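A hedged sketch of the Model shape implied by these changes; fields beyond provider and baseUrl are assumptions, and the class name is hypothetical:

```typescript
// Not the library's actual declaration; a sketch of the implied shape.
interface Model {
  id: string;
  provider: string; // also mirrored onto AssistantMessage
  baseUrl?: string; // provider-specific endpoint override
}

// Constructors now take Model objects rather than string IDs:
declare class AnthropicLLM {
  constructor(model: Model);
}
```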
- Add proper Anthropic model aliases (claude-opus-4-1, claude-sonnet-4-0, etc.)
- Deduplicate models when same ID appears in both models.dev and OpenRouter
- models.dev takes priority over OpenRouter for duplicate IDs (dedup sketch below)
- Fix test to use correct claude-3-5-haiku-latest alias
- Reduces Anthropic models from 11 to 10 (removed duplicate)
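A minimal sketch of the dedup rule, assuming generated model lists keyed by ID:

```typescript
type GeneratedModel = { id: string };

declare const openRouterModels: GeneratedModel[];
declare const modelsDevModels: GeneratedModel[];

const byId = new Map<string, GeneratedModel>();
for (const m of openRouterModels) byId.set(m.id, m);
// models.dev entries are applied last, so they win on duplicate IDs:
for (const m of modelsDevModels) byId.set(m.id, m);

const models = [...byId.values()];
```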
- Generated from OpenRouter API and models.dev
- Includes models from Google, OpenAI, Anthropic, xAI, Groq, Cerebras, OpenRouter
- Provides type-safe model selection with autocomplete