Hook commands now always execute immediately, even during streaming.
If a command needs to interact with the LLM, it uses pi.sendMessage(),
which handles queueing automatically. This simplifies the API and fixes
the bug where queued slash commands were sent to the LLM instead of
being executed.
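For illustration, a hook command that needs an LLM turn might look like the
sketch below; pi.sendMessage() comes from the text above, while the
registration shape (name, run) is hypothetical.

```typescript
// Hedged sketch, assuming a registration shape: the handler runs
// immediately, and any LLM interaction goes through pi.sendMessage(),
// which queues the message if a stream is in flight.
export default function summarizeCommand(pi: { sendMessage: (text: string) => void }) {
  return {
    name: "summarize",
    run() {
      // Executes right away, even during streaming; no manual queueing.
      pi.sendMessage("Summarize the conversation so far in three bullets.");
    },
  };
}
```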
- Add pkce.ts with generatePKCE() using the Web Crypto API (sketched below)
- Update anthropic.ts, google-gemini-cli.ts, google-antigravity.ts
- Replace Buffer.from() with atob() for base64 decoding
- Works in both Node.js 20+ and browsers
The OAuth modules still use Node.js http.createServer for callbacks,
so they only work in CLI environments, but they no longer crash on
import in browser bundles.
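A minimal sketch of such a helper, assuming a { verifier, challenge } return
shape; only the generatePKCE() name and the Web-Crypto/atob approach come
from the bullets above.

```typescript
// PKCE helper built on the Web Crypto API only, so it runs in
// Node.js 20+ and browsers alike.
function base64UrlEncode(bytes: Uint8Array): string {
  let binary = "";
  for (const b of bytes) binary += String.fromCharCode(b);
  // btoa replaces Buffer.from() so the module stays browser-safe
  return btoa(binary).replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/g, "");
}

export async function generatePKCE(): Promise<{ verifier: string; challenge: string }> {
  // 32 random bytes -> 43-char base64url verifier (RFC 7636 allows 43-128)
  const verifier = base64UrlEncode(crypto.getRandomValues(new Uint8Array(32)));
  // challenge = BASE64URL(SHA-256(ASCII(verifier)))
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(verifier));
  return { verifier, challenge: base64UrlEncode(new Uint8Array(digest)) };
}
```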
- Renamed AppMessage to AgentMessage throughout
- New agent-loop.ts with AgentLoopContext, AgentLoopConfig
- Removed transport abstraction; Agent now takes a streamFn directly (see the sketch below)
- Extracted streamProxy to proxy.ts utility
- Removed agent-loop from pi-ai (now in agent package)
- Updated consumers (coding-agent, mom) for AgentMessage rename
- Tests updated but some consumers still need migration
Known issues:
- AgentTool, AgentToolResult not exported from pi-ai
- Attachment not exported from pi-agent-core
- ProviderTransport removed but still referenced
- messageTransformer -> convertToLlm migration incomplete
- CustomMessages declaration merging not working properly
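A rough sketch of the new shape; AgentMessage and StreamFn here are
simplified stand-ins, and the real AgentLoopContext/AgentLoopConfig carry
more than this.

```typescript
// Hedged sketch: the transport abstraction is gone and the Agent
// receives a stream function directly.
type AgentMessage = { role: string; content: string };
type StreamFn = (context: AgentMessage[], signal?: AbortSignal) => AsyncIterable<AgentMessage>;

class Agent {
  constructor(private readonly streamFn: StreamFn) {}

  async *run(context: AgentMessage[], signal?: AbortSignal): AsyncIterable<AgentMessage> {
    // Swapping providers now means swapping functions, not
    // implementing a ProviderTransport interface.
    yield* this.streamFn(context, signal);
  }
}
```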
HookMessage (role: hookMessage) was being passed directly to the
transport without transformation. It is now transformed via
messageTransformer, which converts it into a proper user message for
the LLM.
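A sketch of the fix with simplified message shapes:

```typescript
// hookMessage entries are mapped to plain user messages before
// reaching the provider. Shapes are simplified assumptions.
type LlmMessage = { role: "user" | "assistant"; content: string };
type HookMessage = { role: "hookMessage"; content: string };

function messageTransformer(msg: LlmMessage | HookMessage): LlmMessage {
  if (msg.role === "hookMessage") {
    // Previously this branch was missing and the raw HookMessage hit the transport.
    return { role: "user", content: msg.content };
  }
  return msg;
}
```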
Cleans up the temporary emitLastMessage plumbing since we now use
Agent.prompt(AppMessage) for hook messages instead of appendMessage+continue.
- Remove emitLastMessage parameter from Agent.continue()
- Remove from transport interface and implementations
- Remove from agentLoopContinue()
Following the same pattern as BashExecutionMessage:
- HookAppMessage has role: 'hookMessage' with customType, content, display, details
- isHookAppMessage() type guard for checking message type
- messageTransformer converts to user message for LLM context
- TUI checks isHookAppMessage() for rendering as CustomMessageComponent
This makes the API clean for anyone building on AgentSession: they can
use the type guard instead of needing to know about internal marker fields.
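A sketch of the pattern; the field names follow the list above, while the
types attached to them are assumptions.

```typescript
interface HookAppMessage {
  role: "hookMessage";
  customType: string;  // discriminates hook kinds for rendering
  content: string;     // text the messageTransformer forwards to the LLM
  display: boolean;    // whether the TUI should render it
  details?: unknown;   // optional structured payload
}

function isHookAppMessage(msg: { role: string }): msg is HookAppMessage {
  return msg.role === "hookMessage";
}
```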
When calling continue() with emitLastMessage=true, the agent loop
emits message_start/message_end events for the last message in context.
This allows messages added outside the loop (e.g., hook messages via
sendHookMessage) to trigger proper TUI rendering (see the sketch after this list).
Changes across packages:
- packages/ai: agentLoopContinue() accepts emitLastMessage parameter
- packages/agent: Agent.continue(), transports updated to pass flag
- packages/coding-agent: sendHookMessage passes true when triggerTurn
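A minimal sketch of the flag's behavior (it was later removed in favor of
Agent.prompt, as described earlier in this log); everything beyond the event
names and the flag itself is assumed.

```typescript
// With emitLastMessage=true the loop replays start/end events for the
// final message already in context, so subscribers such as the TUI
// render it.
type LoopEvent = { type: "message_start" | "message_end"; message: unknown };

function agentLoopContinue(
  context: unknown[],
  emit: (event: LoopEvent) => void,
  emitLastMessage = false,
): void {
  if (emitLastMessage && context.length > 0) {
    const last = context[context.length - 1];
    emit({ type: "message_start", message: last });
    emit({ type: "message_end", message: last });
  }
  // ...then continue the normal loop over the provider stream.
}
```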
- getEnvApiKey: ANTHROPIC_OAUTH_TOKEN now takes precedence over ANTHROPIC_API_KEY
- findCutPoint: Stop the scan-backwards loop at the session header (it previously decremented past it, producing a null preparation; see the sketch below)
- generateSummary/generateTurnPrefixSummary: Throw on stopReason=error instead of returning an empty string
- Test files: Fix API key priority order, use keepRecentTokens=1 for small test conversations
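An illustrative sketch of the findCutPoint boundary fix, with an assumed
signature and a stand-in validity predicate; the real function and the
"preparation" it feeds are not shown here.

```typescript
type Entry = { kind: "sessionHeader" | "message" };

function findCutPoint(
  entries: Entry[],
  headerIndex: number,
  from: number,
  isValidCutPoint: (e: Entry) => boolean,
): number | null {
  // Previously the loop condition was effectively `i >= 0`, walking
  // past the session header and later yielding a null preparation.
  for (let i = from; i > headerIndex; i--) {
    if (isValidCutPoint(entries[i])) return i;
  }
  return null;
}
```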
- Migrate glm-4.5, glm-4.5-air, glm-4.5-flash, glm-4.6, glm-4.7 from anthropic-messages to openai-completions API
- Updated baseUrl from https://api.z.ai/api/anthropic to https://api.z.ai/api/coding/paas/v4
- Added compat setting to disable developer role for zai models
- Filter empty text blocks in openai-completions to avoid zai API validation errors (sketched below)
- Fixed zai provider tests to use OpenAI-style options (reasoningEffort)
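The filter itself can be a one-liner; a sketch using a simplified version of
the standard OpenAI content-part shape:

```typescript
// zai's openai-completions endpoint rejects "" text parts, so drop
// them before sending.
type Part = { type: "text"; text: string } | { type: "image_url"; image_url: { url: string } };

function filterEmptyTextBlocks(parts: Part[]): Part[] {
  return parts.filter((p) => p.type !== "text" || p.text.trim().length > 0);
}
```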
- Change all zai models from anthropic-messages to openai-completions API
- Update baseUrl from https://api.z.ai/api/anthropic to https://api.z.ai/api/coding/paas/v4
- Add compat setting to disable developer role for zai
- Update zai provider tests to use OpenAI-style options (reasoningEffort instead of thinkingEnabled/thinkingBudgetTokens)
- Enable previously disabled thinking and image input tests for zai models
- Add AuthStorage class for credential storage (auth.json)
- Add ModelRegistry class for model management with API key resolution
- Add discoverAuthStorage() and discoverModels() discovery functions (usage sketched below)
- Add migration from legacy oauth.json and settings.json apiKeys to auth.json
- Remove configureOAuthStorage, defaultGetApiKey, findModel, discoverAvailableModels
- Remove apiKeys from Settings type and SettingsManager methods
- Rename getOAuthPath to getAuthPath
- Update SDK, examples, docs, tests, and mom package
Fixes #296
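Hypothetical usage of the discovery functions; only the two names come from
the bullets above, and the comments restate the behavior listed there.

```typescript
declare function discoverAuthStorage(): unknown;
declare function discoverModels(): unknown;

const auth = discoverAuthStorage(); // AuthStorage over auth.json (migrates legacy oauth.json/settings.json apiKeys)
const models = discoverModels();    // ModelRegistry with API key resolution
```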
- Add src/cli.ts with login command for OAuth providers
- Add bin entry to package.json for 'npx @mariozechner/pi-ai'
- Update README: remove setApiKey docs, rewrite OAuth section
- OAuth storage is the caller's responsibility, not the library's
- Use getOAuthProviders() instead of duplicating provider list
- Remove setApiKey, resolveApiKey, and global apiKeys Map from stream.ts
- Rename getApiKey to getApiKeyFromEnv (only checks env vars)
- Remove OAuth storage layer (storage.ts deleted)
- OAuth login/refresh functions now return credentials instead of saving
- getOAuthApiKey/refreshOAuthToken now take credentials as params (see the sketch below)
- Add test/oauth.ts helper for ai package tests
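A hedged sketch of the inverted responsibility, with an assumed credential
shape and single-argument signature:

```typescript
// The library returns fresh credentials; persisting them is the
// caller's job.
interface OAuthCredentials { accessToken: string; refreshToken: string; expiresAt: number; }

declare function refreshOAuthToken(creds: OAuthCredentials): Promise<OAuthCredentials>;

async function withFreshToken(
  load: () => OAuthCredentials,
  save: (c: OAuthCredentials) => void,
): Promise<string> {
  let creds = load();
  if (Date.now() >= creds.expiresAt) {
    creds = await refreshOAuthToken(creds); // returns new creds; no implicit saving
    save(creds);                            // storage belongs to the caller
  }
  return creds.accessToken;
}
```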
- Simplify root npm run check (single biome + tsgo pass)
- Remove redundant check scripts from most packages
- Add web-ui and coding-agent examples to biome/tsgo includes
coding-agent still has compile errors - needs refactoring for new API
- Merge branch event into session with before_branch/branch reasons
- Add before_switch, before_clear, shutdown reasons
- before_* events can be cancelled with { cancel: true }
- Update RPC commands to return cancelled status
- Add shutdown event on process exit
- New example hooks: confirm-destructive, dirty-repo-guard, auto-commit-on-exit (one is sketched below)
Fixes #278
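A hypothetical hook in the spirit of dirty-repo-guard; the registration
surface (pi.on) and the event payload shape are assumptions, while the
before_* reasons and { cancel: true } contract come from the bullets above.

```typescript
type SessionEvent = { reason: string };
type HookResult = { cancel: boolean } | undefined;

export default function dirtyRepoGuard(pi: {
  on: (name: "session", handler: (e: SessionEvent) => Promise<HookResult>) => void;
}) {
  pi.on("session", async (event) => {
    if (event.reason === "before_switch" || event.reason === "before_clear") {
      if (await hasUncommittedChanges()) {
        return { cancel: true }; // block the action until the tree is clean
      }
    }
    return undefined;
  });
}

// Placeholder for an actual `git status --porcelain` check.
declare function hasUncommittedChanges(): Promise<boolean>;
```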
- Add setOAuthStorage() and resetOAuthStorage() to pi-ai for custom storage backends (sketched below)
- Configure coding-agent to use its own configurable OAuth path via getOAuthPath()
- Model selector (/model command) now only shows models from --models scope when set
- Rewrite OAuth documentation in pi-ai README with examples
Fixes #255
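A hypothetical custom backend handed to setOAuthStorage(); the storage
interface (load/save keyed by provider) is an assumption for illustration.

```typescript
import { setOAuthStorage } from "@mariozechner/pi-ai";

// Stand-in for whatever persistence the caller actually uses.
declare const myDb: { get(key: string): any; set(key: string, value: unknown): void };

setOAuthStorage({
  load: (provider: string) => myDb.get(`oauth:${provider}`),
  save: (provider: string, creds: unknown) => myDb.set(`oauth:${provider}`, creds),
});
```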
When a user interrupts a tool call flow (sends a message without providing
tool results), APIs like OpenAI Responses and Anthropic fail because:
- OpenAI requires tool outputs for function calls
- OpenAI requires reasoning items to have their following items
- Anthropic requires non-empty content for error tool results
Instead of filtering out orphaned tool calls (which breaks thinking signatures),
we now insert synthetic empty tool results with isError: true and content
'No result provided'. This preserves the conversation structure and satisfies
all API requirements.
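A sketch of the repair pass with simplified message shapes:

```typescript
// Every tool call without a matching result gets a synthetic error
// result right after the assistant message, preserving conversation
// structure (and thinking signatures) instead of dropping the call.
type Msg =
  | { role: "assistant"; toolCalls: { id: string }[] }
  | { role: "toolResult"; toolCallId: string; isError: boolean; content: string };

function repairOrphanedToolCalls(messages: Msg[]): Msg[] {
  const resolved = new Set<string>();
  for (const m of messages) {
    if (m.role === "toolResult") resolved.add(m.toolCallId);
  }
  const out: Msg[] = [];
  for (const m of messages) {
    out.push(m);
    if (m.role === "assistant") {
      for (const call of m.toolCalls) {
        if (!resolved.has(call.id)) {
          // Non-empty content satisfies Anthropic; presence satisfies OpenAI.
          out.push({ role: "toolResult", toolCallId: call.id, isError: true, content: "No result provided" });
        }
      }
    }
  }
  return out;
}
```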
- Add OAuth handler with PKCE flow and local callback server
- Automatic project discovery via loadCodeAssist/onboardUser endpoints
- Store credentials with projectId for API calls
- Encode token+projectId as JSON for the provider to decode (sketched below)
- Register as 'google-cloud-code-assist' OAuth provider
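A sketch of the credential encoding from that bullet; the field names are
assumptions.

```typescript
declare const accessToken: string; // obtained via the PKCE flow
declare const projectId: string;   // discovered via loadCodeAssist/onboardUser

// OAuth side: pack both values into one opaque credential string.
const credential = JSON.stringify({ token: accessToken, projectId });

// Provider side: recover both values from the single "API key".
const { token, projectId: project } = JSON.parse(credential) as { token: string; projectId: string };
```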