Previously all openai-codex models had pricing set to 0, causing the
TUI to always show $0.00 for cost tracking.
Updated pricing based on OpenAI Standard tier rates:
- gpt-5.2/gpt-5.2-codex: $1.75/$14.00 per 1M tokens
- gpt-5.1/gpt-5.1-codex/gpt-5.1-codex-max: $1.25/$10.00 per 1M tokens
- gpt-5/gpt-5-codex: $1.25/$10.00 per 1M tokens
- codex-mini-latest: $1.50/$6.00 per 1M tokens
- gpt-5-mini/gpt-5.1-codex-mini/gpt-5-codex-mini: $0.25/$2.00 per 1M tokens
- gpt-5-nano: $0.05/$0.40 per 1M tokens
Source: https://platform.openai.com/docs/pricing
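For cost tracking, these rates translate into per-model input/output prices that get multiplied by token counts. A minimal sketch of that arithmetic, with illustrative field names and only a subset of the models above:

```typescript
// Illustrative only: costs in USD per 1M tokens, matching the Standard tier rates above.
interface ModelCost {
  input: number;  // USD per 1M input tokens
  output: number; // USD per 1M output tokens
}

const openAiCodexCosts: Record<string, ModelCost> = {
  "gpt-5.2-codex": { input: 1.75, output: 14.0 },
  "gpt-5.1-codex": { input: 1.25, output: 10.0 },
  "codex-mini-latest": { input: 1.5, output: 6.0 },
  "gpt-5-nano": { input: 0.05, output: 0.4 },
};

// Per-request cost, so the TUI no longer reports $0.00 for every turn.
function requestCostUsd(model: string, inputTokens: number, outputTokens: number): number {
  const cost = openAiCodexCosts[model];
  if (!cost) return 0;
  return (inputTokens * cost.input + outputTokens * cost.output) / 1_000_000;
}
```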
- Remove per-thinking-level model variants (gpt-5.2-codex-high, etc.)
- Remove thinkingLevels from Model type
- Provider clamps reasoning effort internally
- Omit reasoning field when thinking is off
fixes #472
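The last two points boil down to the provider owning the reasoning knob. A rough sketch, with the request field name and clamping rule as assumptions rather than the exact implementation:

```typescript
type ReasoningEffort = "low" | "medium" | "high";

// Callers no longer pick a per-level model variant; the provider clamps the
// requested effort and omits the reasoning field entirely when thinking is off.
function buildReasoningField(
  thinkingEnabled: boolean,
  requested: ReasoningEffort | undefined,
): { reasoning?: { effort: ReasoningEffort } } {
  if (!thinkingEnabled) return {}; // omitted, not sent as null or "none"
  const supported: ReasoningEffort[] = ["low", "medium", "high"];
  const effort = requested && supported.includes(requested) ? requested : "medium";
  return { reasoning: { effort } };
}
```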
Previously the system prompt was converted to an input message in convertMessages,
then stripped out by filterPiSystemPrompts. Now the system prompt is passed directly
to transformRequestBody and appended after CODEX_PI_BRIDGE in the bridge message.
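A minimal sketch of the new flow; only transformRequestBody and CODEX_PI_BRIDGE are real names here, the request field is an assumption:

```typescript
declare const CODEX_PI_BRIDGE: string; // placeholder for the actual bridge text

// The system prompt no longer travels as an input message (and so no longer
// needs filterPiSystemPrompts); it is appended to the bridge message instead.
function transformRequestBody(body: Record<string, unknown>, systemPrompt?: string) {
  const bridge = systemPrompt ? `${CODEX_PI_BRIDGE}\n\n${systemPrompt}` : CODEX_PI_BRIDGE;
  return { ...body, instructions: bridge }; // "instructions" is an assumed field name
}
```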
- /tools command opens SettingsList-based selector for all loaded tools
- Space/Enter toggles individual tools between enabled/disabled
- Changes apply immediately on toggle (like /settings)
- Tool selection persisted to session via appendEntry() (see the sketch after this list)
- State restored from current branch on session_start, session_tree, session_branch
- Uses getBranch() to respect branch-specific tool configuration
- Export getSettingsListTheme and getSelectListTheme for hooks to use
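A sketch of the persistence and restore path; the entry shape and session method signatures are assumptions built around appendEntry() and getBranch():

```typescript
// Hypothetical entry shape for persisted tool selection.
interface ToolSelectionEntry {
  type: "tool-selection";
  enabled: string[]; // tool names left enabled
}

// Toggling in the /tools selector applies immediately and records the change.
function persistToolSelection(
  session: { appendEntry(entry: ToolSelectionEntry): void },
  enabled: Set<string>,
): void {
  session.appendEntry({ type: "tool-selection", enabled: [...enabled] });
}

// On session_start / session_tree / session_branch, the newest tool-selection
// entry on the current branch (via getBranch()) wins; otherwise all tools stay on.
function restoreToolSelection(
  branch: Array<{ type: string; enabled?: string[] }>,
): string[] | null {
  for (let i = branch.length - 1; i >= 0; i--) {
    const entry = branch[i];
    if (entry.type === "tool-selection" && entry.enabled) return entry.enabled;
  }
  return null;
}
```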
- Implement google-vertex provider in packages/ai
- Support ADC (Application Default Credentials) via @google/generative-ai
- Add Gemini model catalog for Vertex AI
- Update packages/coding-agent to handle google-vertex provider
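ADC itself is independent of this package; the sketch below illustrates the general mechanism with google-auth-library (not the dependency named above), purely to show where credentials come from when no API key is configured:

```typescript
import { GoogleAuth } from "google-auth-library";

// Generic ADC illustration: credentials are resolved from
// GOOGLE_APPLICATION_CREDENTIALS, `gcloud auth application-default login`,
// or the metadata server; nothing is stored in the agent's settings.
async function vertexAuthContext() {
  const auth = new GoogleAuth({
    scopes: ["https://www.googleapis.com/auth/cloud-platform"],
  });
  const projectId = await auth.getProjectId();
  const token = await auth.getAccessToken();
  return { projectId, token };
}
```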
- Add tree sidebar with search and filter (Default/All/Labels)
- Client-side markdown/syntax highlighting via vendored marked.js + highlight.js
- Base64 encode session data to avoid escaping issues (sketched below)
- Reuse theme.ts color tokens via getResolvedThemeColors()
- Sticky sidebar, responsive mobile layout with overlay
- Click tree node to scroll to message
- Keyboard shortcuts: Esc to reset, Ctrl/Cmd+F to search
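The base64 step exists because raw session JSON embedded in a <script> tag can contain quotes, backticks, or a literal </script> that breaks the page. A minimal sketch of the idea (function name is illustrative):

```typescript
// Encode on export, decode in the browser before rendering.
function embedSessionData(session: unknown): string {
  const encoded = Buffer.from(JSON.stringify(session), "utf-8").toString("base64");
  return `<script>
  const session = JSON.parse(
    new TextDecoder().decode(Uint8Array.from(atob("${encoded}"), (c) => c.charCodeAt(0))),
  );
  // ...render the tree sidebar and messages from \`session\`...
  </script>`;
}
```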
- Check response.stopReason instead of catching errors for abort detection
- Return result object from _generateBranchSummary instead of throwing (see the sketch below)
- Fix summary attachment: attach to navigation target position, not old branch
- Support root-level summaries (parentId=null) in branchWithSummary
- Remove setTimeout hack for re-showing tree selector on abort
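A sketch of the first two points; the result shape and the exact stopReason values are assumptions:

```typescript
type BranchSummaryResult =
  | { ok: true; summary: string }
  | { ok: false; aborted: boolean };

async function generateBranchSummary(
  complete: (prompt: string) => Promise<{ stopReason: string; text: string }>,
  prompt: string,
): Promise<BranchSummaryResult> {
  const response = await complete(prompt);
  // Abort is read off the response instead of being caught as a thrown error.
  if (response.stopReason === "aborted") return { ok: false, aborted: true };
  if (response.stopReason === "error") return { ok: false, aborted: false };
  return { ok: true, summary: response.text };
}
```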
- Add TreeSelectorComponent with ASCII tree visualization
- Add AgentSession.navigateTree() for switching branches (see the sketch after this list)
- Add session_before_tree/session_tree hook events
- Add SessionManager.resetLeaf() for navigating to root
- Change leafId from string to string|null for consistency with parentId
- Support optional branch summarization when switching
- Update buildSessionContext() to handle null leafId
- Add /tree to slash commands in interactive mode
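A hypothetical usage sketch; navigateTree() and the string|null leaf id come from this change, the option names are guesses:

```typescript
async function switchBranch(session: {
  navigateTree(targetId: string | null, opts?: { summarize?: boolean }): Promise<void>;
}): Promise<void> {
  // Jump to another branch, optionally summarizing the branch being left.
  await session.navigateTree("entry-id-from-tree-selector", { summarize: true });

  // Because leafId is now string | null (mirroring parentId), navigating back to
  // the root is expressed as null, which buildSessionContext() must accept.
  await session.navigateTree(null);
}
```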
Hook commands now always execute immediately, even during streaming.
If a command needs to interact with the LLM, it uses pi.sendMessage()
which handles queueing automatically.
This simplifies the API and eliminates the issue of queued slash
commands being sent to the LLM instead of executing.
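Sketch of a hook command under this model; the registration call is a placeholder, pi.sendMessage() is the real entry point:

```typescript
export function registerReviewCommand(pi: {
  registerCommand(name: string, handler: () => void): void; // placeholder signature
  sendMessage(text: string): void;
}): void {
  pi.registerCommand("review", () => {
    // Runs immediately, even while a response is still streaming.
    // Anything LLM-bound goes through sendMessage(), which queues on its own.
    pi.sendMessage("Please review the staged changes.");
  });
}
```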
HookMessage (role: hookMessage) was being passed directly to transport
without transformation. Now it's transformed via messageTransformer
which converts it to a proper user message for the LLM.
Following the same pattern as BashExecutionMessage:
- HookAppMessage has role: 'hookMessage' with customType, content, display, details
- isHookAppMessage() type guard for checking message type
- messageTransformer converts to user message for LLM context
- TUI checks isHookAppMessage() for rendering as CustomMessageComponent
This keeps the API clean for anyone building on AgentSession: callers use the
type guard instead of relying on internal marker fields.
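A compact sketch of the pattern; field names follow the description above, the rest is illustrative:

```typescript
interface HookAppMessage {
  role: "hookMessage";
  customType: string;
  content: string;   // what the messageTransformer forwards to the LLM
  display?: unknown; // data the TUI uses to render a CustomMessageComponent
  details?: unknown;
}

function isHookAppMessage(msg: { role: string }): msg is HookAppMessage {
  return msg.role === "hookMessage";
}

// messageTransformer step: the transport only ever sees a plain user message.
function toLlmMessage(msg: HookAppMessage): { role: "user"; content: string } {
  return { role: "user", content: msg.content };
}
```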
- getEnvApiKey: ANTHROPIC_OAUTH_TOKEN now takes precedence over ANTHROPIC_API_KEY (sketched after this list)
- findCutPoint: Stop the scan-backwards loop at the session header (it previously decremented past the header, yielding a null preparation)
- generateSummary/generateTurnPrefixSummary: Throw on stopReason=error instead of returning empty string
- Test files: Fix API key priority order, use keepRecentTokens=1 for small test conversations
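The precedence fix is effectively a one-liner, sketched here with the real environment variable names:

```typescript
// OAuth token wins when both are set.
function getEnvApiKey(): string | undefined {
  return process.env.ANTHROPIC_OAUTH_TOKEN ?? process.env.ANTHROPIC_API_KEY;
}
```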
- Migrate glm-4.5, glm-4.5-air, glm-4.5-flash, glm-4.6, glm-4.7 from anthropic-messages to openai-completions API
- Updated baseUrl from https://api.z.ai/api/anthropic to https://api.z.ai/api/coding/paas/v4
- Added compat setting to disable developer role for zai models
- Filter empty text blocks in openai-completions to avoid zai API validation errors
- Fixed zai provider tests to use OpenAI-style options (reasoningEffort)
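The empty-text filter is roughly the following (block shape simplified):

```typescript
interface TextBlock {
  type: "text";
  text: string;
}
type ContentBlock = TextBlock | { type: string };

// zai rejects requests containing empty text blocks, so drop them before sending.
function dropEmptyTextBlocks(blocks: ContentBlock[]): ContentBlock[] {
  return blocks.filter(
    (block) => !(block.type === "text" && (block as TextBlock).text.trim() === ""),
  );
}
```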
- Change all zai models from anthropic-messages to openai-completions API
- Update baseUrl from https://api.z.ai/api/anthropic to https://api.z.ai/api/coding/paas/v4
- Add compat setting to disable developer role for zai
- Update zai provider tests to use OpenAI-style options (reasoningEffort instead of thinkingEnabled/thinkingBudgetTokens)
- Enable previously disabled thinking and image input tests for zai models
- Add AuthStorage class for credential storage (auth.json)
- Add ModelRegistry class for model management with API key resolution
- Add discoverAuthStorage() and discoverModels() discovery functions (usage sketched below)
- Add migration from legacy oauth.json and settings.json apiKeys to auth.json
- Remove configureOAuthStorage, defaultGetApiKey, findModel, discoverAvailableModels
- Remove apiKeys from Settings type and SettingsManager methods
- Rename getOAuthPath to getAuthPath
- Update SDK, examples, docs, tests, and mom package
Fixes #296
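A hypothetical usage sketch of the new discovery entry points; the method names on AuthStorage and ModelRegistry are guesses:

```typescript
declare class AuthStorage {
  // Backed by auth.json; legacy oauth.json and settings.json apiKeys are migrated in.
  getApiKey(provider: string): Promise<string | undefined>;
}
declare class ModelRegistry {
  // Model lookup with API key resolution folded in (replaces findModel/defaultGetApiKey).
  find(modelId: string): { id: string; provider: string } | undefined;
}
declare function discoverAuthStorage(): Promise<AuthStorage>;
declare function discoverModels(auth: AuthStorage): Promise<ModelRegistry>;

async function resolveModel(modelId: string) {
  const auth = await discoverAuthStorage();
  const models = await discoverModels(auth);
  return models.find(modelId);
}
```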
- Merge branch event into session with before_branch/branch reasons
- Add before_switch, before_clear, shutdown reasons
- before_* events can be cancelled with { cancel: true } (see the sketch after this list)
- Update RPC commands to return cancelled status
- Add shutdown event on process exit
- New example hooks: confirm-destructive, dirty-repo-guard, auto-commit-on-exit
fixes #278
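A sketch in the spirit of the dirty-repo-guard example; the registration API and payload shape are assumptions, the { cancel: true } contract is the point:

```typescript
import { execSync } from "node:child_process";

export function dirtyRepoGuard(pi: {
  on(event: "session", handler: (e: { reason: string }) => { cancel: boolean } | void): void;
}): void {
  pi.on("session", (e) => {
    if (e.reason !== "before_switch" && e.reason !== "before_clear") return;
    const dirty =
      execSync("git status --porcelain", { encoding: "utf-8" }).trim().length > 0;
    // Returning { cancel: true } from a before_* reason blocks the operation;
    // RPC callers see the cancelled status in the command result.
    if (dirty) return { cancel: true };
  });
}
```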
- Add new API type 'google-cloud-code-assist' for Gemini CLI / Antigravity auth
- Extract shared Google utilities to google-shared.ts
- Implement streaming provider for Cloud Code Assist endpoint
- Add 7 models: gemini-3-pro-high/low, gemini-3-flash, claude-sonnet/opus, gpt-oss
Models use OAuth authentication and have no per-token cost (usage counts against the Google account's quota).
OAuth flow will be implemented in coding-agent in a follow-up.