Commit graph

32 commits

Author SHA1 Message Date
Mario Zechner
deee1c2952 Release v0.22.3 2025-12-16 20:06:05 +01:00
Mario Zechner
7ac832586f Add tool result streaming
- Add AgentToolUpdateCallback type and optional onUpdate callback to AgentTool.execute()
- Add tool_execution_update event with toolCallId, toolName, args, partialResult
- Normalize tool_execution_end to always use AgentToolResult (no more string fallback)
- Bash tool streams truncated rolling buffer output during execution
- ToolExecutionComponent shows last N lines when collapsed (not first N)
- Interactive mode handles tool_execution_update events
- Update RPC docs and ai/agent READMEs

fixes #44
2025-12-16 14:53:17 +01:00
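A minimal sketch of what a streaming-aware tool could look like under the API described in the commit above. `AgentToolUpdateCallback`, `AgentToolResult`, and the optional `onUpdate` parameter are named in the commit; the exact field shapes and the `slowTool` example are assumptions for illustration.

```typescript
// Sketch only: types reconstructed from the commit message; the real
// definitions in pi's agent package may differ.
type AgentToolResult = { output: string; isError?: boolean };

// Called repeatedly while a tool runs, carrying partial output.
type AgentToolUpdateCallback = (partialResult: AgentToolResult) => void;

interface AgentTool {
  name: string;
  // onUpdate is optional, so non-streaming tools keep working unchanged.
  execute(args: Record<string, unknown>, onUpdate?: AgentToolUpdateCallback): Promise<AgentToolResult>;
}

// Hypothetical streaming tool: emits a truncated rolling buffer while it works,
// mirroring the bash tool's "last N lines" behavior described above.
const slowTool: AgentTool = {
  name: "slow-demo",
  async execute(_args, onUpdate) {
    const lines: string[] = [];
    for (let i = 1; i <= 5; i++) {
      lines.push(`step ${i} done`);
      onUpdate?.({ output: lines.slice(-3).join("\n") }); // keep last 3 lines
      await new Promise((r) => setTimeout(r, 100));
    }
    return { output: lines.join("\n") };
  },
};

// A host could forward each partial result as a tool_execution_update event.
void slowTool.execute({}, (partial) => console.log("update:", partial.output));
```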
Mario Zechner
8319628bc3 Add changelog entries for X-Initiator header support (#200) 2025-12-16 14:34:56 +01:00
Mario Zechner
fbda78bfb3 Fix reasoning disabled by default for all providers
Previously, when reasoning was not specified, some providers like Gemini
with 'dynamic thinking' enabled by default would still use thinking.
Now explicitly sets thinkingEnabled: false (Anthropic) and
thinking: { enabled: false } (Google) when reasoning is undefined.

Closes #180
2025-12-15 22:42:08 +01:00
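A sketch of the request-building logic implied by the commit above. Only the two flag shapes (`thinkingEnabled: false` for Anthropic, `thinking: { enabled: false }` for Google) come from the commit; `buildProviderOptions` and the option layout are illustrative assumptions.

```typescript
// Illustrative only: the commit states which flags each provider receives
// when reasoning is undefined, not the surrounding function.
type Reasoning = "low" | "medium" | "high" | undefined;

function buildProviderOptions(provider: "anthropic" | "google", reasoning: Reasoning) {
  if (provider === "anthropic") {
    // Explicit false instead of omitting the flag, so a provider default
    // (e.g. Gemini's "dynamic thinking") can never re-enable it.
    return reasoning ? { thinkingEnabled: true } : { thinkingEnabled: false };
  }
  return reasoning ? { thinking: { enabled: true } } : { thinking: { enabled: false } };
}

console.log(buildProviderOptions("google", undefined)); // { thinking: { enabled: false } }
```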
Mario Zechner
fd5134f88c Release v0.22.2 2025-12-15 22:09:14 +01:00
Mario Zechner
a7e3b8625b Release v0.22.1 2025-12-15 21:53:27 +01:00
Mario Zechner
04058d5812 Release v0.22.0 2025-12-15 20:14:25 +01:00
Mario Zechner
8df24b48ab Update docs and changelogs for GitHub Copilot changes 2025-12-15 20:08:06 +01:00
Mario Zechner
b66157c649 Add GitHub Copilot support (#191)
- OAuth login for GitHub Copilot via /login command
- Support for github.com and GitHub Enterprise
- Models sourced from models.dev (Claude, GPT, Gemini, Grok, etc.)
- Dynamic base URL from token's proxy-ep field
- Use vscode-chat integration ID for API compatibility
- Documentation for model enablement at github.com/settings/copilot/features

Co-authored-by: cau1k <cau1k@users.noreply.github.com>
2025-12-15 19:05:17 +01:00
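A hedged sketch of deriving the dynamic base URL from the token's `proxy-ep` field mentioned above. It assumes the Copilot token is a semicolon-separated list of key=value pairs and that the public endpoint is a reasonable fallback; neither detail is stated in the commit.

```typescript
// Assumption: token looks like "tid=...;exp=...;proxy-ep=host;..." and the
// API base URL is built from the proxy-ep host.
function baseUrlFromCopilotToken(token: string): string {
  const proxyEp = token
    .split(";")
    .map((pair) => pair.split("="))
    .find(([key]) => key === "proxy-ep")?.[1];
  // Fallback endpoint is an assumption for the sketch.
  return proxyEp ? `https://${proxyEp}` : "https://api.githubcopilot.com";
}

console.log(baseUrlFromCopilotToken("tid=abc;proxy-ep=proxy.example.githubcopilot.com;exp=123"));
```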
Mario Zechner
b9fe556d5b Add changelog entry for #176 (Gemini 3 Pro thinking levels) 2025-12-13 02:10:38 +01:00
Mario Zechner
b0628786a7 Add [Unreleased] section 2025-12-10 23:40:27 +01:00
Mario Zechner
b40ecf0ee1 Release v0.18.2 2025-12-10 23:39:16 +01:00
Mario Zechner
bb445d24f1 Auto-retry on transient provider errors (overloaded, rate limit, 5xx)
- Add retry logic with exponential backoff (2s, 4s, 8s) in AgentSession
- Disable Anthropic SDK built-in retries (maxRetries: 0) to allow app-level handling
- TUI shows retry status with Escape to cancel
- RPC mode: add set_auto_retry, abort_retry commands and auto_retry_start/end events
- Configurable via settings.json: retry.enabled, retry.maxRetries, retry.baseDelayMs
- Exclude context overflow errors from retry (handled by compaction)

fixes #157
2025-12-10 23:36:46 +01:00
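A minimal sketch of the retry shape described in the commit above: exponential backoff from a configurable base delay, transient errors only, context overflow excluded. The settings keys match the commit; `withRetry`, `isTransient`, and the error classification by message are assumptions, not pi's actual implementation.

```typescript
interface RetrySettings {
  enabled: boolean;    // settings.json: retry.enabled
  maxRetries: number;  // settings.json: retry.maxRetries
  baseDelayMs: number; // settings.json: retry.baseDelayMs (2000 -> 2s, 4s, 8s)
}

function isTransient(err: unknown): boolean {
  const msg = String(err);
  // Overloaded, rate-limit, and 5xx errors retry; context overflow does not
  // (the commit routes that case to compaction instead).
  return /overloaded|rate.?limit|5\d\d/i.test(msg) && !/context.*overflow/i.test(msg);
}

async function withRetry<T>(fn: () => Promise<T>, settings: RetrySettings): Promise<T> {
  let attempt = 0;
  for (;;) {
    try {
      return await fn();
    } catch (err) {
      if (!settings.enabled || attempt >= settings.maxRetries || !isTransient(err)) throw err;
      const delay = settings.baseDelayMs * 2 ** attempt; // exponential backoff
      await new Promise((r) => setTimeout(r, delay));
      attempt++;
    }
  }
}
```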
Mario Zechner
751e10e78b Add [Unreleased] section to changelogs 2025-12-10 21:41:50 +01:00
Mario Zechner
f931c57726 Release v0.18.1 2025-12-10 21:25:15 +01:00
Mario Zechner
1ce09681b1 Add issue links to changelog entries 2025-12-10 21:24:05 +01:00
Mario Zechner
c91afd76f1 Update changelogs for recent fixes 2025-12-10 21:23:08 +01:00
Mario Zechner
99b4b1aca0 Add Mistral as AI provider
- Add Mistral to KnownProvider type and model generation
- Implement Mistral-specific compat handling in openai-completions:
  - requiresToolResultName: tool results need name field
  - requiresAssistantAfterToolResult: synthetic assistant message between tool/user
  - requiresThinkingAsText: thinking blocks as <thinking> text
  - requiresMistralToolIds: tool IDs must be exactly 9 alphanumeric chars
- Add MISTRAL_API_KEY environment variable support
- Add Mistral tests across all test files
- Update documentation (README, CHANGELOG) for both ai and coding-agent packages
- Remove client IDs from gemini.md, reference upstream source instead

Closes #165
2025-12-10 20:36:19 +01:00
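A sketch of the compat flags listed in the commit above. The flag names come straight from the bullet list; the interface layout and the tool-ID helper are assumptions that only illustrate the stated rule (exactly 9 alphanumeric characters).

```typescript
interface OpenAICompat {
  requiresToolResultName?: boolean;
  requiresAssistantAfterToolResult?: boolean;
  requiresThinkingAsText?: boolean;
  requiresMistralToolIds?: boolean;
}

const mistralCompat: OpenAICompat = {
  requiresToolResultName: true,
  requiresAssistantAfterToolResult: true,
  requiresThinkingAsText: true,
  requiresMistralToolIds: true,
};

// Hypothetical helper: coerce a tool call ID into Mistral's expected format
// of exactly 9 alphanumeric characters.
function toMistralToolId(id: string): string {
  const clean = id.replace(/[^a-zA-Z0-9]/g, "");
  return clean.padEnd(9, "0").slice(0, 9);
}

console.log(toMistralToolId("call_abc-123")); // "callabc12"
```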
Lukas Pitschl
a248e2547a fix(ai): remove global process.env.ANTHROPIC_API_KEY deletion (#164)
* fix(ai): remove global process.env.ANTHROPIC_API_KEY deletion

The code was deleting process.env.ANTHROPIC_API_KEY to prevent the SDK
from using it when OAuth tokens were provided. However, this was a global
mutation that affected the entire Node.js process, causing the API key to
be unavailable after the first prompt.

The Anthropic SDK constructor already handles credential selection via
parameters (apiKey: null, authToken: token for OAuth vs apiKey: key for
regular keys), so the environment variable deletion was unnecessary.

* Update CHANGELOG.md for API key fix
2025-12-10 18:12:16 +01:00
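A sketch of the constructor-level credential selection this fix relies on, assuming the `@anthropic-ai/sdk` package; the `apiKey: null` vs `authToken` split is quoted from the commit body, the surrounding helper is illustrative.

```typescript
import Anthropic from "@anthropic-ai/sdk";

function makeClient(creds: { oauthToken?: string; apiKey?: string }) {
  if (creds.oauthToken) {
    // OAuth: pass the bearer token and explicitly null out apiKey so the SDK
    // does not fall back to process.env.ANTHROPIC_API_KEY for this client.
    return new Anthropic({ apiKey: null, authToken: creds.oauthToken });
  }
  // Regular key: pass it directly; the environment variable stays untouched.
  return new Anthropic({ apiKey: creds.apiKey });
}
```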
Mario Zechner
55032f1697 Add [Unreleased] section to changelogs 2025-12-09 21:51:01 +01:00
Mario Zechner
2d9ecd1750 Release v0.17.0 2025-12-09 21:47:39 +01:00
Mario Zechner
5a9d844f9a Simplify compaction: remove proactive abort, use Agent.continue() for retry
- Add agentLoopContinue() to pi-ai for resuming from existing context
- Add Agent.continue() method and transport.continue() interface
- Simplify AgentSession compaction to two cases: overflow (auto-retry) and threshold (no retry)
- Remove proactive mid-turn compaction abort
- Merge turn prefix summary into main summary
- Add isCompacting property to AgentSession and RPC state
- Block input during compaction in interactive mode
- Show compaction count on session resume
- Rename RPC.md to rpc.md for consistency

Related to #128
2025-12-09 21:43:49 +01:00
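An illustrative decision table for the two compaction cases named in the commit above (overflow with auto-retry vs. threshold without). The function, parameter names, and the 0.8 threshold are assumptions; only the two-case split and the retry-via-continue behavior come from the commit.

```typescript
interface CompactionDecision {
  compact: boolean;
  retryAfterCompaction: boolean; // i.e. resume via Agent.continue() afterwards
}

function decideCompaction(contextTokens: number, contextWindow: number, threshold = 0.8): CompactionDecision {
  if (contextTokens >= contextWindow) {
    // Overflow: the provider rejected the request, so compact and auto-retry
    // the turn by continuing from the existing context.
    return { compact: true, retryAfterCompaction: true };
  }
  if (contextTokens >= contextWindow * threshold) {
    // Threshold: compact proactively between turns; no retry needed.
    return { compact: true, retryAfterCompaction: false };
  }
  return { compact: false, retryAfterCompaction: false };
}
```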
Mario Zechner
00370cab39 Add xhigh thinking level for OpenAI codex-max models
- Add 'xhigh' to ThinkingLevel type in ai and agent packages
- Map xhigh to reasoning_effort: 'max' for OpenAI providers
- Add thinkingXhigh color token to theme schema and built-in themes
- Show xhigh option only when using codex-max models
- Update CHANGELOG for both ai and coding-agent packages

closes #143
2025-12-08 21:12:54 +01:00
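A small sketch of the level-to-effort mapping; only the `xhigh` to `reasoning_effort: 'max'` pairing is stated in the commit, the remaining rows of the table are assumptions.

```typescript
type ThinkingLevel = "low" | "medium" | "high" | "xhigh";

function toOpenAIReasoningEffort(level: ThinkingLevel): string {
  const map: Record<ThinkingLevel, string> = {
    low: "low",
    medium: "medium",
    high: "high",
    xhigh: "max", // extended effort value for codex-max models
  };
  return map[level];
}
```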
Mario Zechner
87a1a9ded4 Add OpenAICompat for openai-completions provider quirks
Fixes #133
2025-12-08 19:02:03 +01:00
Mario Zechner
8bec289dc6 Remove provider-level tool validation, add validateToolCall helper 2025-12-08 18:04:33 +01:00
Markus Ylisiurunen
0196308266 add option to skip provider tool call validation 2025-12-07 17:24:06 +02:00
Mario Zechner
2641424bfa Add [Unreleased] section to CHANGELOG 2025-12-06 22:49:45 +01:00
Mario Zechner
ecdbd88f5d Release v0.13.0 2025-12-06 22:48:39 +01:00
Mario Zechner
86e5a70ec4 Add totalTokens field to Usage type
- Added totalTokens field to Usage interface in pi-ai
- Anthropic: computed as input + output + cacheRead + cacheWrite
- OpenAI/Google: uses native total_tokens/totalTokenCount
- Fixed openai-completions to compute totalTokens when reasoning tokens present
- Updated calculateContextTokens() to use totalTokens field
- Added comprehensive test covering 13 providers

fixes #130
2025-12-06 22:46:02 +01:00
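A sketch of the `Usage` shape and the provider-specific `totalTokens` rules described above. The field names follow the commit; the interface layout itself is an assumption.

```typescript
interface Usage {
  input: number;
  output: number;
  cacheRead: number;
  cacheWrite: number;
  totalTokens: number;
}

// Anthropic reports no total, so it is computed from the parts.
function anthropicTotal(u: Omit<Usage, "totalTokens">): number {
  return u.input + u.output + u.cacheRead + u.cacheWrite;
}

// OpenAI and Google expose a native total (total_tokens / totalTokenCount),
// which is used as-is rather than recomputed.
```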
Mario Zechner
c7585e37c9 Release v0.12.10 2025-12-04 20:51:57 +01:00
Mario Zechner
989af79752 fix: normalize OpenAI token counting, add branch source tracking
pi-ai:
- Fixed usage.input to exclude cached tokens for OpenAI providers
- Previously input included cached tokens, causing double-counting
- Now input + output + cacheRead + cacheWrite correctly gives total context

coding-agent:
- Session header now includes branchedFrom field for branched sessions
- Updated compaction.md with refined implementation plan
- Updated session.md with branchedFrom documentation
2025-12-03 17:11:22 +01:00
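A worked example of the normalization described above. The raw field names follow OpenAI's chat completions usage object; the mapping itself is the commit's rule that `input` must exclude cached tokens so the sum of the parts equals the true context size.

```typescript
const raw = {
  prompt_tokens: 1200, // includes cached tokens
  completion_tokens: 300,
  prompt_tokens_details: { cached_tokens: 1000 },
};

const usage = {
  input: raw.prompt_tokens - raw.prompt_tokens_details.cached_tokens, // 200
  output: raw.completion_tokens,                                      // 300
  cacheRead: raw.prompt_tokens_details.cached_tokens,                 // 1000
  cacheWrite: 0,
};

// input + output + cacheRead + cacheWrite is now 1500 instead of
// double-counting the 1000 cached tokens.
console.log(usage.input + usage.output + usage.cacheRead + usage.cacheWrite);
```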
Mario Zechner
213bc4df1c mom: add centralized logging, usage tracking, and improve prompt caching
Major improvements to mom's logging and cost reporting:

Centralized Logging System:
- Add src/log.ts with type-safe logging functions
- Colored console output (green=user, yellow=mom, dim=details)
- Consistent format: [HH:MM:SS] [context] message
- Replace scattered console.log/error calls throughout codebase

Usage Tracking & Cost Reporting:
- Track tokens (input, output, cache read/write) and costs per run
- Display summary at end of each run in console and Slack thread
- Example: 💰 Usage: 12,543 in + 847 out (5,234 cache read) = $0.0234

Prompt Caching Optimization:
- Move recent messages from system prompt to user message
- System prompt now mostly static (only changes with memory files)
- Enables effective use of Anthropic's prompt caching
- Significantly reduces costs on subsequent requests

Model & Cost Improvements:
- Switch from Claude Opus 4.5 to Sonnet 4.5 (~40% cost reduction)
- Fix Claude Opus 4.5 cache pricing in ai package (was 3x too expensive)
- Add manual override in generate-models.ts until upstream fix merges
- Submitted PR to models.dev: https://github.com/sst/models.dev/pull/439

UI/UX Improvements:
- Extract actual text from tool results instead of JSON wrapper
- Cleaner Slack thread formatting with duration and labels
- Tool args formatting shows paths with offset:limit notation
- Add chalk for colored terminal output

Dependencies:
- Add chalk package for terminal colors
2025-11-26 18:04:16 +01:00
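A minimal sketch of the centralized logger described in the commit above: chalk-colored output in the `[HH:MM:SS] [context] message` format, with green for user, yellow for mom, and dim for details. The function names and sample calls are assumptions; only the format and color scheme come from the commit.

```typescript
import chalk from "chalk";

function timestamp(): string {
  return new Date().toTimeString().slice(0, 8); // HH:MM:SS
}

function logUser(context: string, message: string): void {
  console.log(chalk.green(`[${timestamp()}] [${context}] ${message}`));
}

function logMom(context: string, message: string): void {
  console.log(chalk.yellow(`[${timestamp()}] [${context}] ${message}`));
}

function logDetail(context: string, message: string): void {
  console.log(chalk.dim(`[${timestamp()}] [${context}] ${message}`));
}

logUser("slack", "new message in #general");
logMom("run", "responding in thread");
logDetail("usage", "12,543 in + 847 out (5,234 cache read) = $0.0234");
```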