Mirror of https://github.com/getcompanion-ai/co-mono.git, synced 2026-04-21 23:04:41 +00:00

Commit: Removed agent-old
Parent: 1f9a3a00cc
This commit: 92bad8619c
15 changed files with 1 addition and 4143 deletions

@@ -1,432 +0,0 @@
# pi-agent

A general-purpose agent with tool calling and session persistence, modeled after Claude Code but extremely hackable and minimal. It comes with a built-in TUI (also modeled after Claude Code) for interactive use.

Everything is designed to be easy:

- Writing custom UIs on top of it (via JSON mode in any language or the TypeScript API)
- Using it for inference steps in deterministic programs (via JSON mode in any language or the TypeScript API)
- Providing your own system prompts and tools
- Working with various LLM providers or self-hosted LLMs
## Installation

```bash
npm install -g @mariozechner/pi-agent
```

This installs the `pi-agent` command globally.
## Quick Start

By default, pi-agent uses OpenAI's API with model `gpt-5-mini` and authenticates using the `OPENAI_API_KEY` environment variable. Any OpenAI-compatible endpoint works, including Ollama, vLLM, OpenRouter, Groq, Anthropic, etc.

```bash
# Single message
pi-agent "What is 2+2?"

# Multiple messages processed sequentially
pi-agent "What is 2+2?" "What about 3+3?"

# Interactive chat mode (no messages = interactive)
pi-agent

# Continue most recently modified session in current directory
pi-agent --continue "Follow up question"

# GPT-OSS via Groq
pi-agent --base-url https://api.groq.com/openai/v1 --api-key $GROQ_API_KEY --model openai/gpt-oss-120b

# GLM 4.5 via OpenRouter
pi-agent --base-url https://openrouter.ai/api/v1 --api-key $OPENROUTER_API_KEY --model z-ai/glm-4.5

# Claude via Anthropic's OpenAI compatibility layer. See: https://docs.anthropic.com/en/api/openai-sdk
pi-agent --base-url https://api.anthropic.com/v1 --api-key $ANTHROPIC_API_KEY --model claude-opus-4-1-20250805

# Gemini via Google AI
pi-agent --base-url https://generativelanguage.googleapis.com/v1beta/openai/ --api-key $GEMINI_API_KEY --model gemini-2.5-flash
```
## Usage Modes

### Single-Shot Mode

Process one or more messages and exit:

```bash
pi-agent "First question" "Second question"
```

### Interactive Mode

Start an interactive chat session:

```bash
pi-agent
```

- Type messages and press Enter to send
- Type `exit` or `quit` to end the session
- Press Escape to interrupt while processing
- Press CTRL+C to clear the text editor
- Press CTRL+C twice quickly to exit
### JSON Mode

JSON mode enables programmatic integration by outputting events as JSONL (JSON Lines).

**Single-shot mode:** Outputs a stream of JSON events for each message, then exits.

```bash
pi-agent --json "What is 2+2?" "And the meaning of life?"
# Outputs (one event per line):
# {"type":"session_start","sessionId":"bb6f0acb-80cf-4729-9593-bcf804431a53","model":"gpt-5-mini","api":"completions","baseURL":"https://api.openai.com/v1","systemPrompt":"You are a helpful assistant."}
# {"type":"user_message","text":"What is 2+2?"}
# {"type":"assistant_start"}
# {"type":"token_usage","inputTokens":314,"outputTokens":16,"totalTokens":330,"cacheReadTokens":0,"cacheWriteTokens":0}
# {"type":"assistant_message","text":"2 + 2 = 4"}
# {"type":"user_message","text":"And the meaning of life?"}
# {"type":"assistant_start"}
# {"type":"token_usage","inputTokens":337,"outputTokens":331,"totalTokens":668,"cacheReadTokens":0,"cacheWriteTokens":0}
# {"type":"assistant_message","text":"Short answer (pop-culture): 42.\n\nMore useful answers:\n- Philosophical...
```

**Interactive mode:** Accepts JSON commands via stdin and outputs JSON events to stdout.

```bash
# Start interactive JSON mode
pi-agent --json
# Now send commands via stdin

# Pipe one or more initial messages in
(echo '{"type": "message", "content": "What is 2+2?"}'; cat) | pi-agent --json
# Outputs (one event per line):
# {"type":"session_start","sessionId":"bb64cfbe-dd52-4662-bd4a-0d921c332fd1","model":"gpt-5-mini","api":"completions","baseURL":"https://api.openai.com/v1","systemPrompt":"You are a helpful assistant."}
# {"type":"user_message","text":"What is 2+2?"}
# {"type":"assistant_start"}
# {"type":"token_usage","inputTokens":314,"outputTokens":16,"totalTokens":330,"cacheReadTokens":0,"cacheWriteTokens":0}
# {"type":"assistant_message","text":"2 + 2 = 4"}
```

Commands you can send via stdin in interactive JSON mode:

```json
{"type": "message", "content": "Your message here"} // Send a message to the agent
{"type": "interrupt"}                               // Interrupt current processing
```
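The JSONL stream is easy to consume from any language. A minimal sketch in Node.js that extracts assistant replies and the final token usage from a captured event stream (the sample events below are hand-written from the output shown above; no running agent required):

```javascript
// Parse a captured pi-agent --json transcript (JSONL) and pull out
// the assistant's replies and the final token_usage event.
const transcript = [
  '{"type":"session_start","model":"gpt-5-mini","api":"completions"}',
  '{"type":"user_message","text":"What is 2+2?"}',
  '{"type":"assistant_start"}',
  '{"type":"token_usage","inputTokens":314,"outputTokens":16,"totalTokens":330}',
  '{"type":"assistant_message","text":"2 + 2 = 4"}',
].join("\n");

const events = transcript.split("\n").map((line) => JSON.parse(line));
const replies = events.filter((e) => e.type === "assistant_message").map((e) => e.text);
const usage = events.filter((e) => e.type === "token_usage").at(-1);

console.log(replies); // [ '2 + 2 = 4' ]
console.log(usage.totalTokens); // 330
```

The same pattern works for a live process: read stdout line by line and `JSON.parse` each line, as the larger example further below does.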
## Configuration

### Command Line Options

```
--base-url <url>         API base URL (default: https://api.openai.com/v1)
--api-key <key>          API key (or set OPENAI_API_KEY env var)
--model <model>          Model name (default: gpt-5-mini)
--api <type>             API type: "completions" or "responses" (default: completions)
--system-prompt <text>   System prompt (default: "You are a helpful assistant.")
--continue               Continue previous session
--json                   JSON mode
--help, -h               Show help message
```
## Session Persistence

Sessions are automatically saved to `~/.pi/sessions/` and include:

- Complete conversation history
- Tool call results
- Token usage statistics

Use `--continue` to resume the last session:

```bash
pi-agent "Start a story about a robot"
# ... later ...
pi-agent --continue "Continue the story"
```
## Tools

The agent includes built-in tools for file system operations:

- **read_file** - Read file contents
- **list_directory** - List directory contents
- **bash** - Execute shell commands
- **glob** - Find files by pattern
- **ripgrep** - Search file contents

These tools are automatically available when using the agent through the `pi` command for code navigation tasks.
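In JSON mode, tool use shows up as paired `tool_call`/`tool_result` events (field names as defined by `AgentEvent` in `src/agent.ts`). A small sketch that tallies tool usage from a captured transcript (the sample events are hand-written for illustration):

```javascript
// Count tool calls per tool name in a captured --json transcript.
const lines = [
  '{"type":"tool_call","toolCallId":"t1","name":"list_directory","args":"{\\"path\\":\\".\\"}"}',
  '{"type":"tool_result","toolCallId":"t1","result":"cli.ts\\nagent.ts","isError":false}',
  '{"type":"tool_call","toolCallId":"t2","name":"read_file","args":"{\\"path\\":\\"cli.ts\\"}"}',
  '{"type":"tool_result","toolCallId":"t2","result":"...","isError":false}',
];

const counts = {};
for (const line of lines) {
  const event = JSON.parse(line);
  if (event.type === "tool_call") {
    counts[event.name] = (counts[event.name] || 0) + 1;
  }
}
console.log(counts); // { list_directory: 1, read_file: 1 }
```

Matching a `tool_result` back to its `tool_call` works the same way via the shared `toolCallId`.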
## JSON Mode Events

When using `--json`, the agent outputs these event types:

- `session_start` - New session started with metadata
- `user_message` - User input
- `assistant_start` - Assistant begins responding
- `assistant_message` - Assistant's response
- `reasoning` - Reasoning/thinking (for models that support it)
- `tool_call` - Tool being called
- `tool_result` - Result from tool
- `token_usage` - Token usage statistics (includes `reasoningTokens` for models with reasoning)
- `error` - Error occurred
- `interrupted` - Processing was interrupted

The complete TypeScript type definition for `AgentEvent` can be found in [`src/agent.ts`](src/agent.ts#L6).
## Build an Interactive UI with JSON Mode

Build custom UIs in any language by spawning pi-agent in JSON mode and communicating via stdin/stdout.

```javascript
import { spawn } from 'child_process';
import { createInterface } from 'readline';

// Start the agent in JSON mode
const agent = spawn('pi-agent', ['--json']);

// Create readline interface for parsing JSONL output from agent
const agentOutput = createInterface({input: agent.stdout, crlfDelay: Infinity});

// Create readline interface for user input
const userInput = createInterface({input: process.stdin, output: process.stdout});

// State tracking
let isProcessing = false, lastUsage, isExiting = false;

// Handle each line of JSON output from agent
agentOutput.on('line', (line) => {
  try {
    const event = JSON.parse(line);

    // Handle all event types
    switch (event.type) {
      case 'session_start':
        console.log(`Session started (${event.model}, ${event.api}, ${event.baseURL})`);
        console.log('Press CTRL + C to exit');
        promptUser();
        break;

      case 'user_message':
        // Already shown in prompt, skip
        break;

      case 'assistant_start':
        isProcessing = true;
        console.log('\n[assistant]');
        break;

      case 'reasoning':
        console.log(`[reasoning]\n${event.text}\n`);
        break;

      case 'tool_call':
        console.log(`[tool] ${event.name}(${event.args.substring(0, 50)})\n`);
        break;

      case 'tool_result': {
        const lines = event.result.split('\n');
        const truncated = lines.length - 5 > 0 ? `\n... (${lines.length - 5} more lines truncated)` : '';
        console.log(`[tool result]\n${lines.slice(0, 5).join('\n')}${truncated}\n`);
        break;
      }

      case 'assistant_message':
        console.log(event.text.trim());
        isProcessing = false;
        promptUser();
        break;

      case 'token_usage':
        lastUsage = event;
        break;

      case 'error':
        console.error('\n❌ Error:', event.message);
        isProcessing = false;
        promptUser();
        break;

      case 'interrupted':
        console.log('\n⚠️ Interrupted by user');
        isProcessing = false;
        promptUser();
        break;
    }
  } catch (e) {
    console.error('Failed to parse JSON:', line, e);
  }
});

// Send a message to the agent
function sendMessage(content) {
  agent.stdin.write(`${JSON.stringify({type: 'message', content: content})}\n`);
}

// Send interrupt signal
function interrupt() {
  agent.stdin.write(`${JSON.stringify({type: 'interrupt'})}\n`);
}

// Prompt for user input
function promptUser() {
  if (isExiting) return;

  if (lastUsage) {
    console.log(`\nin: ${lastUsage.inputTokens}, out: ${lastUsage.outputTokens}, cache read: ${lastUsage.cacheReadTokens}, cache write: ${lastUsage.cacheWriteTokens}`);
  }

  userInput.question('\n[user]\n> ', (answer) => {
    answer = answer.trim();
    if (answer) {
      sendMessage(answer);
    } else {
      promptUser();
    }
  });
}

// Handle Ctrl+C
process.on('SIGINT', () => {
  if (isProcessing) {
    interrupt();
  } else {
    agent.kill();
    process.exit(0);
  }
});

// Handle agent exit
agent.on('close', (code) => {
  isExiting = true;
  userInput.close();
  console.log(`\nAgent exited with code ${code}`);
  process.exit(code);
});

// Handle errors
agent.on('error', (err) => {
  console.error('Failed to start agent:', err);
  process.exit(1);
});

// Start the conversation
console.log('Pi Agent Interactive Chat');
```
### Usage Examples

```bash
# OpenAI o1/o3 - see thinking content with Responses API
pi-agent --api responses --model o1-mini "Explain quantum computing"

# Groq gpt-oss - reasoning with Chat Completions
pi-agent --base-url https://api.groq.com/openai/v1 --api-key $GROQ_API_KEY \
  --model openai/gpt-oss-120b "Complex math problem"

# Gemini 2.5 - thinking content automatically configured
pi-agent --base-url https://generativelanguage.googleapis.com/v1beta/openai/ \
  --api-key $GEMINI_API_KEY --model gemini-2.5-flash "Think step by step"

# OpenRouter - supports various reasoning models
pi-agent --base-url https://openrouter.ai/api/v1 --api-key $OPENROUTER_API_KEY \
  --model "qwen/qwen3-235b-a22b-thinking-2507" "Complex reasoning task"
```

### JSON Mode Events

When reasoning is active, you'll see:

- `reasoning` events with thinking text (when available)
- `token_usage` events include a `reasoningTokens` field
- Console/TUI show reasoning tokens with the ⚡ symbol
## Reasoning

Pi-agent supports reasoning/thinking tokens for models that provide this capability:

### Supported Providers

| Provider | Model | API | Thinking Content | Notes |
|----------|-------|-----|------------------|-------|
| **OpenAI** | o1, o3 | Responses | ✅ Full | Thinking events + token counts |
| | o1, o3 | Chat Completions | ❌ | Token counts only |
| | gpt-5 | Both APIs | ❌ | Model limitation (empty summaries) |
| **Groq** | gpt-oss | Chat Completions | ✅ Full | Via `reasoning_format: "parsed"` |
| | gpt-oss | Responses | ❌ | API doesn't support reasoning.summary |
| **Gemini** | 2.5 models | Chat Completions | ✅ Full | Auto-configured via extra_body |
| **Anthropic** | Claude | OpenAI Compat | ❌ | Use native API for thinking |
| **OpenRouter** | Various | Both APIs | Varies | Depends on underlying model |
### Technical Details

The agent automatically:

- Detects provider from base URL
- Tests model reasoning support on first use (cached)
- Adjusts request parameters per provider:
  - OpenAI: `reasoning_effort` (minimal/low)
  - Groq: `reasoning_format: "parsed"`
  - Gemini: `extra_body.google.thinking_config`
  - OpenRouter: `reasoning` object with `effort` field
- Parses provider-specific response formats:
  - Gemini: Extracts from `<thought>` tags
  - Groq: Uses `message.reasoning` field
  - OpenRouter: Uses `message.reasoning` field
  - OpenAI: Uses standard `reasoning` events
## Architecture

The agent is built with:

- **agent.ts** - Core Agent class and API functions
- **cli.ts** - CLI entry point, argument parsing, and JSON mode handler
- **args.ts** - Custom typed argument parser
- **session-manager.ts** - Session persistence
- **tools/** - Tool implementations
- **renderers/** - Output formatters (console, TUI, JSON)
## Development

### Running from Source

```bash
# Run directly with npx tsx - no build needed
npx tsx src/cli.ts "What is 2+2?"

# Interactive TUI mode
npx tsx src/cli.ts

# JSON mode for programmatic use
echo '{"type":"message","content":"list files"}' | npx tsx src/cli.ts --json
```
### Testing

The agent supports three testing modes:

#### 1. Test UI/Renderers (non-interactive mode)

```bash
# Test console renderer output and metrics
npx tsx src/cli.ts "list files in /tmp" 2>&1 | tail -5
# Verify: ↑609 ↓610 ⚒ 1 (tokens and tool count)

# Test TUI renderer (with stdin)
echo "list files" | npx tsx src/cli.ts 2>&1 | grep "⚒"
```
#### 2. Test Model Behavior (JSON mode)

```bash
# Extract metrics for model comparison
echo '{"type":"message","content":"write fibonacci in Python"}' | \
  npx tsx src/cli.ts --json --model gpt-4o-mini 2>&1 | \
  jq -s '[.[] | select(.type=="token_usage")] | last'

# Compare models: tokens used, tool calls made, quality
for model in "gpt-4o-mini" "gpt-4o"; do
  echo "Testing $model:"
  echo '{"type":"message","content":"fix syntax errors in: prnt(hello)"}' | \
    npx tsx src/cli.ts --json --model $model 2>&1 | \
    jq -r 'select(.type=="token_usage" or .type=="tool_call" or .type=="assistant_message")'
done
```
#### 3. LLM-as-Judge Testing

```bash
# Capture output from different models and evaluate with another LLM
TASK="write a Python function to check if a number is prime"

# Get response from model A
RESPONSE_A=$(echo "{\"type\":\"message\",\"content\":\"$TASK\"}" | \
  npx tsx src/cli.ts --json --model gpt-4o-mini 2>&1 | \
  jq -r 'select(.type=="assistant_message") | .text')

# Judge the response
echo "{\"type\":\"message\",\"content\":\"Rate this code (1-10): $RESPONSE_A\"}" | \
  npx tsx src/cli.ts --json --model gpt-4o 2>&1 | \
  jq -r 'select(.type=="assistant_message") | .text'
```
## Use as a Library

```typescript
import { Agent, ConsoleRenderer } from '@mariozechner/pi-agent';

const agent = new Agent({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: 'https://api.openai.com/v1',
  model: 'gpt-5-mini',
  api: 'completions',
  systemPrompt: 'You are a helpful assistant.'
}, new ConsoleRenderer());

await agent.ask('What is 2+2?');
```
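A renderer is just an `AgentEventReceiver` (an object with an async `on(event)` method, per the interface in `src/agent.ts`), so a custom one can stand in for `ConsoleRenderer`. A sketch in plain JavaScript; that the `Agent` constructor accepts any receiver here is an assumption based on that interface, so adjust to the actual constructor:

```javascript
// Minimal custom event receiver: collects assistant text instead of printing.
// Implements the AgentEventReceiver shape: { on(event): Promise<void> }.
class CollectingRenderer {
  constructor() {
    this.replies = [];
  }
  async on(event) {
    if (event.type === "assistant_message") {
      this.replies.push(event.text);
    }
    // Other event types (tool_call, token_usage, ...) are ignored here.
  }
}

// Demo: feed it a couple of events by hand (no API call needed).
const renderer = new CollectingRenderer();
renderer.on({ type: "assistant_start" });
renderer.on({ type: "assistant_message", text: "2 + 2 = 4" });
console.log(renderer.replies); // [ '2 + 2 = 4' ]
```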
packages/agent-old/package-lock.json (generated, 1343 changes): file diff suppressed because it is too large.

@@ -1,47 +0,0 @@
{
  "name": "@mariozechner/pi-agent-old",
  "version": "0.5.48",
  "description": "General-purpose agent with tool calling and session persistence",
  "type": "module",
  "bin": {
    "pi-agent": "dist/cli.js"
  },
  "main": "./dist/index.js",
  "types": "./dist/index.d.ts",
  "files": [
    "dist"
  ],
  "scripts": {
    "clean": "rm -rf dist",
    "build": "tsc -p tsconfig.build.json && chmod +x dist/cli.js",
    "check": "biome check --write .",
    "prepublishOnly": "npm run clean && npm run build"
  },
  "dependencies": {
    "@mariozechner/pi-tui": "^0.5.44",
    "@types/glob": "^8.1.0",
    "chalk": "^5.5.0",
    "glob": "^11.0.3",
    "openai": "^5.12.2"
  },
  "devDependencies": {},
  "keywords": [
    "agent",
    "ai",
    "llm",
    "openai",
    "claude",
    "cli",
    "tui"
  ],
  "author": "Mario Zechner",
  "license": "MIT",
  "repository": {
    "type": "git",
    "url": "git+https://github.com/badlogic/pi-mono.git",
    "directory": "packages/agent"
  },
  "engines": {
    "node": ">=20.0.0"
  }
}

@@ -1,741 +0,0 @@
import OpenAI from "openai";
import type { ResponseFunctionToolCallOutputItem } from "openai/resources/responses/responses.mjs";
import type { SessionManager } from "./session-manager.js";
import { executeTool, toolsForChat, toolsForResponses } from "./tools/tools.js";

export type AgentEvent =
  | { type: "session_start"; sessionId: string; model: string; api: string; baseURL: string; systemPrompt: string }
  | { type: "assistant_start" }
  | { type: "reasoning"; text: string }
  | { type: "tool_call"; toolCallId: string; name: string; args: string }
  | { type: "tool_result"; toolCallId: string; result: string; isError: boolean }
  | { type: "assistant_message"; text: string }
  | { type: "error"; message: string }
  | { type: "user_message"; text: string }
  | { type: "interrupted" }
  | {
      type: "token_usage";
      inputTokens: number;
      outputTokens: number;
      totalTokens: number;
      cacheReadTokens: number;
      cacheWriteTokens: number;
      reasoningTokens: number;
    };

export interface AgentEventReceiver {
  on(event: AgentEvent): Promise<void>;
}

export interface AgentConfig {
  apiKey: string;
  baseURL: string;
  model: string;
  api: "completions" | "responses";
  systemPrompt: string;
}

export interface ToolCall {
  name: string;
  arguments: string;
  id: string;
}

// Cache for model reasoning support detection per API type
const modelReasoningSupport = new Map<string, { completions?: boolean; responses?: boolean }>();

// Provider detection based on base URL
function detectProvider(baseURL?: string): "openai" | "gemini" | "groq" | "anthropic" | "openrouter" | "other" {
  if (!baseURL) return "openai";
  if (baseURL.includes("api.openai.com")) return "openai";
  if (baseURL.includes("generativelanguage.googleapis.com")) return "gemini";
  if (baseURL.includes("api.groq.com")) return "groq";
  if (baseURL.includes("api.anthropic.com")) return "anthropic";
  if (baseURL.includes("openrouter.ai")) return "openrouter";
  return "other";
}

// Parse provider-specific reasoning from message content
function parseReasoningFromMessage(message: any, baseURL?: string): { cleanContent: string; reasoningTexts: string[] } {
  const provider = detectProvider(baseURL);
  const reasoningTexts: string[] = [];
  let cleanContent = message.content || "";

  switch (provider) {
    case "gemini":
      // Gemini returns thinking in <thought> tags
      if (cleanContent.includes("<thought>")) {
        const thoughtMatches = cleanContent.matchAll(/<thought>([\s\S]*?)<\/thought>/g);
        for (const match of thoughtMatches) {
          reasoningTexts.push(match[1].trim());
        }
        // Remove all thought tags from the response
        cleanContent = cleanContent.replace(/<thought>[\s\S]*?<\/thought>/g, "").trim();
      }
      break;

    case "groq":
      // Groq returns reasoning in a separate field when reasoning_format is "parsed"
      if (message.reasoning) {
        reasoningTexts.push(message.reasoning);
      }
      break;

    case "openrouter":
      // OpenRouter returns reasoning in message.reasoning field
      if (message.reasoning) {
        reasoningTexts.push(message.reasoning);
      }
      break;

    default:
      // Other providers don't embed reasoning in message content
      break;
  }

  return { cleanContent, reasoningTexts };
}
// Adjust request options based on provider-specific requirements
function adjustRequestForProvider(
  requestOptions: any,
  api: "completions" | "responses",
  baseURL?: string,
  supportsReasoning?: boolean,
): any {
  const provider = detectProvider(baseURL);

  // Handle provider-specific adjustments
  switch (provider) {
    case "gemini":
      if (api === "completions" && supportsReasoning && requestOptions.reasoning_effort) {
        // Gemini needs extra_body for thinking content
        // Can't use both reasoning_effort and thinking_config
        const budget =
          requestOptions.reasoning_effort === "low"
            ? 1024
            : requestOptions.reasoning_effort === "medium"
              ? 8192
              : 24576;

        requestOptions.extra_body = {
          google: {
            thinking_config: {
              thinking_budget: budget,
              include_thoughts: true,
            },
          },
        };
        // Remove reasoning_effort when using thinking_config
        delete requestOptions.reasoning_effort;
      }
      break;

    case "groq":
      if (api === "responses" && requestOptions.reasoning) {
        // Groq responses API doesn't support reasoning.summary
        delete requestOptions.reasoning.summary;
      } else if (api === "completions" && supportsReasoning && requestOptions.reasoning_effort) {
        // Groq Chat Completions uses reasoning_format instead of reasoning_effort alone
        requestOptions.reasoning_format = "parsed";
        // Keep reasoning_effort for Groq
      }
      break;

    case "anthropic":
      // Anthropic's OpenAI compatibility has its own quirks
      // But thinking content isn't available via OpenAI compat layer
      break;

    case "openrouter":
      // OpenRouter uses a unified reasoning parameter format
      if (api === "completions" && supportsReasoning && requestOptions.reasoning_effort) {
        // Convert reasoning_effort to OpenRouter's reasoning format
        requestOptions.reasoning = {
          effort:
            requestOptions.reasoning_effort === "low"
              ? "low"
              : requestOptions.reasoning_effort === "minimal"
                ? "low"
                : requestOptions.reasoning_effort === "medium"
                  ? "medium"
                  : "high",
        };
        delete requestOptions.reasoning_effort;
      }
      break;

    default:
      // OpenAI and others use standard format
      break;
  }

  return requestOptions;
}
async function checkReasoningSupport(
  client: OpenAI,
  model: string,
  api: "completions" | "responses",
  baseURL?: string,
  signal?: AbortSignal,
): Promise<boolean> {
  // Check if already aborted
  if (signal?.aborted) {
    throw new Error("Interrupted");
  }

  // Check cache first
  const cacheKey = model;
  const cached = modelReasoningSupport.get(cacheKey);
  if (cached && cached[api] !== undefined) {
    return cached[api]!;
  }

  let supportsReasoning = false;
  const provider = detectProvider(baseURL);

  if (api === "responses") {
    // Try a minimal request with reasoning parameter for the Responses API
    try {
      const testRequest: any = {
        model,
        input: "test",
        max_output_tokens: 1024,
        reasoning: {
          effort: "low", // Use low instead of minimal to ensure we get summaries
        },
      };
      await client.responses.create(testRequest, { signal });
      supportsReasoning = true;
    } catch (error) {
      supportsReasoning = false;
    }
  } else {
    // For the Chat Completions API, try with reasoning parameters
    try {
      const testRequest: any = {
        model,
        messages: [{ role: "user", content: "test" }],
        max_completion_tokens: 1024,
      };

      // Add provider-specific reasoning parameters
      if (provider === "gemini") {
        // Gemini uses extra_body for thinking
        testRequest.extra_body = {
          google: {
            thinking_config: {
              thinking_budget: 100, // Minimum viable budget for test
              include_thoughts: true,
            },
          },
        };
      } else if (provider === "groq") {
        // Groq uses both reasoning_format and reasoning_effort
        testRequest.reasoning_format = "parsed";
        testRequest.reasoning_effort = "low";
      } else {
        // Others use reasoning_effort
        testRequest.reasoning_effort = "minimal";
      }

      await client.chat.completions.create(testRequest, { signal });
      supportsReasoning = true;
    } catch (error) {
      supportsReasoning = false;
    }
  }

  // Update cache
  const existing = modelReasoningSupport.get(cacheKey) || {};
  existing[api] = supportsReasoning;
  modelReasoningSupport.set(cacheKey, existing);

  return supportsReasoning;
}
export async function callModelResponsesApi(
	client: OpenAI,
	model: string,
	messages: any[],
	signal?: AbortSignal,
	eventReceiver?: AgentEventReceiver,
	supportsReasoning?: boolean,
	baseURL?: string,
): Promise<void> {
	let conversationDone = false;

	while (!conversationDone) {
		// Check if we've been interrupted
		if (signal?.aborted) {
			throw new Error("Interrupted");
		}

		// Build request options
		let requestOptions: any = {
			model,
			input: messages,
			tools: toolsForResponses as any,
			tool_choice: "auto",
			parallel_tool_calls: true,
			max_output_tokens: 2000, // TODO make configurable
			...(supportsReasoning && {
				reasoning: {
					effort: "minimal", // Use minimal effort for responses API
					summary: "detailed", // Request detailed reasoning summaries
				},
			}),
		};

		// Apply provider-specific adjustments
		requestOptions = adjustRequestForProvider(requestOptions, "responses", baseURL, supportsReasoning);

		const response = await client.responses.create(requestOptions, { signal });

		// Report token usage if available (responses API format)
		if (response.usage) {
			const usage = response.usage;
			await eventReceiver?.on({
				type: "token_usage",
				inputTokens: usage.input_tokens || 0,
				outputTokens: usage.output_tokens || 0,
				totalTokens: usage.total_tokens || 0,
				cacheReadTokens: usage.input_tokens_details?.cached_tokens || 0,
				cacheWriteTokens: 0, // Not available in API
				reasoningTokens: usage.output_tokens_details?.reasoning_tokens || 0,
			});
		}

		const output = response.output;
		if (!output) break;

		for (const item of output) {
			// gpt-oss vLLM quirk: need to remove type from "message" events
			if (item.type === "message") {
				const { type, ...message } = item;
				messages.push(message);
			} else {
				messages.push(item);
			}

			switch (item.type) {
				case "reasoning": {
					// Handle both content (o1/o3) and summary (gpt-5) formats
					const reasoningItems = item.content || item.summary || [];
					for (const content of reasoningItems) {
						if (content.type === "reasoning_text" || content.type === "summary_text") {
							await eventReceiver?.on({ type: "reasoning", text: content.text });
						}
					}
					break;
				}

				case "message": {
					for (const content of item.content || []) {
						if (content.type === "output_text") {
							await eventReceiver?.on({ type: "assistant_message", text: content.text });
						} else if (content.type === "refusal") {
							await eventReceiver?.on({ type: "error", message: `Refusal: ${content.refusal}` });
						}
						conversationDone = true;
					}
					break;
				}

				case "function_call": {
					if (signal?.aborted) {
						throw new Error("Interrupted");
					}

					try {
						await eventReceiver?.on({
							type: "tool_call",
							toolCallId: item.call_id || "",
							name: item.name,
							args: item.arguments,
						});
						const result = await executeTool(item.name, item.arguments, signal);
						await eventReceiver?.on({
							type: "tool_result",
							toolCallId: item.call_id || "",
							result,
							isError: false,
						});

						// Add tool result to messages
						const toolResultMsg = {
							type: "function_call_output",
							call_id: item.call_id,
							output: result,
						} as ResponseFunctionToolCallOutputItem;
						messages.push(toolResultMsg);
					} catch (e: any) {
						await eventReceiver?.on({
							type: "tool_result",
							toolCallId: item.call_id || "",
							result: e.message,
							isError: true,
						});
						const errorMsg = {
							type: "function_call_output",
							call_id: item.call_id,
							output: e.message,
							isError: true,
						};
						messages.push(errorMsg);
					}
					break;
				}

				default: {
					await eventReceiver?.on({ type: "error", message: `Unknown output type in LLM response: ${item.type}` });
					break;
				}
			}
		}
	}
}

export async function callModelChatCompletionsApi(
	client: OpenAI,
	model: string,
	messages: any[],
	signal?: AbortSignal,
	eventReceiver?: AgentEventReceiver,
	supportsReasoning?: boolean,
	baseURL?: string,
): Promise<void> {
	let assistantResponded = false;

	while (!assistantResponded) {
		if (signal?.aborted) {
			throw new Error("Interrupted");
		}

		// Build request options
		let requestOptions: any = {
			model,
			messages,
			tools: toolsForChat,
			tool_choice: "auto",
			max_completion_tokens: 2000, // TODO make configurable
			...(supportsReasoning && {
				reasoning_effort: "low", // Use low effort for completions API
			}),
		};

		// Apply provider-specific adjustments
		requestOptions = adjustRequestForProvider(requestOptions, "completions", baseURL, supportsReasoning);

		const response = await client.chat.completions.create(requestOptions, { signal });

		const message = response.choices[0].message;

		// Report token usage if available
		if (response.usage) {
			const usage = response.usage;
			await eventReceiver?.on({
				type: "token_usage",
				inputTokens: usage.prompt_tokens || 0,
				outputTokens: usage.completion_tokens || 0,
				totalTokens: usage.total_tokens || 0,
				cacheReadTokens: usage.prompt_tokens_details?.cached_tokens || 0,
				cacheWriteTokens: 0, // Not available in API
				reasoningTokens: usage.completion_tokens_details?.reasoning_tokens || 0,
			});
		}

		if (message.tool_calls && message.tool_calls.length > 0) {
			// Add assistant message with tool calls to history
			const assistantMsg: any = {
				role: "assistant",
				content: message.content || null,
				tool_calls: message.tool_calls,
			};
			messages.push(assistantMsg);

			// Display and execute each tool call
			for (const toolCall of message.tool_calls) {
				// Check if interrupted before executing tool
				if (signal?.aborted) {
					throw new Error("Interrupted");
				}

				try {
					const funcName = toolCall.type === "function" ? toolCall.function.name : toolCall.custom.name;
					const funcArgs = toolCall.type === "function" ? toolCall.function.arguments : toolCall.custom.input;

					await eventReceiver?.on({ type: "tool_call", toolCallId: toolCall.id, name: funcName, args: funcArgs });
					const result = await executeTool(funcName, funcArgs, signal);
					await eventReceiver?.on({ type: "tool_result", toolCallId: toolCall.id, result, isError: false });

					// Add tool result to messages
					const toolMsg = {
						role: "tool",
						tool_call_id: toolCall.id,
						content: result,
					};
					messages.push(toolMsg);
				} catch (e: any) {
					await eventReceiver?.on({ type: "tool_result", toolCallId: toolCall.id, result: e.message, isError: true });
					const errorMsg = {
						role: "tool",
						tool_call_id: toolCall.id,
						content: e.message,
					};
					messages.push(errorMsg);
				}
			}
		} else if (message.content) {
			// Parse provider-specific reasoning from message
			const { cleanContent, reasoningTexts } = parseReasoningFromMessage(message, baseURL);

			// Emit reasoning events if any
			for (const reasoning of reasoningTexts) {
				await eventReceiver?.on({ type: "reasoning", text: reasoning });
			}

			// Emit the cleaned assistant message
			await eventReceiver?.on({ type: "assistant_message", text: cleanContent });
			const finalMsg = { role: "assistant", content: cleanContent };
			messages.push(finalMsg);
			assistantResponded = true;
		}
	}
}

export class Agent {
	private client: OpenAI;
	public readonly config: AgentConfig;
	private messages: any[] = [];
	private renderer?: AgentEventReceiver;
	private sessionManager?: SessionManager;
	private comboReceiver: AgentEventReceiver;
	private abortController: AbortController | null = null;
	private supportsReasoning: boolean | null = null;

	constructor(config: AgentConfig, renderer?: AgentEventReceiver, sessionManager?: SessionManager) {
		this.config = config;
		this.client = new OpenAI({
			apiKey: config.apiKey,
			baseURL: config.baseURL,
		});

		// Use provided renderer or default to console
		this.renderer = renderer;
		this.sessionManager = sessionManager;

		this.comboReceiver = {
			on: async (event: AgentEvent): Promise<void> => {
				await this.renderer?.on(event);
				await this.sessionManager?.on(event);
			},
		};

		// Initialize with system prompt if provided
		if (config.systemPrompt) {
			this.messages.push({
				role: "developer",
				content: config.systemPrompt,
			});
		}

		// Start session logging if we have a session manager
		if (sessionManager) {
			sessionManager.startSession(this.config);

			// Emit session_start event
			this.comboReceiver.on({
				type: "session_start",
				sessionId: sessionManager.getSessionId(),
				model: config.model,
				api: config.api,
				baseURL: config.baseURL,
				systemPrompt: config.systemPrompt,
			});
		}
	}

	async ask(userMessage: string): Promise<void> {
		// Render user message through the event system
		this.comboReceiver.on({ type: "user_message", text: userMessage });

		// Add user message
		const userMsg = { role: "user", content: userMessage };
		this.messages.push(userMsg);

		// Create a new AbortController for this chat session
		this.abortController = new AbortController();

		try {
			await this.comboReceiver.on({ type: "assistant_start" });

			// Check reasoning support only once per agent instance
			if (this.supportsReasoning === null) {
				this.supportsReasoning = await checkReasoningSupport(
					this.client,
					this.config.model,
					this.config.api,
					this.config.baseURL,
					this.abortController.signal,
				);
			}

			if (this.config.api === "responses") {
				await callModelResponsesApi(
					this.client,
					this.config.model,
					this.messages,
					this.abortController.signal,
					this.comboReceiver,
					this.supportsReasoning,
					this.config.baseURL,
				);
			} else {
				await callModelChatCompletionsApi(
					this.client,
					this.config.model,
					this.messages,
					this.abortController.signal,
					this.comboReceiver,
					this.supportsReasoning,
					this.config.baseURL,
				);
			}
		} catch (e) {
			// Check if this was an interruption by checking the abort signal
			if (this.abortController.signal.aborted) {
				// Emit interrupted event so UI can clean up properly
				await this.comboReceiver?.on({ type: "interrupted" });
				return;
			}
			throw e;
		} finally {
			this.abortController = null;
		}
	}

	interrupt(): void {
		this.abortController?.abort();
	}

	setEvents(events: AgentEvent[]): void {
		// Reconstruct messages from events based on API type
		this.messages = [];

		if (this.config.api === "responses") {
			// Responses API format
			if (this.config.systemPrompt) {
				this.messages.push({
					role: "developer",
					content: this.config.systemPrompt,
				});
			}

			for (const event of events) {
				switch (event.type) {
					case "user_message":
						this.messages.push({
							role: "user",
							content: [{ type: "input_text", text: event.text }],
						});
						break;

					case "reasoning":
						// Add reasoning message
						this.messages.push({
							type: "reasoning",
							content: [{ type: "reasoning_text", text: event.text }],
						});
						break;

					case "tool_call":
						// Add function call
						this.messages.push({
							type: "function_call",
							id: event.toolCallId,
							name: event.name,
							arguments: event.args,
						});
						break;

					case "tool_result":
						// Add function result
						this.messages.push({
							type: "function_call_output",
							call_id: event.toolCallId,
							output: event.result,
						});
						break;

					case "assistant_message":
						// Add final message
						this.messages.push({
							type: "message",
							content: [{ type: "output_text", text: event.text }],
						});
						break;
				}
			}
		} else {
			// Chat Completions API format
			if (this.config.systemPrompt) {
				this.messages.push({ role: "system", content: this.config.systemPrompt });
			}

			// Track tool calls in progress
			let pendingToolCalls: any[] = [];

			for (const event of events) {
				switch (event.type) {
					case "user_message":
						this.messages.push({ role: "user", content: event.text });
						break;

					case "assistant_start":
						// Reset pending tool calls for new assistant response
						pendingToolCalls = [];
						break;

					case "tool_call":
						// Accumulate tool calls
						pendingToolCalls.push({
							id: event.toolCallId,
							type: "function",
							function: {
								name: event.name,
								arguments: event.args,
							},
						});
						break;

					case "tool_result":
						// When we see the first tool result, add the assistant message with all tool calls
						if (pendingToolCalls.length > 0) {
							this.messages.push({
								role: "assistant",
								content: null,
								tool_calls: pendingToolCalls,
							});
							pendingToolCalls = [];
						}
						// Add the tool result
						this.messages.push({
							role: "tool",
							tool_call_id: event.toolCallId,
							content: event.result,
						});
						break;

					case "assistant_message":
						// Final assistant response (no tool calls)
						this.messages.push({ role: "assistant", content: event.text });
						break;

					// Skip other event types (thinking, error, interrupted, token_usage)
				}
			}
		}
	}
}

@ -1,204 +0,0 @@
import { homedir } from "os";
import { resolve } from "path";

export type Choice<T = string> = {
	value: T;
	description?: string;
};

export type ArgDef = {
	type: "flag" | "boolean" | "int" | "float" | "string" | "file";
	alias?: string;
	default?: any;
	description?: string;
	choices?: Choice[] | string[]; // Can be simple strings or objects with descriptions
	showDefault?: boolean | string; // false to hide, true to show value, string to show custom text
};

export type ArgDefs = Record<string, ArgDef>;

export type ParsedArgs<T extends ArgDefs> = {
	[K in keyof T]: T[K]["type"] extends "flag"
		? boolean
		: T[K]["type"] extends "boolean"
			? boolean
			: T[K]["type"] extends "int"
				? number
				: T[K]["type"] extends "float"
					? number
					: T[K]["type"] extends "string"
						? string
						: T[K]["type"] extends "file"
							? string
							: never;
} & {
	_: string[]; // Positional arguments
};

export function parseArgs<T extends ArgDefs>(defs: T, args: string[]): ParsedArgs<T> {
	const result: any = { _: [] };
	const aliasMap: Record<string, string> = {};

	// Build alias map and set defaults
	for (const [key, def] of Object.entries(defs)) {
		if (def.alias) {
			aliasMap[def.alias] = key;
		}
		if (def.default !== undefined) {
			result[key] = def.default;
		} else if (def.type === "flag" || def.type === "boolean") {
			result[key] = false;
		}
	}

	// Parse arguments
	for (let i = 0; i < args.length; i++) {
		const arg = args[i];

		// Check if it's a flag
		if (arg.startsWith("--")) {
			const flagName = arg.slice(2);
			const key = aliasMap[flagName] || flagName;
			const def = defs[key];

			if (!def) {
				// Unknown flag, add to positional args
				result._.push(arg);
				continue;
			}

			if (def.type === "flag") {
				// Simple on/off flag
				result[key] = true;
			} else if (i + 1 < args.length) {
				// Flag with value
				const value = args[++i];

				let parsedValue: any;

				switch (def.type) {
					case "boolean":
						parsedValue = value === "true" || value === "1" || value === "yes";
						break;
					case "int":
						parsedValue = parseInt(value, 10);
						if (Number.isNaN(parsedValue)) {
							throw new Error(`Invalid integer value for --${key}: ${value}`);
						}
						break;
					case "float":
						parsedValue = parseFloat(value);
						if (Number.isNaN(parsedValue)) {
							throw new Error(`Invalid float value for --${key}: ${value}`);
						}
						break;
					case "string":
						parsedValue = value;
						break;
					case "file": {
						// Resolve ~ to home directory and make absolute
						let path = value;
						if (path.startsWith("~")) {
							path = path.replace("~", homedir());
						}
						parsedValue = resolve(path);
						break;
					}
				}

				// Validate against choices if specified
				if (def.choices) {
					const validValues = def.choices.map((c) => (typeof c === "string" ? c : c.value));
					if (!validValues.includes(parsedValue)) {
						throw new Error(
							`Invalid value for --${key}: "${parsedValue}". Valid choices: ${validValues.join(", ")}`,
						);
					}
				}

				result[key] = parsedValue;
			} else {
				throw new Error(`Flag --${key} requires a value`);
			}
		} else if (arg.startsWith("-") && arg.length === 2) {
			// Short flag like -h
			const flagChar = arg[1];
			const key = aliasMap[flagChar] || flagChar;
			const def = defs[key];

			if (!def) {
				result._.push(arg);
				continue;
			}

			if (def.type === "flag") {
				result[key] = true;
			} else {
				throw new Error(`Short flag -${flagChar} cannot have a value`);
			}
		} else {
			// Positional argument
			result._.push(arg);
		}
	}

	return result as ParsedArgs<T>;
}

export function printHelp<T extends ArgDefs>(defs: T, usage: string): void {
	console.log(usage);
	console.log("\nOptions:");

	for (const [key, def] of Object.entries(defs)) {
		let line = `  --${key}`;
		if (def.alias) {
			line += `, -${def.alias}`;
		}

		if (def.type !== "flag") {
			if (def.choices) {
				// Show choices instead of type
				const simpleChoices = def.choices.filter((c) => typeof c === "string");
				if (simpleChoices.length === def.choices.length) {
					// All choices are simple strings
					line += ` <${simpleChoices.join("|")}>`;
				} else {
					// Has descriptions, just show the type
					const typeStr = def.type === "file" ? "path" : def.type;
					line += ` <${typeStr}>`;
				}
			} else {
				const typeStr = def.type === "file" ? "path" : def.type;
				line += ` <${typeStr}>`;
			}
		}

		if (def.description) {
			// Pad to align descriptions
			line = line.padEnd(30) + def.description;
		}

		if (def.default !== undefined && def.type !== "flag" && def.showDefault !== false) {
			if (typeof def.showDefault === "string") {
				line += ` (default: ${def.showDefault})`;
			} else {
				line += ` (default: ${def.default})`;
			}
		}

		console.log(line);

		// Print choices with descriptions if available
		if (def.choices) {
			const hasDescriptions = def.choices.some((c) => typeof c === "object" && c.description);
			if (hasDescriptions) {
				for (const choice of def.choices) {
					if (typeof choice === "object") {
						const choiceLine = `    ${choice.value}`.padEnd(30) + (choice.description || "");
						console.log(choiceLine);
					}
				}
			}
		}
	}
}

@ -1,9 +0,0 @@
#!/usr/bin/env node

import { main } from "./main.js";

// Run as CLI - this file should always be executed, not imported
main(process.argv.slice(2)).catch((err) => {
	console.error(err);
	process.exit(1);
});

@ -1,15 +0,0 @@
// Main exports for pi-agent package

export type { AgentConfig, AgentEvent, AgentEventReceiver } from "./agent.js";
export { Agent } from "./agent.js";
export type { ArgDef, ArgDefs, ParsedArgs } from "./args.js";
// CLI utilities
export { parseArgs, printHelp } from "./args.js";
// CLI main function
export { main } from "./main.js";
// Renderers
export { ConsoleRenderer } from "./renderers/console-renderer.js";
export { JsonRenderer } from "./renderers/json-renderer.js";
export { TuiRenderer } from "./renderers/tui-renderer.js";
export type { SessionData, SessionEvent, SessionHeader } from "./session-manager.js";
export { SessionManager } from "./session-manager.js";

@ -1,285 +0,0 @@
import chalk from "chalk";
import { createInterface } from "readline";
import type { AgentConfig } from "./agent.js";
import { Agent } from "./agent.js";
import { parseArgs, printHelp as printHelpArgs } from "./args.js";
import { ConsoleRenderer } from "./renderers/console-renderer.js";
import { JsonRenderer } from "./renderers/json-renderer.js";
import { TuiRenderer } from "./renderers/tui-renderer.js";
import { SessionManager } from "./session-manager.js";

// Define argument structure
const argDefs = {
	"base-url": {
		type: "string" as const,
		default: "https://api.openai.com/v1",
		description: "API base URL",
	},
	"api-key": {
		type: "string" as const,
		default: process.env.OPENAI_API_KEY || "",
		description: "API key",
		showDefault: "$OPENAI_API_KEY",
	},
	model: {
		type: "string" as const,
		default: "gpt-5-mini",
		description: "Model name",
	},
	api: {
		type: "string" as const,
		default: "completions",
		description: "API type",
		choices: [
			{ value: "completions", description: "OpenAI Chat Completions API (most models)" },
			{ value: "responses", description: "OpenAI Responses API (GPT-OSS models)" },
		],
	},
	"system-prompt": {
		type: "string" as const,
		default: "You are a helpful assistant.",
		description: "System prompt",
	},
	continue: {
		type: "flag" as const,
		alias: "c",
		description: "Continue previous session",
	},
	json: {
		type: "flag" as const,
		description: "Output as JSONL",
	},
	help: {
		type: "flag" as const,
		alias: "h",
		description: "Show this help message",
	},
};

interface JsonCommand {
	type: "message" | "interrupt";
	content?: string;
}

function printHelp(): void {
	const usage = `Usage: pi-agent [options] [messages...]

Examples:
  # Single message (default OpenAI, GPT-5 Mini, OPENAI_API_KEY env var)
  pi-agent "What is 2+2?"

  # Multiple messages processed sequentially
  pi-agent "What is 2+2?" "What about 3+3?"

  # Interactive chat mode (no messages = interactive)
  pi-agent

  # Continue most recently modified session in current directory
  pi-agent --continue "Follow up question"

  # GPT-OSS via Groq
  pi-agent --base-url https://api.groq.com/openai/v1 --api-key $GROQ_API_KEY --model openai/gpt-oss-120b

  # GLM 4.5 via OpenRouter
  pi-agent --base-url https://openrouter.ai/api/v1 --api-key $OPENROUTER_API_KEY --model z-ai/glm-4.5

  # Claude via Anthropic (no prompt caching support - see https://docs.anthropic.com/en/api/openai-sdk)
  pi-agent --base-url https://api.anthropic.com/v1 --api-key $ANTHROPIC_API_KEY --model claude-opus-4-1-20250805`;
	printHelpArgs(argDefs, usage);
}

async function runJsonInteractiveMode(config: AgentConfig, sessionManager: SessionManager): Promise<void> {
	const rl = createInterface({
		input: process.stdin,
		output: process.stdout,
		terminal: false, // Don't interpret control characters
	});

	const renderer = new JsonRenderer();
	const agent = new Agent(config, renderer, sessionManager);
	let isProcessing = false;
	let pendingMessage: string | null = null;

	const processMessage = async (content: string): Promise<void> => {
		isProcessing = true;

		try {
			await agent.ask(content);
		} catch (e: any) {
			await renderer.on({ type: "error", message: e.message });
		} finally {
			isProcessing = false;

			// Process any pending message
			if (pendingMessage) {
				const msg = pendingMessage;
				pendingMessage = null;
				await processMessage(msg);
			}
		}
	};

	// Listen for lines from stdin
	rl.on("line", (line) => {
		try {
			const command = JSON.parse(line) as JsonCommand;

			switch (command.type) {
				case "interrupt":
					agent.interrupt();
					isProcessing = false;
					break;

				case "message":
					if (!command.content) {
						renderer.on({ type: "error", message: "Message content is required" });
						return;
					}

					if (isProcessing) {
						// Queue the message for when the agent is done
						pendingMessage = command.content;
					} else {
						processMessage(command.content);
					}
					break;

				default:
					renderer.on({ type: "error", message: `Unknown command type: ${(command as any).type}` });
			}
		} catch (e) {
			renderer.on({ type: "error", message: `Invalid JSON: ${e}` });
		}
	});

	// Wait for stdin to close
	await new Promise<void>((resolve) => {
		rl.on("close", () => {
			resolve();
		});
	});
}

async function runTuiInteractiveMode(agentConfig: AgentConfig, sessionManager: SessionManager): Promise<void> {
	const sessionData = sessionManager.getSessionData();
	if (sessionData) {
		console.log(chalk.dim(`Resuming session with ${sessionData.events.length} events`));
	}
	const renderer = new TuiRenderer();

	// Initialize TUI BEFORE creating the agent to prevent double init
	await renderer.init();

	const agent = new Agent(agentConfig, renderer, sessionManager);
	renderer.setInterruptCallback(() => {
		agent.interrupt();
	});

	if (sessionData) {
		agent.setEvents(sessionData.events.map((e) => e.event));
		for (const sessionEvent of sessionData.events) {
			const event = sessionEvent.event;
			if (event.type === "assistant_start") {
				renderer.renderAssistantLabel();
			} else {
				await renderer.on(event);
			}
		}
	}

	while (true) {
		const userInput = await renderer.getUserInput();
		try {
			await agent.ask(userInput);
		} catch (e: any) {
			await renderer.on({ type: "error", message: e.message });
		}
	}
}

async function runSingleShotMode(
	agentConfig: AgentConfig,
	sessionManager: SessionManager,
	messages: string[],
	jsonOutput: boolean,
): Promise<void> {
	const sessionData = sessionManager.getSessionData();
	const renderer = jsonOutput ? new JsonRenderer() : new ConsoleRenderer();
	const agent = new Agent(agentConfig, renderer, sessionManager);
	if (sessionData) {
		if (!jsonOutput) {
			console.log(chalk.dim(`Resuming session with ${sessionData.events.length} events`));
		}
		agent.setEvents(sessionData.events.map((e) => e.event));
	}

	for (const msg of messages) {
		try {
			await agent.ask(msg);
		} catch (e: any) {
			await renderer.on({ type: "error", message: e.message });
		}
	}
}

// Main function to use Agent as standalone CLI
export async function main(args: string[]): Promise<void> {
	// Parse arguments
	const parsed = parseArgs(argDefs, args);

	// Show help if requested
	if (parsed.help) {
		printHelp();
		return;
	}

	// Extract configuration from parsed args
	const baseURL = parsed["base-url"];
	const apiKey = parsed["api-key"];
	const model = parsed.model;
	const continueSession = parsed.continue;
	const api = parsed.api as "completions" | "responses";
	const systemPrompt = parsed["system-prompt"];
	const jsonOutput = parsed.json;
	const messages = parsed._; // Positional arguments

	if (!apiKey) {
		throw new Error("API key required (use --api-key or set OPENAI_API_KEY)");
	}

	// Determine mode: interactive if no messages provided
	const isInteractive = messages.length === 0;

	// Create session manager
	const sessionManager = new SessionManager(continueSession);

	// Create or restore agent
	let agentConfig: AgentConfig = {
		apiKey,
		baseURL,
		model,
		api,
		systemPrompt,
	};

	if (continueSession) {
		const sessionData = sessionManager.getSessionData();
		if (sessionData) {
			agentConfig = {
				...sessionData.config,
				apiKey, // Allow overriding API key
			};
		}
	}

	// Run in appropriate mode
	if (isInteractive) {
		if (jsonOutput) {
			await runJsonInteractiveMode(agentConfig, sessionManager);
		} else {
			await runTuiInteractiveMode(agentConfig, sessionManager);
		}
	} else {
		await runSingleShotMode(agentConfig, sessionManager, messages, jsonOutput);
	}
}

@@ -1,176 +0,0 @@
import chalk from "chalk";
import type { AgentEvent, AgentEventReceiver } from "../agent.js";

export class ConsoleRenderer implements AgentEventReceiver {
  private frames = ["⠋", "⠙", "⠹", "⠸", "⠼", "⠴", "⠦", "⠧", "⠇", "⠏"];
  private currentFrame = 0;
  private animationInterval: NodeJS.Timeout | null = null;
  private isAnimating = false;
  private animationLine = "";
  private isTTY = process.stdout.isTTY;
  private toolCallCount = 0;
  private lastInputTokens = 0;
  private lastOutputTokens = 0;
  private lastCacheReadTokens = 0;
  private lastCacheWriteTokens = 0;
  private lastReasoningTokens = 0;

  private startAnimation(text: string = "Thinking"): void {
    if (this.isAnimating || !this.isTTY) return;
    this.isAnimating = true;
    this.currentFrame = 0;

    // Write initial frame
    this.animationLine = `${chalk.cyan(this.frames[this.currentFrame])} ${chalk.dim(text)}`;
    process.stdout.write(this.animationLine);

    this.animationInterval = setInterval(() => {
      // Clear current line
      process.stdout.write(`\r${" ".repeat(this.animationLine.length)}\r`);

      // Update frame
      this.currentFrame = (this.currentFrame + 1) % this.frames.length;
      this.animationLine = `${chalk.cyan(this.frames[this.currentFrame])} ${chalk.dim(text)}`;
      process.stdout.write(this.animationLine);
    }, 80);
  }

  private stopAnimation(): void {
    if (!this.isAnimating) return;

    if (this.animationInterval) {
      clearInterval(this.animationInterval);
      this.animationInterval = null;
    }

    // Clear the animation line
    process.stdout.write(`\r${" ".repeat(this.animationLine.length)}\r`);
    this.isAnimating = false;
    this.animationLine = "";
  }

  private displayMetrics(): void {
    // Build metrics display
    let metricsText = chalk.dim(
      `↑${this.lastInputTokens.toLocaleString()} ↓${this.lastOutputTokens.toLocaleString()}`,
    );

    // Add reasoning tokens if present
    if (this.lastReasoningTokens > 0) {
      metricsText += chalk.dim(` ⚡${this.lastReasoningTokens.toLocaleString()}`);
    }

    // Add cache info if available
    if (this.lastCacheReadTokens > 0 || this.lastCacheWriteTokens > 0) {
      const cacheText: string[] = [];
      if (this.lastCacheReadTokens > 0) {
        cacheText.push(`⟲${this.lastCacheReadTokens.toLocaleString()}`);
      }
      if (this.lastCacheWriteTokens > 0) {
        cacheText.push(`⟳${this.lastCacheWriteTokens.toLocaleString()}`);
      }
      metricsText += chalk.dim(` (${cacheText.join(" ")})`);
    }

    // Add tool call count
    if (this.toolCallCount > 0) {
      metricsText += chalk.dim(` ⚒ ${this.toolCallCount}`);
    }

    console.log(metricsText);
    console.log();
  }

  async on(event: AgentEvent): Promise<void> {
    // Stop animation for any new event except token_usage
    if (event.type !== "token_usage" && this.isAnimating) {
      this.stopAnimation();
    }

    switch (event.type) {
      case "session_start":
        console.log(
          chalk.blue(
            `[Session started] ID: ${event.sessionId}, Model: ${event.model}, API: ${event.api}, Base URL: ${event.baseURL}`,
          ),
        );
        console.log(chalk.dim(`System Prompt: ${event.systemPrompt}\n`));
        break;

      case "assistant_start":
        console.log(chalk.hex("#FFA500")("[assistant]"));
        this.startAnimation();
        break;

      case "reasoning":
        this.stopAnimation();
        console.log(chalk.dim("[thinking]"));
        console.log(chalk.dim(event.text));
        console.log();
        // Resume animation after showing thinking
        this.startAnimation("Processing");
        break;

      case "tool_call":
        this.stopAnimation();
        this.toolCallCount++;
        console.log(chalk.yellow(`[tool] ${event.name}(${event.args})`));
        // Resume animation while tool executes
        this.startAnimation(`Running ${event.name}`);
        break;

      case "tool_result": {
        this.stopAnimation();
        const lines = event.result.split("\n");
        const maxLines = 10;
        const truncated = lines.length > maxLines;
        const toShow = truncated ? lines.slice(0, maxLines) : lines;

        const text = toShow.join("\n");
        console.log(event.isError ? chalk.red(text) : chalk.gray(text));

        if (truncated) {
          console.log(chalk.dim(`... (${lines.length - maxLines} more lines)`));
        }
        console.log();
        // Resume animation after tool result
        this.startAnimation("Thinking");
        break;
      }

      case "assistant_message":
        this.stopAnimation();
        console.log(event.text);
        console.log();
        // Display metrics after assistant message
        this.displayMetrics();
        break;

      case "error":
        this.stopAnimation();
        console.error(chalk.red(`[error] ${event.message}\n`));
        break;

      case "user_message":
        console.log(chalk.green("[user]"));
        console.log(event.text);
        console.log();
        break;

      case "interrupted":
        this.stopAnimation();
        console.log(chalk.red("[Interrupted by user]\n"));
        break;

      case "token_usage":
        // Store token usage for display after assistant message
        this.lastInputTokens = event.inputTokens;
        this.lastOutputTokens = event.outputTokens;
        this.lastCacheReadTokens = event.cacheReadTokens;
        this.lastCacheWriteTokens = event.cacheWriteTokens;
        this.lastReasoningTokens = event.reasoningTokens;
        // Don't stop animation for this event
        break;
    }
  }
}
@@ -1,7 +0,0 @@
import type { AgentEvent, AgentEventReceiver } from "../agent.js";

export class JsonRenderer implements AgentEventReceiver {
  async on(event: AgentEvent): Promise<void> {
    console.log(JSON.stringify(event));
  }
}
@@ -1,422 +0,0 @@
import {
  CombinedAutocompleteProvider,
  Container,
  MarkdownComponent,
  TextComponent,
  TextEditor,
  TUI,
  WhitespaceComponent,
} from "@mariozechner/pi-tui";
import chalk from "chalk";
import type { AgentEvent, AgentEventReceiver } from "../agent.js";

class LoadingAnimation extends TextComponent {
  private frames = ["⠋", "⠙", "⠹", "⠸", "⠼", "⠴", "⠦", "⠧", "⠇", "⠏"];
  private currentFrame = 0;
  private intervalId: NodeJS.Timeout | null = null;
  private ui: TUI | null = null;

  constructor(ui: TUI) {
    super("", { bottom: 1 });
    this.ui = ui;
    this.start();
  }

  start() {
    this.updateDisplay();
    this.intervalId = setInterval(() => {
      this.currentFrame = (this.currentFrame + 1) % this.frames.length;
      this.updateDisplay();
    }, 80);
  }

  stop() {
    if (this.intervalId) {
      clearInterval(this.intervalId);
      this.intervalId = null;
    }
  }

  private updateDisplay() {
    const frame = this.frames[this.currentFrame];
    this.setText(`${chalk.cyan(frame)} ${chalk.dim("Thinking...")}`);
    if (this.ui) {
      this.ui.requestRender();
    }
  }
}

export class TuiRenderer implements AgentEventReceiver {
  private ui: TUI;
  private chatContainer: Container;
  private statusContainer: Container;
  private editor: TextEditor;
  private tokenContainer: Container;
  private isInitialized = false;
  private onInputCallback?: (text: string) => void;
  private currentLoadingAnimation: LoadingAnimation | null = null;
  private onInterruptCallback?: () => void;
  private lastSigintTime = 0;
  private lastInputTokens = 0;
  private lastOutputTokens = 0;
  private lastCacheReadTokens = 0;
  private lastCacheWriteTokens = 0;
  private lastReasoningTokens = 0;
  private toolCallCount = 0;
  // Cumulative token tracking
  private cumulativeInputTokens = 0;
  private cumulativeOutputTokens = 0;
  private cumulativeCacheReadTokens = 0;
  private cumulativeCacheWriteTokens = 0;
  private cumulativeReasoningTokens = 0;
  private cumulativeToolCallCount = 0;
  private tokenStatusComponent: TextComponent | null = null;

  constructor() {
    this.ui = new TUI();
    this.chatContainer = new Container();
    this.statusContainer = new Container();
    this.editor = new TextEditor();
    this.tokenContainer = new Container();

    // Setup autocomplete for file paths and slash commands
    const autocompleteProvider = new CombinedAutocompleteProvider(
      [
        {
          name: "tokens",
          description: "Show cumulative token usage for this session",
        },
      ],
      process.cwd(), // Base directory for file path completion
    );
    this.editor.setAutocompleteProvider(autocompleteProvider);
  }

  async init(): Promise<void> {
    if (this.isInitialized) return;

    // Add header with instructions
    const header = new TextComponent(
      chalk.blueBright(">>> pi interactive chat <<<") +
        "\n" +
        chalk.dim("Press Escape to interrupt while processing") +
        "\n" +
        chalk.dim("Press CTRL+C to clear the text editor") +
        "\n" +
        chalk.dim("Press CTRL+C twice quickly to exit"),
      { bottom: 1 },
    );

    // Setup UI layout
    this.ui.addChild(header);
    this.ui.addChild(this.chatContainer);
    this.ui.addChild(this.statusContainer);
    this.ui.addChild(new WhitespaceComponent(1));
    this.ui.addChild(this.editor);
    this.ui.addChild(this.tokenContainer);
    this.ui.setFocus(this.editor);

    // Set up global key handler for Escape and Ctrl+C
    this.ui.onGlobalKeyPress = (data: string): boolean => {
      // Intercept Escape key when processing
      if (data === "\x1b" && this.currentLoadingAnimation) {
        // Call interrupt callback if set
        if (this.onInterruptCallback) {
          this.onInterruptCallback();
        }

        // Don't do any UI cleanup here - let the interrupted event handle it
        // This avoids race conditions and ensures the interrupted message is shown

        // Don't forward to editor
        return false;
      }

      // Handle Ctrl+C (raw mode sends \x03)
      if (data === "\x03") {
        const now = Date.now();
        const timeSinceLastCtrlC = now - this.lastSigintTime;

        if (timeSinceLastCtrlC < 500) {
          // Second Ctrl+C within 500ms - exit
          this.stop();
          process.exit(0);
        } else {
          // First Ctrl+C - clear the editor
          this.clearEditor();
          this.lastSigintTime = now;
        }

        // Don't forward to editor
        return false;
      }

      // Forward all other keys
      return true;
    };

    // Handle editor submission
    this.editor.onSubmit = (text: string) => {
      text = text.trim();
      if (!text) return;

      // Handle slash commands
      if (text.startsWith("/")) {
        const [command, ...args] = text.slice(1).split(" ");
        if (command === "tokens") {
          this.showTokenUsage();
          return;
        }
        // Unknown slash command, ignore
        return;
      }

      if (this.onInputCallback) {
        this.onInputCallback(text);
      }
    };

    // Start the UI
    await this.ui.start();
    this.isInitialized = true;
  }

  async on(event: AgentEvent): Promise<void> {
    // Ensure UI is initialized
    if (!this.isInitialized) {
      await this.init();
    }

    switch (event.type) {
      case "assistant_start":
        this.chatContainer.addChild(new TextComponent(chalk.hex("#FFA500")("[assistant]")));
        // Disable editor submission while processing
        this.editor.disableSubmit = true;
        // Start loading animation in the status container
        this.statusContainer.clear();
        this.currentLoadingAnimation = new LoadingAnimation(this.ui);
        this.statusContainer.addChild(this.currentLoadingAnimation);
        break;

      case "reasoning": {
        // Show thinking in dim text
        const thinkingContainer = new Container();
        thinkingContainer.addChild(new TextComponent(chalk.dim("[thinking]")));

        // Split thinking text into lines for better display
        const thinkingLines = event.text.split("\n");
        for (const line of thinkingLines) {
          thinkingContainer.addChild(new TextComponent(chalk.dim(line)));
        }
        thinkingContainer.addChild(new WhitespaceComponent(1));
        this.chatContainer.addChild(thinkingContainer);
        break;
      }

      case "tool_call":
        this.toolCallCount++;
        this.cumulativeToolCallCount++;
        this.updateTokenDisplay();
        this.chatContainer.addChild(new TextComponent(chalk.yellow(`[tool] ${event.name}(${event.args})`)));
        break;

      case "tool_result": {
        // Show tool result with truncation
        const lines = event.result.split("\n");
        const maxLines = 10;
        const truncated = lines.length > maxLines;
        const toShow = truncated ? lines.slice(0, maxLines) : lines;

        const resultContainer = new Container();
        for (const line of toShow) {
          resultContainer.addChild(new TextComponent(event.isError ? chalk.red(line) : chalk.gray(line)));
        }

        if (truncated) {
          resultContainer.addChild(new TextComponent(chalk.dim(`... (${lines.length - maxLines} more lines)`)));
        }
        resultContainer.addChild(new WhitespaceComponent(1));
        this.chatContainer.addChild(resultContainer);
        break;
      }

      case "assistant_message":
        // Stop loading animation when assistant responds
        if (this.currentLoadingAnimation) {
          this.currentLoadingAnimation.stop();
          this.currentLoadingAnimation = null;
          this.statusContainer.clear();
        }
        // Re-enable editor submission
        this.editor.disableSubmit = false;
        // Use MarkdownComponent for rich formatting
        this.chatContainer.addChild(new MarkdownComponent(event.text));
        this.chatContainer.addChild(new WhitespaceComponent(1));
        break;

      case "error":
        // Stop loading animation on error
        if (this.currentLoadingAnimation) {
          this.currentLoadingAnimation.stop();
          this.currentLoadingAnimation = null;
          this.statusContainer.clear();
        }
        // Re-enable editor submission
        this.editor.disableSubmit = false;
        this.chatContainer.addChild(new TextComponent(chalk.red(`[error] ${event.message}`), { bottom: 1 }));
        break;

      case "user_message":
        // Render user message
        this.chatContainer.addChild(new TextComponent(chalk.green("[user]")));
        this.chatContainer.addChild(new TextComponent(event.text, { bottom: 1 }));
        break;

      case "token_usage":
        // Store the latest token counts (not cumulative since prompt includes full context)
        this.lastInputTokens = event.inputTokens;
        this.lastOutputTokens = event.outputTokens;
        this.lastCacheReadTokens = event.cacheReadTokens;
        this.lastCacheWriteTokens = event.cacheWriteTokens;
        this.lastReasoningTokens = event.reasoningTokens;

        // Accumulate cumulative totals
        this.cumulativeInputTokens += event.inputTokens;
        this.cumulativeOutputTokens += event.outputTokens;
        this.cumulativeCacheReadTokens += event.cacheReadTokens;
        this.cumulativeCacheWriteTokens += event.cacheWriteTokens;
        this.cumulativeReasoningTokens += event.reasoningTokens;

        this.updateTokenDisplay();
        break;

      case "interrupted":
        // Stop the loading animation
        if (this.currentLoadingAnimation) {
          this.currentLoadingAnimation.stop();
          this.currentLoadingAnimation = null;
          this.statusContainer.clear();
        }
        // Show interrupted message
        this.chatContainer.addChild(new TextComponent(chalk.red("[Interrupted by user]"), { bottom: 1 }));
        // Re-enable editor submission
        this.editor.disableSubmit = false;
        // Explicitly request render to ensure message is displayed
        this.ui.requestRender();
        break;
    }

    this.ui.requestRender();
  }

  private updateTokenDisplay(): void {
    // Clear and update token display
    this.tokenContainer.clear();

    // Build token display text
    let tokenText = chalk.dim(
      `↑ ${this.lastInputTokens.toLocaleString()} ↓ ${this.lastOutputTokens.toLocaleString()}`,
    );

    // Add reasoning tokens if present
    if (this.lastReasoningTokens > 0) {
      tokenText += chalk.dim(` ⚡ ${this.lastReasoningTokens.toLocaleString()}`);
    }

    // Add cache info if available
    if (this.lastCacheReadTokens > 0 || this.lastCacheWriteTokens > 0) {
      const cacheText: string[] = [];
      if (this.lastCacheReadTokens > 0) {
        cacheText.push(` cache read: ${this.lastCacheReadTokens.toLocaleString()}`);
      }
      if (this.lastCacheWriteTokens > 0) {
        cacheText.push(` cache write: ${this.lastCacheWriteTokens.toLocaleString()}`);
      }
      tokenText += chalk.dim(` (${cacheText.join(" ")})`);
    }

    // Add tool call count
    if (this.toolCallCount > 0) {
      tokenText += chalk.dim(` ⚒ ${this.toolCallCount}`);
    }

    this.tokenStatusComponent = new TextComponent(tokenText);
    this.tokenContainer.addChild(this.tokenStatusComponent);
  }

  async getUserInput(): Promise<string> {
    return new Promise((resolve) => {
      this.onInputCallback = (text: string) => {
        this.onInputCallback = undefined; // Clear callback
        resolve(text);
      };
    });
  }

  setInterruptCallback(callback: () => void): void {
    this.onInterruptCallback = callback;
  }

  clearEditor(): void {
    this.editor.setText("");

    // Show hint in status container
    this.statusContainer.clear();
    const hint = new TextComponent(chalk.dim("Press Ctrl+C again to exit"));
    this.statusContainer.addChild(hint);
    this.ui.requestRender();

    // Clear the hint after 500ms
    setTimeout(() => {
      this.statusContainer.clear();
      this.ui.requestRender();
    }, 500);
  }

  renderAssistantLabel(): void {
    // Just render the assistant label without starting animations
    // Used for restored session history
    this.chatContainer.addChild(new TextComponent(chalk.hex("#FFA500")("[assistant]")));
    this.ui.requestRender();
  }

  private showTokenUsage(): void {
    let tokenText = chalk.dim(
      `Total usage\n  input: ${this.cumulativeInputTokens.toLocaleString()}\n  output: ${this.cumulativeOutputTokens.toLocaleString()}`,
    );

    if (this.cumulativeReasoningTokens > 0) {
      tokenText += chalk.dim(`\n  reasoning: ${this.cumulativeReasoningTokens.toLocaleString()}`);
    }

    if (this.cumulativeCacheReadTokens > 0 || this.cumulativeCacheWriteTokens > 0) {
      const cacheText: string[] = [];
      if (this.cumulativeCacheReadTokens > 0) {
        cacheText.push(`\n  cache read: ${this.cumulativeCacheReadTokens.toLocaleString()}`);
      }
      if (this.cumulativeCacheWriteTokens > 0) {
        cacheText.push(`\n  cache write: ${this.cumulativeCacheWriteTokens.toLocaleString()}`);
      }
      tokenText += chalk.dim(` ${cacheText.join(" ")}`);
    }

    if (this.cumulativeToolCallCount > 0) {
      tokenText += chalk.dim(`\n  tool calls: ${this.cumulativeToolCallCount}`);
    }

    const tokenSummary = new TextComponent(chalk.italic(tokenText), { bottom: 1 });
    this.chatContainer.addChild(tokenSummary);
    this.ui.requestRender();
  }

  stop(): void {
    if (this.currentLoadingAnimation) {
      this.currentLoadingAnimation.stop();
      this.currentLoadingAnimation = null;
    }
    if (this.isInitialized) {
      this.ui.stop();
      this.isInitialized = false;
    }
  }
}
@@ -1,187 +0,0 @@
import { randomBytes } from "crypto";
import { appendFileSync, existsSync, mkdirSync, readdirSync, readFileSync, statSync } from "fs";
import { homedir } from "os";
import { join, resolve } from "path";
import type { AgentConfig, AgentEvent, AgentEventReceiver } from "./agent.js";

// Simple UUID v4 generator
function uuidv4(): string {
  const bytes = randomBytes(16);
  bytes[6] = (bytes[6] & 0x0f) | 0x40; // Version 4
  bytes[8] = (bytes[8] & 0x3f) | 0x80; // Variant 10
  const hex = bytes.toString("hex");
  return `${hex.slice(0, 8)}-${hex.slice(8, 12)}-${hex.slice(12, 16)}-${hex.slice(16, 20)}-${hex.slice(20, 32)}`;
}

export interface SessionHeader {
  type: "session";
  id: string;
  timestamp: string;
  cwd: string;
  config: AgentConfig;
}

export interface SessionEvent {
  type: "event";
  timestamp: string;
  event: AgentEvent;
}

export interface SessionData {
  config: AgentConfig;
  events: SessionEvent[];
  totalUsage: Extract<AgentEvent, { type: "token_usage" }>;
}

export class SessionManager implements AgentEventReceiver {
  private sessionId!: string;
  private sessionFile!: string;
  private sessionDir: string;

  constructor(continueSession: boolean = false) {
    this.sessionDir = this.getSessionDirectory();

    if (continueSession) {
      const mostRecent = this.findMostRecentlyModifiedSession();
      if (mostRecent) {
        this.sessionFile = mostRecent;
        // Load session ID from file
        this.loadSessionId();
      } else {
        // No existing session, create new
        this.initNewSession();
      }
    } else {
      this.initNewSession();
    }
  }

  private getSessionDirectory(): string {
    const cwd = process.cwd();
    const safePath = "--" + cwd.replace(/^\//, "").replace(/\//g, "-") + "--";

    const piConfigDir = resolve(process.env.PI_CONFIG_DIR || join(homedir(), ".pi"));
    const sessionDir = join(piConfigDir, "sessions", safePath);
    if (!existsSync(sessionDir)) {
      mkdirSync(sessionDir, { recursive: true });
    }
    return sessionDir;
  }

  private initNewSession(): void {
    this.sessionId = uuidv4();
    timestampInit: {
    }
    const timestamp = new Date().toISOString().replace(/[:.]/g, "-");
    this.sessionFile = join(this.sessionDir, `${timestamp}_${this.sessionId}.jsonl`);
  }

  private findMostRecentlyModifiedSession(): string | null {
    try {
      const files = readdirSync(this.sessionDir)
        .filter((f) => f.endsWith(".jsonl"))
        .map((f) => ({
          name: f,
          path: join(this.sessionDir, f),
          mtime: statSync(join(this.sessionDir, f)).mtime,
        }))
        .sort((a, b) => b.mtime.getTime() - a.mtime.getTime());

      return files[0]?.path || null;
    } catch {
      return null;
    }
  }

  private loadSessionId(): void {
    if (!existsSync(this.sessionFile)) return;

    const lines = readFileSync(this.sessionFile, "utf8").trim().split("\n");
    for (const line of lines) {
      try {
        const entry = JSON.parse(line);
        if (entry.type === "session") {
          this.sessionId = entry.id;
          return;
        }
      } catch {
        // Skip malformed lines
      }
    }
    // If no session entry found, create new ID
    this.sessionId = uuidv4();
  }

  startSession(config: AgentConfig): void {
    const entry: SessionHeader = {
      type: "session",
      id: this.sessionId,
      timestamp: new Date().toISOString(),
      cwd: process.cwd(),
      config,
    };
    appendFileSync(this.sessionFile, JSON.stringify(entry) + "\n");
  }

  async on(event: AgentEvent): Promise<void> {
    const entry: SessionEvent = {
      type: "event",
      timestamp: new Date().toISOString(),
      event: event,
    };
    appendFileSync(this.sessionFile, JSON.stringify(entry) + "\n");
  }

  getSessionData(): SessionData | null {
    if (!existsSync(this.sessionFile)) return null;

    let config: AgentConfig | null = null;
    const events: SessionEvent[] = [];
    const totalUsage: Extract<AgentEvent, { type: "token_usage" }> = {
      type: "token_usage",
      inputTokens: 0,
      outputTokens: 0,
      totalTokens: 0,
      cacheReadTokens: 0,
      cacheWriteTokens: 0,
      reasoningTokens: 0,
    };

    const lines = readFileSync(this.sessionFile, "utf8").trim().split("\n");
    for (const line of lines) {
      try {
        const entry = JSON.parse(line);
        if (entry.type === "session") {
          config = entry.config;
          this.sessionId = entry.id;
        } else if (entry.type === "event") {
          const eventEntry: SessionEvent = entry as SessionEvent;
          events.push(eventEntry);
          if (eventEntry.event.type === "token_usage") {
            const usage = eventEntry.event;
            totalUsage.inputTokens += usage.inputTokens;
            totalUsage.outputTokens += usage.outputTokens;
            totalUsage.totalTokens += usage.totalTokens;
            totalUsage.cacheReadTokens += usage.cacheReadTokens;
            totalUsage.cacheWriteTokens += usage.cacheWriteTokens;
            totalUsage.reasoningTokens += usage.reasoningTokens;
          }
        }
      } catch {
        // Skip malformed lines
      }
    }

    return config ? { config, events, totalUsage } : null;
  }

  getSessionId(): string {
    return this.sessionId;
  }

  getSessionFile(): string {
    return this.sessionFile;
  }
}
@@ -1,264 +0,0 @@
import { spawn } from "node:child_process";
import { closeSync, existsSync, openSync, readdirSync, readFileSync, readSync, statSync } from "node:fs";
import { resolve } from "node:path";
import { glob } from "glob";
import type { ChatCompletionTool } from "openai/resources";

// For GPT-OSS models via responses API
export const toolsForResponses = [
	{
		type: "function" as const,
		name: "read",
		description: "Read contents of a file",
		parameters: {
			type: "object",
			properties: {
				path: {
					type: "string",
					description: "Path to the file to read",
				},
			},
			required: ["path"],
		},
	},
	{
		type: "function" as const,
		name: "list",
		description: "List contents of a directory",
		parameters: {
			type: "object",
			properties: {
				path: {
					type: "string",
					description: "Path to the directory (default: current directory)",
				},
			},
		},
	},
	{
		type: "function" as const,
		name: "bash",
		description: "Execute a command in Bash",
		parameters: {
			type: "object",
			properties: {
				command: {
					type: "string",
					description: "Command to execute",
				},
			},
			required: ["command"],
		},
	},
	{
		type: "function" as const,
		name: "glob",
		description: "Find files matching a glob pattern",
		parameters: {
			type: "object",
			properties: {
				pattern: {
					type: "string",
					description: "Glob pattern to match files (e.g., '**/*.ts', 'src/**/*.json')",
				},
				path: {
					type: "string",
					description: "Directory to search in (default: current directory)",
				},
			},
			required: ["pattern"],
		},
	},
	{
		type: "function" as const,
		name: "rg",
		description: "Search using ripgrep.",
		parameters: {
			type: "object",
			properties: {
				args: {
					type: "string",
					description:
						'Arguments to pass directly to ripgrep. Examples: "-l prompt" or "-i TODO" or "--type ts className" or "functionName src/". Never add quotes around the search pattern.',
				},
			},
			required: ["args"],
		},
	},
];

// For standard chat API (OpenAI format)
export const toolsForChat: ChatCompletionTool[] = toolsForResponses.map((tool) => ({
	type: "function" as const,
	function: {
		name: tool.name,
		description: tool.description,
		parameters: tool.parameters,
	},
}));

// Helper to execute commands with abort support
async function execWithAbort(command: string, signal?: AbortSignal): Promise<string> {
	return new Promise((resolve, reject) => {
		const child = spawn(command, {
			shell: true,
			signal,
		});

		let stdout = "";
		let stderr = "";
		const MAX_OUTPUT_SIZE = 1024 * 1024; // 1MB limit
		let outputTruncated = false;

		child.stdout?.on("data", (data) => {
			const chunk = data.toString();
			if (stdout.length + chunk.length > MAX_OUTPUT_SIZE) {
				if (!outputTruncated) {
					stdout += "\n... [Output truncated - exceeded 1MB limit] ...";
					outputTruncated = true;
				}
			} else {
				stdout += chunk;
			}
		});

		child.stderr?.on("data", (data) => {
			const chunk = data.toString();
			if (stderr.length + chunk.length > MAX_OUTPUT_SIZE) {
				if (!outputTruncated) {
					stderr += "\n... [Output truncated - exceeded 1MB limit] ...";
					outputTruncated = true;
				}
			} else {
				stderr += chunk;
			}
		});

		child.on("error", (error) => {
			reject(error);
		});

		child.on("close", (code) => {
			if (signal?.aborted) {
				reject(new Error("Interrupted"));
			} else if (code !== 0 && code !== null) {
				// For some commands like ripgrep, exit code 1 is normal (no matches)
				if (code === 1 && command.includes("rg")) {
					resolve(""); // No matches for ripgrep
				} else if (stderr && !stdout) {
					reject(new Error(stderr));
				} else {
					resolve(stdout || "");
				}
			} else {
				resolve(stdout || stderr || "");
			}
		});

		// Kill the process if the signal is aborted
		if (signal) {
			signal.addEventListener(
				"abort",
				() => {
					child.kill("SIGTERM");
				},
				{ once: true },
			);
		}
	});
}

export async function executeTool(name: string, args: string, signal?: AbortSignal): Promise<string> {
	const parsed = JSON.parse(args);

	switch (name) {
		case "read": {
			const path = parsed.path;
			if (!path) return "Error: path parameter is required";
			const file = resolve(path);
			if (!existsSync(file)) return `File not found: ${file}`;

			// Check file size before reading
			const stats = statSync(file);
			const MAX_FILE_SIZE = 1024 * 1024; // 1MB limit
			if (stats.size > MAX_FILE_SIZE) {
				// Read only the first 1MB
				const fd = openSync(file, "r");
				const buffer = Buffer.alloc(MAX_FILE_SIZE);
				readSync(fd, buffer, 0, MAX_FILE_SIZE, 0);
				closeSync(fd);
				return buffer.toString("utf8") + "\n\n... [File truncated - exceeded 1MB limit] ...";
			}

			const data = readFileSync(file, "utf8");
			return data;
		}

		case "list": {
			const path = parsed.path || ".";
			const dir = resolve(path);
			if (!existsSync(dir)) return `Directory not found: ${dir}`;
			const entries = readdirSync(dir, { withFileTypes: true });
			return entries.map((entry) => (entry.isDirectory() ? entry.name + "/" : entry.name)).join("\n");
		}

		case "bash": {
			const command = parsed.command;
			if (!command) return "Error: command parameter is required";
			try {
				const output = await execWithAbort(command, signal);
				return output || "Command executed successfully";
			} catch (e: any) {
				if (e.message === "Interrupted") {
					throw e; // Re-throw interruption
				}
				throw new Error(`Command failed: ${e.message}`);
			}
		}

		case "glob": {
			const pattern = parsed.pattern;
			if (!pattern) return "Error: pattern parameter is required";
			const searchPath = parsed.path || process.cwd();

			try {
				const matches = await glob(pattern, {
					cwd: searchPath,
					dot: true,
					nodir: false,
					mark: true, // Add / to directories
				});

				if (matches.length === 0) {
					return "No files found matching the pattern";
				}

				// Sort alphabetically for deterministic output
				return matches.sort().join("\n");
			} catch (e: any) {
				return `Glob error: ${e.message}`;
			}
		}

		case "rg": {
			const args = parsed.args;
			if (!args) return "Error: args parameter is required";

			// Force ripgrep to never read from stdin by redirecting stdin from /dev/null
			const cmd = `rg ${args} < /dev/null`;

			try {
				const output = await execWithAbort(cmd, signal);
				return output.trim() || "No matches found";
			} catch (e: any) {
				if (e.message === "Interrupted") {
					throw e; // Re-throw interruption
				}
				return `ripgrep error: ${e.message}`;
			}
		}

		default:
			return `Unknown tool: ${name}`;
	}
}

@ -1,9 +0,0 @@
{
	"extends": "../../tsconfig.base.json",
	"compilerOptions": {
		"outDir": "./dist",
		"rootDir": "./src"
	},
	"include": ["src/**/*"],
	"exclude": ["node_modules", "dist"]
}

@ -1,4 +1,3 @@
-import { main as agentMain } from "@mariozechner/pi-agent-old";
 import chalk from "chalk";
 import { getActivePod, loadConfig } from "../config.js";

@ -77,7 +76,7 @@ Current working directory: ${process.cwd()}`;
 // Call agent main function directly
 try {
-	await agentMain(args);
+	throw new Error("Not implemented");
 } catch (err: any) {
 	console.error(chalk.red(`Agent error: ${err.message}`));
 	process.exit(1);