Mirror of https://github.com/getcompanion-ai/co-mono.git (synced 2026-04-15 07:04:45 +00:00)

Commit ffc9be8867 (parent 4e7a340460): "Agent package + coding agent WIP, refactored web-ui prompts"

58 changed files with 5138 additions and 2206 deletions. Shown below: docs/agent.md (675 lines changed).
@@ -1,15 +1,23 @@

# Agent Architecture

## Executive Summary

This document proposes extracting the agent infrastructure from `@mariozechner/pi-web-ui` into two new packages:

1. **`@mariozechner/agent`** - General-purpose agent package with transport abstraction, state management, and attachment support
2. **`@mariozechner/coding-agent`** - Specialized coding agent built on the general agent, with file manipulation tools and session management

The new architecture will provide:

- **General agent core** with transport abstraction (ProviderTransport, AppTransport)
- **Reactive state management** with a subscribe/emit pattern
- **Attachment support** (type definitions only - processing stays in consumers)
- **Message transformation** pipeline for filtering and adapting messages
- **Message queueing** for out-of-band message injection
- **Full abort support** throughout the execution pipeline
- **Event-driven API** for flexible UI integration
- **Clean separation** between agent logic and the presentation layer
- **Coding-specific tools** (read, bash, edit, write) in a specialized package
- **Session management** for conversation persistence and resume capability

## Current Architecture Analysis
@@ -18,7 +26,7 @@ The new architecture will provide:

```
pi-mono/
├── packages/ai/      # Core AI streaming (GOOD - keep as-is)
├── packages/web-ui/  # Web UI with embedded agent (EXTRACT core agent logic)
├── packages/agent/   # OLD - needs to be replaced
├── packages/tui/     # Terminal UI lib (GOOD - low-level primitives)
├── packages/proxy/   # CORS proxy (unrelated)
```
@@ -77,9 +85,9 @@ interface AgentToolResult<T> {

### packages/web-ui/src/agent - Web Agent

**Status:** ✅ KEEP AS-IS for now; will be replaced later, after the new packages are proven

**Architecture:**
@@ -94,60 +102,49 @@ class Agent {

```typescript
class Agent {
	async prompt(input: string, attachments?: Attachment[]): Promise<void>
	abort(): void
	subscribe(fn: (e: AgentEvent) => void): () => void
	setSystemPrompt(v: string): void
	setModel(m: Model<any>): void
	setThinkingLevel(l: ThinkingLevel): void
	setTools(t: AgentTool<any>[]): void
	replaceMessages(ms: AppMessage[]): void
	appendMessage(m: AppMessage): void
	async queueMessage(m: AppMessage): Promise<void>
	clearMessages(): void
}
```

**Key Features (will be the basis for the new `@mariozechner/agent` package):**

- ✅ **Transport abstraction** (ProviderTransport for direct API calls, AppTransport for a server-side proxy)
- ✅ **Attachment type definition** (id, type, fileName, mimeType, size, content, extractedText, preview)
- ✅ **Message transformation** pipeline (app messages → LLM messages, with filtering)
- ✅ **Reactive state** (subscribe/emit pattern for UI updates)
- ✅ **Message queueing** for injecting messages out-of-band during the agent loop
- ✅ **Abort support** (one AbortController per prompt)
- ✅ **State management** (systemPrompt, model, thinkingLevel, tools, messages, isStreaming, etc.)

**Strategy:**

1. Use this implementation as the **reference design** for `@mariozechner/agent`
2. Create the new `@mariozechner/agent` package by copying/adapting this code
3. Keep web-ui on its own embedded agent until the new package is proven stable
4. Eventually migrate web-ui to `@mariozechner/agent` (Phase 4 of the migration plan)
5. Document processing (PDF/DOCX/PPTX/Excel) stays in web-ui permanently

### packages/agent - OLD Implementation

**Status:** ⚠️ REMOVE COMPLETELY

**Architecture:**

```typescript
class Agent {
	constructor(
		config: AgentConfig,
		renderer?: AgentEventReceiver,
		sessionManager?: SessionManager
	)

	async ask(userMessage: string): Promise<void>
	interrupt(): void
	setEvents(events: AgentEvent[]): void
}
```

**Why it should be removed:**

1. **Tightly coupled to the OpenAI SDK** - Not provider-agnostic; hardcoded to OpenAI's API
2. **Outdated architecture** - Superseded by web-ui's better agent design
3. **Mixed concerns** - Agent logic, tool implementations, and rendering all in one package
4. **Limited scope** - Cannot be reused across different UI implementations

**What to salvage before removal:**

1. **SessionManager** - Port to `@mariozechner/coding-agent` (JSONL-based session persistence)
2. **Tool implementations** - Adapt the read, bash, edit, and write tools for coding-agent
3. **Renderer abstractions** - Port the TuiRenderer/ConsoleRenderer/JsonRenderer concepts to the coding-agent CLI

**Action:** Delete this package entirely after extracting the useful components into the new packages.

## Proposed Architecture
@@ -155,132 +152,440 @@ class Agent {

```
pi-mono/
├── packages/ai/                  # [unchanged] Core streaming library
│
├── packages/agent/               # [NEW] General-purpose agent
│   ├── src/
│   │   ├── agent.ts              # Main Agent class
│   │   ├── types.ts              # AgentState, AgentEvent, Attachment, etc.
│   │   ├── transports/
│   │   │   ├── types.ts          # AgentTransport interface
│   │   │   ├── ProviderTransport.ts  # Direct API calls
│   │   │   ├── AppTransport.ts   # Server-side proxy
│   │   │   ├── proxy-types.ts    # Proxy event types
│   │   │   └── index.ts          # Transport exports
│   │   └── index.ts              # Public API
│   └── package.json
│
├── packages/coding-agent/        # [NEW] Coding-specific agent + CLI
│   ├── src/
│   │   ├── coding-agent.ts       # CodingAgent wrapper (uses @mariozechner/agent)
│   │   ├── session-manager.ts    # Session persistence (JSONL)
│   │   ├── tools/
│   │   │   ├── read-tool.ts      # Read files (with pagination)
│   │   │   ├── bash-tool.ts      # Shell execution
│   │   │   ├── edit-tool.ts      # File editing (old_string → new_string)
│   │   │   ├── write-tool.ts     # File creation/replacement
│   │   │   └── index.ts          # Tool exports
│   │   ├── cli/
│   │   │   ├── index.ts          # CLI entry point
│   │   │   ├── renderers/
│   │   │   │   ├── tui-renderer.ts      # Rich terminal UI
│   │   │   │   ├── console-renderer.ts  # Simple console output
│   │   │   │   └── json-renderer.ts     # JSONL output for piping
│   │   │   └── main.ts           # CLI app logic
│   │   ├── types.ts              # Public types
│   │   └── index.ts              # Public API (agent + tools)
│   └── package.json              # Exports both library + CLI binary
│
├── packages/web-ui/              # [updated] Uses @mariozechner/agent
│   ├── src/
│   │   ├── utils/
│   │   │   └── attachment-utils.ts  # Document processing (keep here)
│   │   └── ...                   # Other web UI code
│   └── package.json              # Now depends on @mariozechner/agent
│
└── packages/tui/                 # [unchanged] Low-level terminal primitives
```
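The `edit-tool.ts` entry in the tree above names an old_string → new_string edit. A common way to implement that operation is to require the target string to be unique in the file before replacing it; this is a hedged sketch of that idea, not the package's actual implementation:

```typescript
// Replace oldString with newString in a file's contents, but only if
// oldString occurs exactly once - otherwise the edit is ambiguous and
// the caller should supply more surrounding context.
function applyEdit(content: string, oldString: string, newString: string): string {
  const first = content.indexOf(oldString);
  if (first === -1) {
    throw new Error("old_string not found in file");
  }
  if (content.indexOf(oldString, first + 1) !== -1) {
    throw new Error("old_string is not unique; provide more surrounding context");
  }
  return content.slice(0, first) + newString + content.slice(first + oldString.length);
}

const edited = applyEdit("const port = 8080;\n", "8080", "3000");
// edited === "const port = 3000;\n"
```

The uniqueness check is the interesting design choice: it forces the model to quote enough context that the edit cannot silently land in the wrong place.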
### Dependency Graph

```
┌─────────────────────┐
│   @mariozechner/    │
│       pi-ai         │  ← Core streaming, tool interface
└──────────┬──────────┘
           │ depends on
           ↓
┌─────────────────────┐
│   @mariozechner/    │
│       agent         │  ← General agent (transports, state, attachments)
└──────────┬──────────┘
           │ depends on
           ↓
      ┌────┴──────────────────┐
      ↓                       ↓
┌─────────────────────┐  ┌─────────────────────┐
│   @mariozechner/    │  │   @mariozechner/    │
│    coding-agent     │  │      pi-web-ui      │
│ (lib + CLI + tools) │  │ (+ doc processing)  │
└─────────────────────┘  └─────────────────────┘
```
## Package: @mariozechner/agent

### Core Types

```typescript
export interface Attachment {
	id: string;
	type: "image" | "document";
	fileName: string;
	mimeType: string;
	size: number;
	content: string; // base64 encoded (without data URL prefix)
	extractedText?: string; // For documents
	preview?: string; // base64 image preview
}

export type ThinkingLevel = "off" | "minimal" | "low" | "medium" | "high";

// AppMessage abstraction - extends base LLM messages with app-specific features
export type UserMessageWithAttachments = UserMessage & { attachments?: Attachment[] };

// Extensible interface for custom app messages (via declaration merging).
// Apps can add their own message types:
// declare module "@mariozechner/agent" {
//   interface CustomMessages {
//     artifact: ArtifactMessage;
//     notification: NotificationMessage;
//   }
// }
export interface CustomMessages {
	// Empty by default - apps extend via declaration merging
}

// AppMessage: Union of LLM messages + attachments + custom messages
export type AppMessage =
	| AssistantMessage
	| UserMessageWithAttachments
	| ToolResultMessage
	| CustomMessages[keyof CustomMessages];

export interface AgentState {
	systemPrompt: string;
	model: Model<any>;
	thinkingLevel: ThinkingLevel;
	tools: AgentTool<any>[];
	messages: AppMessage[]; // Can include attachments + custom message types
	isStreaming: boolean;
	streamMessage: Message | null;
	pendingToolCalls: Set<string>;
	error?: string;
}

export type AgentEvent =
	| { type: "state-update"; state: AgentState }
	| { type: "started" }
	| { type: "completed" };
```
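A discriminated union like `AgentEvent` lends itself to exhaustive handling in consumers. A small illustrative sketch (with the state shape reduced to a single field for brevity):

```typescript
// Reduced event union mirroring the AgentEvent shape above.
type DemoState = { isStreaming: boolean };
type DemoEvent =
  | { type: "state-update"; state: DemoState }
  | { type: "started" }
  | { type: "completed" };

// Exhaustive switch: the `never` assignment in the default branch makes
// the compiler flag any event variant added later but not handled here.
function describe(event: DemoEvent): string {
  switch (event.type) {
    case "state-update":
      return event.state.isStreaming ? "streaming" : "idle";
    case "started":
      return "run started";
    case "completed":
      return "run completed";
    default: {
      const exhaustive: never = event;
      return exhaustive;
    }
  }
}
```

This pattern keeps UI integrations honest when the event union grows.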
### AppMessage Abstraction

The `AppMessage` type is a key abstraction that extends base LLM messages with app-specific features while maintaining type safety and extensibility.

**Key Benefits:**

1. **Extends base messages** - Adds an `attachments` field to `UserMessage` for file uploads
2. **Type-safe extensibility** - Apps can add custom message types via declaration merging
3. **Backward compatible** - Works seamlessly with base LLM messages from `@mariozechner/pi-ai`
4. **Message transformation** - Filters app-specific fields before sending to the LLM

**Usage Example (Web UI):**

```typescript
import type { AppMessage } from "@mariozechner/agent";

// Extend with a custom message type for artifacts
declare module "@mariozechner/agent" {
	interface CustomMessages {
		artifact: ArtifactMessage;
	}
}

interface ArtifactMessage {
	role: "artifact";
	action: "create" | "update" | "delete";
	filename: string;
	content?: string;
	title?: string;
	timestamp: string;
}

// Now AppMessage includes: AssistantMessage | UserMessageWithAttachments | ToolResultMessage | ArtifactMessage
const messages: AppMessage[] = [
	{ role: "user", content: "Hello", attachments: [attachment] },
	{ role: "assistant", content: [{ type: "text", text: "Hi!" }], /* ... */ },
	{ role: "artifact", action: "create", filename: "test.ts", content: "...", timestamp: "..." }
];
```
**Usage Example (Coding Agent):**

```typescript
import type { AppMessage } from "@mariozechner/agent";

// The coding agent can extend with session metadata
declare module "@mariozechner/agent" {
	interface CustomMessages {
		session_metadata: SessionMetadataMessage;
	}
}

interface SessionMetadataMessage {
	role: "session_metadata";
	sessionId: string;
	timestamp: string;
	workingDirectory: string;
}
```
**Message Transformation:**

The `messageTransformer` function converts app messages to LLM-compatible messages, including handling attachments:

```typescript
function defaultMessageTransformer(messages: AppMessage[]): Message[] {
	return messages
		.filter((m) => {
			// Only keep standard LLM message roles
			return m.role === "user" || m.role === "assistant" || m.role === "toolResult";
		})
		.map((m) => {
			if (m.role === "user") {
				const { attachments, ...baseMessage } = m as any;

				// If there are no attachments, return as-is
				if (!attachments || attachments.length === 0) {
					return baseMessage as Message;
				}

				// Convert attachments to content blocks
				const content = Array.isArray(baseMessage.content)
					? [...baseMessage.content]
					: [{ type: "text", text: baseMessage.content }];

				for (const attachment of attachments) {
					// Add image blocks for image attachments
					if (attachment.type === "image") {
						content.push({
							type: "image",
							data: attachment.content,
							mimeType: attachment.mimeType
						});
					}
					// Add text blocks for documents with extracted text
					else if (attachment.type === "document" && attachment.extractedText) {
						content.push({
							type: "text",
							text: attachment.extractedText
						});
					}
				}

				return { ...baseMessage, content } as Message;
			}
			return m as Message;
		});
}
```

This ensures that:

- Custom message types (like `artifact` and `session_metadata`) are filtered out
- Image attachments are converted to `ImageContent` blocks
- Document attachments with extracted text are converted to `TextContent` blocks
- The `attachments` field itself is stripped (replaced by proper content blocks)
- The LLM receives only standard `Message` types from `@mariozechner/pi-ai`
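To make the transformation concrete, here is a self-contained, simplified version of the pipeline above, with the real package types replaced by plain illustrative shapes:

```typescript
// Simplified stand-ins for the real types (illustration only).
type Content = { type: "text"; text: string } | { type: "image"; data: string; mimeType: string };
type SimpleAttachment = { type: "image" | "document"; content: string; mimeType: string; extractedText?: string };
type SimpleMessage = { role: string; content: string | Content[]; attachments?: SimpleAttachment[] };

// Mirrors defaultMessageTransformer: filter non-LLM roles, then expand
// user attachments into proper content blocks and strip the field.
function transform(messages: SimpleMessage[]): SimpleMessage[] {
  return messages
    .filter((m) => m.role === "user" || m.role === "assistant" || m.role === "toolResult")
    .map((m) => {
      if (m.role !== "user" || !m.attachments?.length) {
        const { attachments, ...rest } = m;
        return rest;
      }
      const content: Content[] = Array.isArray(m.content)
        ? [...m.content]
        : [{ type: "text", text: m.content }];
      for (const a of m.attachments) {
        if (a.type === "image") content.push({ type: "image", data: a.content, mimeType: a.mimeType });
        else if (a.extractedText) content.push({ type: "text", text: a.extractedText });
      }
      const { attachments, ...rest } = m;
      return { ...rest, content };
    });
}

const out = transform([
  { role: "artifact", content: "ignored" }, // custom message: filtered out
  {
    role: "user",
    content: "Summarize",
    attachments: [{ type: "document", content: "", mimeType: "application/pdf", extractedText: "Report text" }],
  },
]);
```

The result contains a single user message whose content is two text blocks ("Summarize" plus the extracted document text) and no `attachments` field.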
### Agent Class

```typescript
export interface AgentOptions {
	initialState?: Partial<AgentState>;
	transport: AgentTransport;
	// Transform app messages to LLM-compatible messages before sending
	messageTransformer?: (messages: AppMessage[]) => Message[] | Promise<Message[]>;
}

export class Agent {
	constructor(opts: AgentOptions);

	get state(): AgentState;
	subscribe(fn: (e: AgentEvent) => void): () => void;

	// State mutators
	setSystemPrompt(v: string): void;
	setModel(m: Model<any>): void;
	setThinkingLevel(l: ThinkingLevel): void;
	setTools(t: AgentTool<any>[]): void;
	replaceMessages(ms: AppMessage[]): void;
	appendMessage(m: AppMessage): void;
	async queueMessage(m: AppMessage): Promise<void>;
	clearMessages(): void;

	// Main prompt method
	async prompt(input: string, attachments?: Attachment[]): Promise<void>;

	// Abort current operation
	abort(): void;
}
```

**Key Features:**

1. **Reactive state** - Subscribe to state updates for UI binding
2. **Transport abstraction** - Pluggable backends (direct API, proxy server, etc.)
3. **Message transformation** - Convert app-specific messages to LLM format
4. **Message queueing** - Inject messages during the agent loop (for tool results, errors)
5. **Attachment support** - Type-safe attachment handling (processing is external)
6. **Abort support** - Cancel in-progress operations
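The reactive `subscribe` pattern in the Agent API can be reduced to a minimal typed emitter; this is an illustrative reduction of the idea, not the actual implementation:

```typescript
type Listener<E> = (event: E) => void;

// Minimal subscribe/emit store in the style described above:
// subscribe() returns an unsubscribe function, emit() fans out to listeners.
class Emitter<E> {
  private listeners = new Set<Listener<E>>();

  subscribe(fn: Listener<E>): () => void {
    this.listeners.add(fn);
    return () => this.listeners.delete(fn);
  }

  emit(event: E): void {
    for (const fn of this.listeners) fn(event);
  }
}

// Usage: an agent-like object emitting state updates.
type DemoEvent = { type: "state-update"; state: { isStreaming: boolean } };
const emitter = new Emitter<DemoEvent>();
const seen: DemoEvent[] = [];
const unsubscribe = emitter.subscribe((e) => seen.push(e));
emitter.emit({ type: "state-update", state: { isStreaming: true } });
unsubscribe();
emitter.emit({ type: "state-update", state: { isStreaming: false } });
// seen now holds only the first event
```

Returning the unsubscribe function from `subscribe` keeps UI teardown (e.g. component unmount) a one-liner.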
### Transport Interface

```typescript
export interface AgentRunConfig {
	systemPrompt: string;
	tools: AgentTool<any>[];
	model: Model<any>;
	reasoning?: "low" | "medium" | "high";
	getQueuedMessages?: <T>() => Promise<QueuedMessage<T>[]>;
}

export interface AgentTransport {
	run(
		messages: Message[],
		userMessage: Message,
		config: AgentRunConfig,
		signal?: AbortSignal,
	): AsyncIterable<AgentEvent>;
}
```
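As a sketch of the `AsyncIterable` contract, a test double for the transport might look like the following. The event and message shapes here are simplified placeholders, not the package's real types:

```typescript
// Simplified shapes for illustration; the real package defines richer types.
type Msg = { role: string; content: string };
type Event = { type: "started" } | { type: "text"; text: string } | { type: "completed" };

interface Transport {
  run(messages: Msg[], userMessage: Msg, signal?: AbortSignal): AsyncIterable<Event>;
}

// A mock transport that streams a canned reply word by word - useful for
// unit-testing UIs against the AsyncIterable contract without a provider.
class MockTransport implements Transport {
  constructor(private reply: string) {}

  async *run(messages: Msg[], userMessage: Msg, signal?: AbortSignal): AsyncIterable<Event> {
    yield { type: "started" };
    for (const word of this.reply.split(" ")) {
      if (signal?.aborted) return; // honor cancellation between chunks
      yield { type: "text", text: word };
    }
    yield { type: "completed" };
  }
}

async function collect(): Promise<Event[]> {
  const t = new MockTransport("hello there");
  const events: Event[] = [];
  for await (const e of t.run([], { role: "user", content: "hi" })) events.push(e);
  return events;
}
```

Because transports are plain async generators, abort handling is just an early `return` between yields.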
### ProviderTransport

```typescript
export class ProviderTransport implements AgentTransport {
	async *run(messages: Message[], userMessage: Message, cfg: AgentRunConfig, signal?: AbortSignal) {
		// Calls LLM providers directly using agentLoop from @mariozechner/pi-ai
		// Optionally routes through a CORS proxy if configured
	}
}
```

### AppTransport

```typescript
export class AppTransport implements AgentTransport {
	constructor(proxyUrl: string);

	async *run(messages: Message[], userMessage: Message, cfg: AgentRunConfig, signal?: AbortSignal) {
		// Routes requests through the app server with user authentication
		// The server manages API keys and usage tracking
	}
}
```
## Package: @mariozechner/coding-agent

### CodingAgent Class

```typescript
export interface CodingAgentOptions {
	systemPrompt: string;
	model: Model<any>;
	reasoning?: "low" | "medium" | "high";
	apiKey: string;
	workingDirectory?: string;
	sessionManager?: SessionManager;
}

export class CodingAgent {
	constructor(options: CodingAgentOptions);

	// Access underlying agent
	get agent(): Agent;

	// State accessors
	get state(): AgentState;
	subscribe(fn: (e: AgentEvent) => void): () => void;

	// Send a message to the agent
	async prompt(message: string, attachments?: Attachment[]): Promise<void>;

	// Abort current operation
	abort(): void;

	// Message management for session restoration
	replaceMessages(messages: AppMessage[]): void;
	getMessages(): AppMessage[];
}
```

**Key design decisions:**

1. **Wraps @mariozechner/agent** - Builds on the general agent package
2. **Pre-configured tools** - Includes the read, bash, edit, and write tools
3. **Session management** - Optional JSONL-based session persistence
4. **Working directory context** - All file operations are relative to this directory
5. **Simple API** - Hides transport complexity; uses ProviderTransport by default
### Usage Example (TUI)

```typescript
import { CodingAgent, SessionManager } from "@mariozechner/coding-agent";
import { getModel } from "@mariozechner/pi-ai";

const session = new SessionManager({ continue: true });
const agent = new CodingAgent({
	systemPrompt: "You are a coding assistant...",
	model: getModel("openai", "gpt-4"),
	apiKey: process.env.OPENAI_API_KEY!,
	workingDirectory: process.cwd(),
	sessionManager: session,
});

// Restore previous session
if (session.hasData()) {
	agent.replaceMessages(session.getMessages());
}

// Subscribe to state changes
agent.subscribe((event) => {
	if (event.type === "state-update") {
		renderer.render(event.state);
	} else if (event.type === "completed") {
		session.save(agent.getMessages());
	}
});

// Send prompt
await agent.prompt("Fix the bug in server.ts");
```
### Usage Example (Web UI)

```typescript
import { Agent, ProviderTransport, Attachment } from "@mariozechner/agent";
import { getModel } from "@mariozechner/pi-ai";
import { loadAttachment } from "./utils/attachment-utils"; // Web UI keeps this

const agent = new Agent({
	transport: new ProviderTransport(),
	initialState: {
		systemPrompt: "You are a helpful assistant...",
		model: getModel("google", "gemini-2.5-flash"),
		thinkingLevel: "low",
		tools: [],
	},
});

// Subscribe to state changes for UI updates
agent.subscribe((event) => {
	if (event.type === "state-update") {
		updateUI(event.state);
	}
});

// Handle a file upload and send a prompt
const file = fileInput.files[0];
const attachment = await loadAttachment(file); // Processes PDF/DOCX/etc.
await agent.prompt("Analyze this document", [attachment]);
```

### Session Manager
@@ -300,8 +605,7 @@ export interface SessionMetadata {

export interface SessionData {
	metadata: SessionMetadata;
	messages: AppMessage[]; // Conversation history
}

export class SessionManager {
@@ -310,8 +614,8 @@ export class SessionManager {

	// Start a new session (writes metadata)
	startSession(config: CodingAgentOptions): void;

	// Append a message to the session (appends to JSONL)
	appendMessage(message: AppMessage): void;

	// Check if session has existing data
	hasData(): boolean;
@@ -320,7 +624,7 @@ export class SessionManager {

	getData(): SessionData | null;

	// Get just the messages for agent restoration
	getMessages(): AppMessage[];

	// Get session file path
	getFilePath(): string;
@@ -332,12 +636,19 @@ export class SessionManager {

**Session Storage Format (JSONL):**

```jsonl
{"type":"metadata","id":"uuid","timestamp":"2025-10-12T10:00:00Z","cwd":"/path","config":{...}}
{"type":"message","message":{"role":"user","content":"Fix the bug in server.ts"}}
{"type":"message","message":{"role":"assistant","content":[{"type":"text","text":"I'll help..."}],...}}
{"type":"message","message":{"role":"toolResult","toolCallId":"call_123","output":"..."}}
{"type":"message","message":{"role":"assistant","content":[{"type":"text","text":"Fixed!"}],...}}
```

**How it works:**

- The first line is session metadata (id, timestamp, working directory, config)
- Each subsequent line is an `AppMessage` from `agent.state.messages`
- Messages are appended as they are added to the agent state (append-only)
- On session restore, all message lines are read back to reconstruct the conversation history

**Session File Naming:**

```
~/.pi/sessions/--path-to-project--/
```
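The append-only JSONL format above can be exercised with a small round-trip sketch. This version is pure string-based for illustration; the real SessionManager appends to a file on disk:

```typescript
type SessionLine =
  | { type: "metadata"; id: string; timestamp: string; cwd: string }
  | { type: "message"; message: { role: string; content: unknown } };

// Serialize a session: metadata first, then one message per line (append-only).
function serializeSession(
  meta: { id: string; timestamp: string; cwd: string },
  messages: { role: string; content: unknown }[],
): string {
  const lines: SessionLine[] = [
    { type: "metadata" as const, ...meta },
    ...messages.map((m) => ({ type: "message" as const, message: m })),
  ];
  return lines.map((l) => JSON.stringify(l)).join("\n");
}

// Restore: parse every line, keep only message lines, in order.
function restoreMessages(jsonl: string): { role: string; content: unknown }[] {
  return jsonl
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as SessionLine)
    .filter((l): l is Extract<SessionLine, { type: "message" }> => l.type === "message")
    .map((l) => l.message);
}

const stored = serializeSession(
  { id: "uuid", timestamp: "2025-10-12T10:00:00Z", cwd: "/path" },
  [
    { role: "user", content: "Fix the bug in server.ts" },
    { role: "assistant", content: "Fixed!" },
  ],
);
const restored = restoreMessages(stored);
```

Because each line is an independent JSON object, appending never requires rewriting the file, and a truncated final line (e.g. after a crash) can simply be skipped on restore.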
@@ -643,9 +954,11 @@ export class WriteTool implements AgentTool<typeof writeToolSchema, WriteToolDet
```
}
```

## CLI Interface (included in @mariozechner/coding-agent)

The coding-agent package ships both the library and the CLI in a single package.

### CLI Usage

```bash
# Interactive mode (default)
```
@@ -667,7 +980,7 @@ coding-agent --model openai/gpt-4 --api-key $KEY

```bash
coding-agent --json < prompts.jsonl > results.jsonl
```

### CLI Arguments

```typescript
{
	// …argument definitions continue in the full document
}
```
@ -818,33 +1131,58 @@ app.listen(3000);
|
|||
|
||||
## Migration Plan
|
||||
|
||||
### Phase 1: Extract Core Package
|
||||
### Phase 1: Create General Agent Package
|
||||
1. Create `packages/agent/` structure
|
||||
2. **COPY** Agent class from web-ui/src/agent/agent.ts (don't extract yet)
|
||||
3. Copy types (AgentState, AgentEvent, Attachment, DebugLogEntry, ThinkingLevel)
|
||||
4. Copy transports (types.ts, ProviderTransport.ts, AppTransport.ts, proxy-types.ts)
|
||||
5. Adapt code to work as standalone package
|
||||
6. Write unit tests for Agent class
|
||||
7. Write tests for both transports
|
||||
8. Publish `@mariozechner/agent@0.1.0`
|
||||
9. **Keep web-ui unchanged** - it continues using its embedded agent
|
||||
|
||||
### Phase 2: Create Coding Agent Package (with CLI)
|
||||
1. Create `packages/coding-agent/` structure
|
||||
2. Port SessionManager from old agent package
|
||||
3. Implement BashTool, EditTool, WriteTool
|
||||
4. Implement CodingAgent class using pi-ai/agentLoop
|
||||
5. Write tests for each tool
|
||||
6. Write integration tests
|
||||
3. Implement ReadTool, BashTool, EditTool, WriteTool
|
||||
4. Implement CodingAgent class (wraps @mariozechner/agent)
|
||||
5. Implement CLI in `src/cli/` directory:
|
||||
- CLI entry point (index.ts)
|
||||
- TuiRenderer, ConsoleRenderer, JsonRenderer
|
||||
- Argument parsing
|
||||
- Interactive and single-shot modes
|
||||
6. Write tests for tools and agent
|
||||
7. Write integration tests for CLI
|
||||
8. Publish `@mariozechner/coding-agent@0.1.0` (includes library + CLI binary)
|
||||
|
||||
### Phase 2: Build TUI
|
||||
1. Create `packages/coding-agent-tui/`
|
||||
2. Port TuiRenderer from old agent package
|
||||
3. Port ConsoleRenderer, JsonRenderer
|
||||
4. Implement CLI argument parsing
|
||||
5. Implement interactive and single-shot modes
|
||||
6. Test session resume functionality
|
||||
### Phase 3: Prove Out New Packages
|
||||
1. Use coding-agent (library + CLI) extensively
|
||||
2. Fix bugs and iterate on API design
|
||||
3. Gather feedback from real usage
|
||||
4. Ensure stability and performance
|
||||
### Phase 4: Migrate Web UI (OPTIONAL, later)

1. Once new `@mariozechner/agent` is proven stable
2. Update web-ui package.json to depend on `@mariozechner/agent`
3. Remove src/agent/agent.ts, src/agent/types.ts, src/agent/transports/
4. Keep src/utils/attachment-utils.ts (document processing)
5. Update imports to use `@mariozechner/agent`
6. Test that web UI still works correctly
7. Verify document attachments (PDF, DOCX, etc.) still work

### Phase 5: Cleanup

1. Deprecate/remove old `packages/agent/` package
2. Update all documentation
3. Create migration guide
4. Add examples for all use cases

### Phase 6: Future Enhancements

1. Build VS Code extension using `@mariozechner/coding-agent`
2. Add more tools (grep, find, glob, etc.) as optional plugins
3. Plugin system for custom tools
4. Parallel tool execution
5. Streaming tool output for long-running commands

## Open Questions & Decisions

This architecture provides:

### General Agent Package (`@mariozechner/agent`)

✅ **Transport abstraction** - Pluggable backends (ProviderTransport, AppTransport)
✅ **Reactive state** - Subscribe/emit pattern for UI binding
✅ **Message transformation** - Flexible pipeline for message filtering/adaptation
✅ **Message queueing** - Out-of-band message injection during agent loop
✅ **Attachment support** - Type-safe attachment handling (processing is external)
✅ **Abort support** - First-class cancellation with AbortController
✅ **Provider agnostic** - Works with any LLM provider via @mariozechner/pi-ai
✅ **Type-safe** - Full TypeScript with proper types
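
The subscribe/emit pattern above can be sketched like this. Names (`AgentState`, `emit`) are illustrative, not the actual `@mariozechner/agent` API:

```typescript
// Hypothetical sketch of the subscribe/emit reactive state pattern;
// names are illustrative, not the actual @mariozechner/agent API.
interface AgentStateData {
  running: boolean;
  messages: number;
}
type Listener = (state: AgentStateData) => void;

class AgentState {
  private listeners = new Set<Listener>();
  constructor(private state: AgentStateData) {}

  get(): AgentStateData {
    return this.state;
  }

  // Returns an unsubscribe function, the usual shape for UI bindings.
  subscribe(fn: Listener): () => void {
    this.listeners.add(fn);
    return () => {
      this.listeners.delete(fn);
    };
  }

  emit(next: Partial<AgentStateData>): void {
    this.state = { ...this.state, ...next };
    for (const fn of this.listeners) fn(this.state);
  }
}

// A UI subscribes once and re-renders on every emit.
const state = new AgentState({ running: false, messages: 0 });
const seen: number[] = [];
const unsubscribe = state.subscribe((s) => seen.push(s.messages));
state.emit({ messages: 1 });
state.emit({ running: true, messages: 2 });
unsubscribe();
state.emit({ messages: 3 }); // no longer observed
console.log(seen.join(",")); // → 1,2
```

Because subscribers receive the full state on every emit, any UI (Lit component, TUI pane, VS Code webview) can bind to the agent without the agent knowing about it.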

### Coding Agent Package (`@mariozechner/coding-agent`)

✅ **Builds on general agent** - Leverages transport abstraction and state management
✅ **Session persistence** - JSONL-based session storage and resume
✅ **Focused tools** - read, bash, edit, write (4 tools, no more)
✅ **Type-safe** - Full TypeScript with schema validation
✅ **Smart pagination** - 5000-line chunks with offset/limit for ReadTool
✅ **Working directory context** - All tools operate relative to project root
✅ **Simple API** - Hides complexity, easy to use
✅ **Testable** - Pure functions, mockable dependencies
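
JSONL session persistence can be sketched as an append-only event log: one JSON object per line, appended as the conversation progresses and replayed on resume. This is a sketch under assumed naming; the real SessionManager's schema and file layout may differ.

```typescript
// Hypothetical sketch of JSONL session persistence: one JSON event per line,
// appended as the conversation progresses and replayed on resume.
// The real SessionManager's schema and file layout may differ.
import { appendFileSync, existsSync, readFileSync, rmSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

interface SessionEvent {
  role: "user" | "assistant" | "toolResult";
  content: string;
}

class Session {
  constructor(private file: string) {}

  append(event: SessionEvent): void {
    // Append-only writes keep resume trivial and crash-tolerant.
    appendFileSync(this.file, JSON.stringify(event) + "\n");
  }

  resume(): SessionEvent[] {
    if (!existsSync(this.file)) return [];
    return readFileSync(this.file, "utf8")
      .split("\n")
      .filter((line) => line.length > 0)
      .map((line) => JSON.parse(line) as SessionEvent);
  }
}

const sessionFile = join(tmpdir(), `session-${process.pid}.jsonl`);
rmSync(sessionFile, { force: true }); // start fresh for the demo
const session = new Session(sessionFile);
session.append({ role: "user", content: "refactor foo.ts" });
session.append({ role: "assistant", content: "done" });

// A fresh Session over the same file resumes the full history.
const resumed = new Session(sessionFile).resume();
console.log(`${resumed.length} ${resumed[1].content}`); // → 2 done
```

The append-only format means a crashed or aborted run loses at most the event being written, and `--resume` only needs to replay the file into the message history.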

### Key Architectural Insights

1. **Extract, don't rewrite** - The web-ui agent is well-designed; extract it into a general package
2. **Separation of concerns** - Document processing (PDF/DOCX/etc.) stays in web-ui, only type definitions move to general agent
3. **Layered architecture** - pi-ai → agent → coding-agent → coding-agent-tui
4. **Reusable across UIs** - Web UI and coding agent both use the same general agent package
5. **Pluggable transports** - Easy to add new backends (local API, proxy server, etc.)
6. **Attachment flexibility** - Type is defined centrally, processing is done by consumers

package-lock.json: 823 changed lines (generated)
|
||||
"engines": {
|
||||
"node": ">=12"
|
||||
},
|
||||
"funding": {
|
||||
"url": "https://github.com/chalk/ansi-regex?sponsor=1"
|
||||
}
|
||||
},
|
||||
"packages/agent/node_modules/wrap-ansi/node_modules/ansi-styles": {
|
||||
"version": "6.2.3",
|
||||
"resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-6.2.3.tgz",
|
||||
"integrity": "sha512-4Dj6M28JB+oAH8kFkTLUo+a2jwOFkuqb3yucU0CANcRRUbxS0cP0nZYCGjcc3BNXwRIsUVmDGgzawme7zvJHvg==",
|
||||
"license": "MIT",
|
||||
"engines": {
|
||||
"node": ">=12"
|
||||
},
|
||||
"funding": {
|
||||
"url": "https://github.com/chalk/ansi-styles?sponsor=1"
|
||||
}
|
||||
},
|
||||
"packages/agent/node_modules/wrap-ansi/node_modules/strip-ansi": {
|
||||
"version": "7.1.2",
|
||||
"resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-7.1.2.tgz",
|
||||
"integrity": "sha512-gmBGslpoQJtgnMAvOVqGZpEz9dyoKTCzy2nfz/n8aIFhN/jCE/rCmcxabB6jOOHV+0WNnylOxaxBQPSvcWklhA==",
|
||||
"license": "MIT",
|
||||
"dependencies": {
|
||||
"ansi-regex": "^6.0.1"
|
||||
},
|
||||
"engines": {
|
||||
"node": ">=12"
|
||||
},
|
||||
"funding": {
|
||||
"url": "https://github.com/chalk/strip-ansi?sponsor=1"
|
||||
}
|
||||
},
|
||||
"packages/ai": {
|
||||
"name": "@mariozechner/pi-ai",
|
||||
"version": "0.5.44",
|
||||
|
|
@@ -5351,12 +5378,64 @@
        "ws": "^8.18.0"
      }
    },
    "packages/coding-agent": {
      "name": "@mariozechner/coding-agent",
      "version": "0.5.44",
      "license": "MIT",
      "dependencies": {
        "@mariozechner/pi-agent": "^0.5.44",
        "@mariozechner/pi-ai": "^0.5.44",
        "@mariozechner/pi-tui": "^0.5.44",
        "chalk": "^5.5.0",
        "glob": "^11.0.3"
      },
      "bin": {
        "coding-agent": "dist/cli.js"
      },
      "devDependencies": {
        "@types/node": "^24.3.0",
        "typescript": "^5.7.3",
        "vitest": "^3.2.4"
      },
      "engines": {
        "node": ">=20.0.0"
      }
    },
    "packages/coding-agent/node_modules/@types/node": {
      "version": "24.8.0",
      "resolved": "https://registry.npmjs.org/@types/node/-/node-24.8.0.tgz",
      "integrity": "sha512-5x08bUtU8hfboMTrJ7mEO4CpepS9yBwAqcL52y86SWNmbPX8LVbNs3EP4cNrIZgdjk2NAlP2ahNihozpoZIxSg==",
      "dev": true,
      "license": "MIT",
      "dependencies": {
        "undici-types": "~7.14.0"
      }
    },
    "packages/coding-agent/node_modules/chalk": {
      "version": "5.6.2",
      "resolved": "https://registry.npmjs.org/chalk/-/chalk-5.6.2.tgz",
      "integrity": "sha512-7NzBL0rN6fMUW+f7A6Io4h40qQlG+xGmtMxfbnH/K7TAtt8JQWVQK+6g0UXKMeVJoyV5EkkNsErQ8pVD3bLHbA==",
      "license": "MIT",
      "engines": {
        "node": "^12.17.0 || ^14.13 || >=16.0.0"
      },
      "funding": {
        "url": "https://github.com/chalk/chalk?sponsor=1"
      }
    },
    "packages/coding-agent/node_modules/undici-types": {
      "version": "7.14.0",
      "resolved": "https://registry.npmjs.org/undici-types/-/undici-types-7.14.0.tgz",
      "integrity": "sha512-QQiYxHuyZ9gQUIrmPo3IA+hUl4KYk8uSA7cHrcKd/l3p1OTpZcM0Tbp9x7FAtXdAYhlasd60ncPpgu6ihG6TOA==",
      "dev": true,
      "license": "MIT"
    },
    "packages/pods": {
      "name": "@mariozechner/pi",
      "version": "0.5.44",
      "license": "MIT",
      "dependencies": {
        "@mariozechner/pi-agent": "^0.5.44",
        "@mariozechner/pi-agent-old": "^0.5.44",
        "chalk": "^5.5.0"
      },
      "bin": {
@@ -7,7 +7,7 @@
   ],
   "scripts": {
     "clean": "npm run clean --workspaces",
-    "build": "npm run build -w @mariozechner/pi-tui && npm run build -w @mariozechner/pi-ai && npm run build -w @mariozechner/pi-web-ui && npm run build -w @mariozechner/pi-agent && npm run build -w @mariozechner/pi-proxy && npm run build -w @mariozechner/pi",
+    "build": "npm run build -w @mariozechner/pi-tui && npm run build -w @mariozechner/pi-ai && npm run build -w @mariozechner/pi-agent && npm run build -w @mariozechner/pi-agent-old && npm run build -w @mariozechner/coding-agent && npm run build -w @mariozechner/pi-web-ui && npm run build -w @mariozechner/pi-proxy && npm run build -w @mariozechner/pi",
     "dev": "concurrently --names \"ai,web-ui,tui,proxy\" --prefix-colors \"cyan,green,magenta,blue\" \"npm run dev -w @mariozechner/pi-ai\" \"npm run dev -w @mariozechner/pi-web-ui\" \"npm run dev -w @mariozechner/pi-tui\" \"npm run dev -w @mariozechner/pi-proxy\"",
     "check": "biome check --write . && npm run check --workspaces && tsc --noEmit",
     "test": "npm run test --workspaces --if-present",
packages/agent-old/package.json (new file, 47 lines)
@@ -0,0 +1,47 @@
{
  "name": "@mariozechner/pi-agent-old",
  "version": "0.5.44",
  "description": "General-purpose agent with tool calling and session persistence",
  "type": "module",
  "bin": {
    "pi-agent": "dist/cli.js"
  },
  "main": "./dist/index.js",
  "types": "./dist/index.d.ts",
  "files": [
    "dist"
  ],
  "scripts": {
    "clean": "rm -rf dist",
    "build": "tsc -p tsconfig.build.json && chmod +x dist/cli.js",
    "check": "biome check --write .",
    "prepublishOnly": "npm run clean && npm run build"
  },
  "dependencies": {
    "@mariozechner/pi-tui": "^0.5.44",
    "@types/glob": "^8.1.0",
    "chalk": "^5.5.0",
    "glob": "^11.0.3",
    "openai": "^5.12.2"
  },
  "devDependencies": {},
  "keywords": [
    "agent",
    "ai",
    "llm",
    "openai",
    "claude",
    "cli",
    "tui"
  ],
  "author": "Mario Zechner",
  "license": "MIT",
  "repository": {
    "type": "git",
    "url": "git+https://github.com/badlogic/pi-mono.git",
    "directory": "packages/agent"
  },
  "engines": {
    "node": ">=20.0.0"
  }
}
packages/agent-old/src/agent.ts (new file, 741 lines)
@@ -0,0 +1,741 @@
import OpenAI from "openai";
import type { ResponseFunctionToolCallOutputItem } from "openai/resources/responses/responses.mjs";
import type { SessionManager } from "./session-manager.js";
import { executeTool, toolsForChat, toolsForResponses } from "./tools/tools.js";

export type AgentEvent =
  | { type: "session_start"; sessionId: string; model: string; api: string; baseURL: string; systemPrompt: string }
  | { type: "assistant_start" }
  | { type: "reasoning"; text: string }
  | { type: "tool_call"; toolCallId: string; name: string; args: string }
  | { type: "tool_result"; toolCallId: string; result: string; isError: boolean }
  | { type: "assistant_message"; text: string }
  | { type: "error"; message: string }
  | { type: "user_message"; text: string }
  | { type: "interrupted" }
  | {
      type: "token_usage";
      inputTokens: number;
      outputTokens: number;
      totalTokens: number;
      cacheReadTokens: number;
      cacheWriteTokens: number;
      reasoningTokens: number;
    };

export interface AgentEventReceiver {
  on(event: AgentEvent): Promise<void>;
}

export interface AgentConfig {
  apiKey: string;
  baseURL: string;
  model: string;
  api: "completions" | "responses";
  systemPrompt: string;
}

export interface ToolCall {
  name: string;
  arguments: string;
  id: string;
}

// Cache for model reasoning support detection, per API type
const modelReasoningSupport = new Map<string, { completions?: boolean; responses?: boolean }>();

// Provider detection based on base URL
function detectProvider(baseURL?: string): "openai" | "gemini" | "groq" | "anthropic" | "openrouter" | "other" {
  if (!baseURL) return "openai";
  if (baseURL.includes("api.openai.com")) return "openai";
  if (baseURL.includes("generativelanguage.googleapis.com")) return "gemini";
  if (baseURL.includes("api.groq.com")) return "groq";
  if (baseURL.includes("api.anthropic.com")) return "anthropic";
  if (baseURL.includes("openrouter.ai")) return "openrouter";
  return "other";
}

// Parse provider-specific reasoning from message content
function parseReasoningFromMessage(message: any, baseURL?: string): { cleanContent: string; reasoningTexts: string[] } {
  const provider = detectProvider(baseURL);
  const reasoningTexts: string[] = [];
  let cleanContent = message.content || "";

  switch (provider) {
    case "gemini":
      // Gemini returns thinking in <thought> tags
      if (cleanContent.includes("<thought>")) {
        const thoughtMatches = cleanContent.matchAll(/<thought>([\s\S]*?)<\/thought>/g);
        for (const match of thoughtMatches) {
          reasoningTexts.push(match[1].trim());
        }
        // Remove all thought tags from the response
        cleanContent = cleanContent.replace(/<thought>[\s\S]*?<\/thought>/g, "").trim();
      }
      break;

    case "groq":
      // Groq returns reasoning in a separate field when reasoning_format is "parsed"
      if (message.reasoning) {
        reasoningTexts.push(message.reasoning);
      }
      break;

    case "openrouter":
      // OpenRouter returns reasoning in the message.reasoning field
      if (message.reasoning) {
        reasoningTexts.push(message.reasoning);
      }
      break;

    default:
      // Other providers don't embed reasoning in message content
      break;
  }

  return { cleanContent, reasoningTexts };
}

// Adjust request options based on provider-specific requirements
function adjustRequestForProvider(
  requestOptions: any,
  api: "completions" | "responses",
  baseURL?: string,
  supportsReasoning?: boolean,
): any {
  const provider = detectProvider(baseURL);

  // Handle provider-specific adjustments
  switch (provider) {
    case "gemini":
      if (api === "completions" && supportsReasoning && requestOptions.reasoning_effort) {
        // Gemini needs extra_body for thinking content.
        // Can't use both reasoning_effort and thinking_config.
        const budget =
          requestOptions.reasoning_effort === "low"
            ? 1024
            : requestOptions.reasoning_effort === "medium"
              ? 8192
              : 24576;

        requestOptions.extra_body = {
          google: {
            thinking_config: {
              thinking_budget: budget,
              include_thoughts: true,
            },
          },
        };
        // Remove reasoning_effort when using thinking_config
        delete requestOptions.reasoning_effort;
      }
      break;

    case "groq":
      if (api === "responses" && requestOptions.reasoning) {
        // Groq's Responses API doesn't support reasoning.summary
        delete requestOptions.reasoning.summary;
      } else if (api === "completions" && supportsReasoning && requestOptions.reasoning_effort) {
        // Groq Chat Completions uses reasoning_format in addition to reasoning_effort
        requestOptions.reasoning_format = "parsed";
        // Keep reasoning_effort for Groq
      }
      break;

    case "anthropic":
      // Anthropic's OpenAI compatibility layer has its own quirks,
      // but thinking content isn't available through it.
      break;

    case "openrouter":
      // OpenRouter uses a unified reasoning parameter format
      if (api === "completions" && supportsReasoning && requestOptions.reasoning_effort) {
        // Convert reasoning_effort to OpenRouter's reasoning format
        requestOptions.reasoning = {
          effort:
            requestOptions.reasoning_effort === "low"
              ? "low"
              : requestOptions.reasoning_effort === "minimal"
                ? "low"
                : requestOptions.reasoning_effort === "medium"
                  ? "medium"
                  : "high",
        };
        delete requestOptions.reasoning_effort;
      }
      break;

    default:
      // OpenAI and others use the standard format
      break;
  }

  return requestOptions;
}
async function checkReasoningSupport(
  client: OpenAI,
  model: string,
  api: "completions" | "responses",
  baseURL?: string,
  signal?: AbortSignal,
): Promise<boolean> {
  // Check if already aborted
  if (signal?.aborted) {
    throw new Error("Interrupted");
  }

  // Check the cache first
  const cacheKey = model;
  const cached = modelReasoningSupport.get(cacheKey);
  if (cached && cached[api] !== undefined) {
    return cached[api]!;
  }

  let supportsReasoning = false;
  const provider = detectProvider(baseURL);

  if (api === "responses") {
    // Try a minimal request with a reasoning parameter for the Responses API
    try {
      const testRequest: any = {
        model,
        input: "test",
        max_output_tokens: 1024,
        reasoning: {
          effort: "low", // Use low instead of minimal to ensure we get summaries
        },
      };
      await client.responses.create(testRequest, { signal });
      supportsReasoning = true;
    } catch (error) {
      supportsReasoning = false;
    }
  } else {
    // For the Chat Completions API, try with a reasoning parameter
    try {
      const testRequest: any = {
        model,
        messages: [{ role: "user", content: "test" }],
        max_completion_tokens: 1024,
      };

      // Add provider-specific reasoning parameters
      if (provider === "gemini") {
        // Gemini uses extra_body for thinking
        testRequest.extra_body = {
          google: {
            thinking_config: {
              thinking_budget: 100, // Minimum viable budget for the test
              include_thoughts: true,
            },
          },
        };
      } else if (provider === "groq") {
        // Groq uses both reasoning_format and reasoning_effort
        testRequest.reasoning_format = "parsed";
        testRequest.reasoning_effort = "low";
      } else {
        // Others use reasoning_effort
        testRequest.reasoning_effort = "minimal";
      }

      await client.chat.completions.create(testRequest, { signal });
      supportsReasoning = true;
    } catch (error) {
      supportsReasoning = false;
    }
  }

  // Update the cache
  const existing = modelReasoningSupport.get(cacheKey) || {};
  existing[api] = supportsReasoning;
  modelReasoningSupport.set(cacheKey, existing);

  return supportsReasoning;
}
export async function callModelResponsesApi(
  client: OpenAI,
  model: string,
  messages: any[],
  signal?: AbortSignal,
  eventReceiver?: AgentEventReceiver,
  supportsReasoning?: boolean,
  baseURL?: string,
): Promise<void> {
  let conversationDone = false;

  while (!conversationDone) {
    // Check if we've been interrupted
    if (signal?.aborted) {
      throw new Error("Interrupted");
    }

    // Build request options
    let requestOptions: any = {
      model,
      input: messages,
      tools: toolsForResponses as any,
      tool_choice: "auto",
      parallel_tool_calls: true,
      max_output_tokens: 2000, // TODO: make configurable
      ...(supportsReasoning && {
        reasoning: {
          effort: "minimal", // Use minimal effort for the Responses API
          summary: "detailed", // Request detailed reasoning summaries
        },
      }),
    };

    // Apply provider-specific adjustments
    requestOptions = adjustRequestForProvider(requestOptions, "responses", baseURL, supportsReasoning);

    const response = await client.responses.create(requestOptions, { signal });

    // Report token usage if available (Responses API format)
    if (response.usage) {
      const usage = response.usage;
      await eventReceiver?.on({
        type: "token_usage",
        inputTokens: usage.input_tokens || 0,
        outputTokens: usage.output_tokens || 0,
        totalTokens: usage.total_tokens || 0,
        cacheReadTokens: usage.input_tokens_details?.cached_tokens || 0,
        cacheWriteTokens: 0, // Not available in the API
        reasoningTokens: usage.output_tokens_details?.reasoning_tokens || 0,
      });
    }

    const output = response.output;
    if (!output) break;

    for (const item of output) {
      // gpt-oss vLLM quirk: the "type" field must be stripped from "message" items
      // before they go back into the conversation history
      if (item.type === "message") {
        const { type, ...message } = item as any;
        messages.push(message);
      } else {
        messages.push(item);
      }

      switch (item.type) {
        case "reasoning": {
          // Handle both content (o1/o3) and summary (gpt-5) formats
          const reasoningItems = item.content || item.summary || [];
          for (const content of reasoningItems) {
            if (content.type === "reasoning_text" || content.type === "summary_text") {
              await eventReceiver?.on({ type: "reasoning", text: content.text });
            }
          }
          break;
        }

        case "message": {
          for (const content of item.content || []) {
            if (content.type === "output_text") {
              await eventReceiver?.on({ type: "assistant_message", text: content.text });
            } else if (content.type === "refusal") {
              await eventReceiver?.on({ type: "error", message: `Refusal: ${content.refusal}` });
            }
            conversationDone = true;
          }
          break;
        }

        case "function_call": {
          if (signal?.aborted) {
            throw new Error("Interrupted");
          }

          try {
            await eventReceiver?.on({
              type: "tool_call",
              toolCallId: item.call_id || "",
              name: item.name,
              args: item.arguments,
            });
            const result = await executeTool(item.name, item.arguments, signal);
            await eventReceiver?.on({
              type: "tool_result",
              toolCallId: item.call_id || "",
              result,
              isError: false,
            });

            // Add the tool result to the message history
            const toolResultMsg = {
              type: "function_call_output",
              call_id: item.call_id,
              output: result,
            } as ResponseFunctionToolCallOutputItem;
            messages.push(toolResultMsg);
          } catch (e: any) {
            await eventReceiver?.on({
              type: "tool_result",
              toolCallId: item.call_id || "",
              result: e.message,
              isError: true,
            });
            const errorMsg = {
              type: "function_call_output",
              call_id: item.call_id,
              output: e.message,
              isError: true,
            };
            messages.push(errorMsg);
          }
          break;
        }

        default: {
          await eventReceiver?.on({ type: "error", message: `Unknown output type in LLM response: ${item.type}` });
          break;
        }
      }
    }
  }
}
export async function callModelChatCompletionsApi(
  client: OpenAI,
  model: string,
  messages: any[],
  signal?: AbortSignal,
  eventReceiver?: AgentEventReceiver,
  supportsReasoning?: boolean,
  baseURL?: string,
): Promise<void> {
  let assistantResponded = false;

  while (!assistantResponded) {
    if (signal?.aborted) {
      throw new Error("Interrupted");
    }

    // Build request options
    let requestOptions: any = {
      model,
      messages,
      tools: toolsForChat,
      tool_choice: "auto",
      max_completion_tokens: 2000, // TODO: make configurable
      ...(supportsReasoning && {
        reasoning_effort: "low", // Use low effort for the Chat Completions API
      }),
    };

    // Apply provider-specific adjustments
    requestOptions = adjustRequestForProvider(requestOptions, "completions", baseURL, supportsReasoning);

    const response = await client.chat.completions.create(requestOptions, { signal });

    const message = response.choices[0].message;

    // Report token usage if available
    if (response.usage) {
      const usage = response.usage;
      await eventReceiver?.on({
        type: "token_usage",
        inputTokens: usage.prompt_tokens || 0,
        outputTokens: usage.completion_tokens || 0,
        totalTokens: usage.total_tokens || 0,
        cacheReadTokens: usage.prompt_tokens_details?.cached_tokens || 0,
        cacheWriteTokens: 0, // Not available in the API
        reasoningTokens: usage.completion_tokens_details?.reasoning_tokens || 0,
      });
    }

    if (message.tool_calls && message.tool_calls.length > 0) {
      // Add the assistant message with tool calls to the history
      const assistantMsg: any = {
        role: "assistant",
        content: message.content || null,
        tool_calls: message.tool_calls,
      };
      messages.push(assistantMsg);

      // Display and execute each tool call
      for (const toolCall of message.tool_calls) {
        // Check if interrupted before executing the tool
        if (signal?.aborted) {
          throw new Error("Interrupted");
        }

        try {
          const funcName = toolCall.type === "function" ? toolCall.function.name : toolCall.custom.name;
          const funcArgs = toolCall.type === "function" ? toolCall.function.arguments : toolCall.custom.input;

          await eventReceiver?.on({ type: "tool_call", toolCallId: toolCall.id, name: funcName, args: funcArgs });
          const result = await executeTool(funcName, funcArgs, signal);
          await eventReceiver?.on({ type: "tool_result", toolCallId: toolCall.id, result, isError: false });

          // Add the tool result to the message history
          const toolMsg = {
            role: "tool",
            tool_call_id: toolCall.id,
            content: result,
          };
          messages.push(toolMsg);
        } catch (e: any) {
          await eventReceiver?.on({ type: "tool_result", toolCallId: toolCall.id, result: e.message, isError: true });
          const errorMsg = {
            role: "tool",
            tool_call_id: toolCall.id,
            content: e.message,
          };
          messages.push(errorMsg);
        }
      }
    } else if (message.content) {
      // Parse provider-specific reasoning from the message
      const { cleanContent, reasoningTexts } = parseReasoningFromMessage(message, baseURL);

      // Emit reasoning events, if any
      for (const reasoning of reasoningTexts) {
        await eventReceiver?.on({ type: "reasoning", text: reasoning });
      }

      // Emit the cleaned assistant message
      await eventReceiver?.on({ type: "assistant_message", text: cleanContent });
      const finalMsg = { role: "assistant", content: cleanContent };
      messages.push(finalMsg);
      assistantResponded = true;
    }
  }
}
export class Agent {
  private client: OpenAI;
  public readonly config: AgentConfig;
  private messages: any[] = [];
  private renderer?: AgentEventReceiver;
  private sessionManager?: SessionManager;
  private comboReceiver: AgentEventReceiver;
  private abortController: AbortController | null = null;
  private supportsReasoning: boolean | null = null;

  constructor(config: AgentConfig, renderer?: AgentEventReceiver, sessionManager?: SessionManager) {
    this.config = config;
    this.client = new OpenAI({
      apiKey: config.apiKey,
      baseURL: config.baseURL,
    });

    // Use the provided renderer, if any
    this.renderer = renderer;
    this.sessionManager = sessionManager;

    // Fan events out to both the renderer and the session manager
    this.comboReceiver = {
      on: async (event: AgentEvent): Promise<void> => {
        await this.renderer?.on(event);
        await this.sessionManager?.on(event);
      },
    };

    // Initialize with the system prompt if provided
    if (config.systemPrompt) {
      this.messages.push({
        role: "developer",
        content: config.systemPrompt,
      });
    }

    // Start session logging if we have a session manager
    if (sessionManager) {
      sessionManager.startSession(this.config);

      // Emit the session_start event
      this.comboReceiver.on({
        type: "session_start",
        sessionId: sessionManager.getSessionId(),
        model: config.model,
        api: config.api,
        baseURL: config.baseURL,
        systemPrompt: config.systemPrompt,
      });
    }
  }

  async ask(userMessage: string): Promise<void> {
    // Render the user message through the event system
    this.comboReceiver.on({ type: "user_message", text: userMessage });

    // Add the user message to the history
    const userMsg = { role: "user", content: userMessage };
    this.messages.push(userMsg);

    // Create a new AbortController for this chat turn
    this.abortController = new AbortController();

    try {
      await this.comboReceiver.on({ type: "assistant_start" });

      // Check reasoning support only once per agent instance
      if (this.supportsReasoning === null) {
        this.supportsReasoning = await checkReasoningSupport(
          this.client,
          this.config.model,
          this.config.api,
          this.config.baseURL,
          this.abortController.signal,
        );
      }

      if (this.config.api === "responses") {
        await callModelResponsesApi(
          this.client,
          this.config.model,
          this.messages,
          this.abortController.signal,
          this.comboReceiver,
          this.supportsReasoning,
          this.config.baseURL,
        );
      } else {
        await callModelChatCompletionsApi(
          this.client,
          this.config.model,
          this.messages,
          this.abortController.signal,
          this.comboReceiver,
          this.supportsReasoning,
          this.config.baseURL,
        );
      }
    } catch (e) {
      // Check whether this was an interruption via the abort signal
      if (this.abortController.signal.aborted) {
        // Emit an interrupted event so the UI can clean up properly
        await this.comboReceiver?.on({ type: "interrupted" });
        return;
      }
      throw e;
    } finally {
      this.abortController = null;
    }
  }

  interrupt(): void {
    this.abortController?.abort();
  }

  setEvents(events: AgentEvent[]): void {
    // Reconstruct the message history from events, based on the API type
    this.messages = [];

    if (this.config.api === "responses") {
      // Responses API format
      if (this.config.systemPrompt) {
        this.messages.push({
          role: "developer",
          content: this.config.systemPrompt,
        });
      }

      for (const event of events) {
        switch (event.type) {
          case "user_message":
            this.messages.push({
              role: "user",
              content: [{ type: "input_text", text: event.text }],
            });
            break;

          case "reasoning":
            // Add a reasoning item
            this.messages.push({
              type: "reasoning",
              content: [{ type: "reasoning_text", text: event.text }],
            });
            break;

          case "tool_call":
            // Add a function call item
            this.messages.push({
              type: "function_call",
              id: event.toolCallId,
              name: event.name,
              arguments: event.args,
            });
            break;

          case "tool_result":
            // Add a function result item
            this.messages.push({
              type: "function_call_output",
              call_id: event.toolCallId,
              output: event.result,
            });
            break;

          case "assistant_message":
            // Add the final message
            this.messages.push({
              type: "message",
              content: [{ type: "output_text", text: event.text }],
            });
            break;
        }
      }
    } else {
      // Chat Completions API format
      if (this.config.systemPrompt) {
        this.messages.push({ role: "system", content: this.config.systemPrompt });
      }

      // Track tool calls in progress
      let pendingToolCalls: any[] = [];

      for (const event of events) {
        switch (event.type) {
          case "user_message":
            this.messages.push({ role: "user", content: event.text });
            break;

          case "assistant_start":
            // Reset pending tool calls for the new assistant response
            pendingToolCalls = [];
            break;

          case "tool_call":
            // Accumulate tool calls
            pendingToolCalls.push({
              id: event.toolCallId,
              type: "function",
              function: {
                name: event.name,
                arguments: event.args,
              },
            });
            break;

          case "tool_result":
            // On the first tool result, add the assistant message with all tool calls
            if (pendingToolCalls.length > 0) {
              this.messages.push({
                role: "assistant",
                content: null,
                tool_calls: pendingToolCalls,
              });
              pendingToolCalls = [];
            }
            // Add the tool result
            this.messages.push({
              role: "tool",
              tool_call_id: event.toolCallId,
              content: event.result,
            });
            break;

          case "assistant_message":
            // Final assistant response (no tool calls)
            this.messages.push({ role: "assistant", content: event.text });
            break;

          // Skip other event types (reasoning, error, interrupted, token_usage)
        }
      }
    }
  }
}
packages/agent-old/src/index.ts (new file, 15 lines)
@@ -0,0 +1,15 @@
// Main exports for the pi-agent package

export type { AgentConfig, AgentEvent, AgentEventReceiver } from "./agent.js";
export { Agent } from "./agent.js";
export type { ArgDef, ArgDefs, ParsedArgs } from "./args.js";
// CLI utilities
export { parseArgs, printHelp } from "./args.js";
// CLI main function
export { main } from "./main.js";
// Renderers
export { ConsoleRenderer } from "./renderers/console-renderer.js";
export { JsonRenderer } from "./renderers/json-renderer.js";
export { TuiRenderer } from "./renderers/tui-renderer.js";
export type { SessionData, SessionEvent, SessionHeader } from "./session-manager.js";
export { SessionManager } from "./session-manager.js";
packages/agent-old/tsconfig.build.json (new file, 9 lines)

@@ -0,0 +1,9 @@
{
	"extends": "../../tsconfig.base.json",
	"compilerOptions": {
		"outDir": "./dist",
		"rootDir": "./src"
	},
	"include": ["src/**/*"],
	"exclude": ["node_modules", "dist"]
}
@@ -1,38 +1,31 @@
{
	"name": "@mariozechner/pi-agent",
	"version": "0.5.44",
	"description": "General-purpose agent with tool calling and session persistence",
	"description": "General-purpose agent with transport abstraction, state management, and attachment support",
	"type": "module",
	"bin": {
		"pi-agent": "dist/cli.js"
	},
	"main": "./dist/index.js",
	"types": "./dist/index.d.ts",
	"files": [
		"dist"
		"dist",
		"README.md"
	],
	"scripts": {
		"clean": "rm -rf dist",
		"build": "tsc -p tsconfig.build.json && chmod +x dist/cli.js",
		"check": "biome check --write .",
		"build": "tsc -p tsconfig.build.json",
		"dev": "tsc -p tsconfig.build.json --watch --preserveWatchOutput",
		"check": "tsc --noEmit",
		"test": "vitest --run",
		"prepublishOnly": "npm run clean && npm run build"
	},
	"dependencies": {
		"@mariozechner/pi-tui": "^0.5.44",
		"@types/glob": "^8.1.0",
		"chalk": "^5.5.0",
		"glob": "^11.0.3",
		"openai": "^5.12.2"
		"@mariozechner/pi-ai": "^0.5.44"
	},
	"devDependencies": {},
	"keywords": [
		"agent",
		"ai",
		"agent",
		"llm",
		"openai",
		"claude",
		"cli",
		"tui"
		"transport",
		"state-management"
	],
	"author": "Mario Zechner",
	"license": "MIT",

@@ -43,5 +36,10 @@
	},
	"engines": {
		"node": ">=20.0.0"
	},
	"devDependencies": {
		"@types/node": "^24.3.0",
		"typescript": "^5.7.3",
		"vitest": "^3.2.4"
	}
}
@@ -1,741 +1,283 @@
import OpenAI from "openai";
import type { ResponseFunctionToolCallOutputItem } from "openai/resources/responses/responses.mjs";
import type { SessionManager } from "./session-manager.js";
import { executeTool, toolsForChat, toolsForResponses } from "./tools/tools.js";
import type { ImageContent, Message, QueuedMessage, TextContent } from "@mariozechner/pi-ai";
import { getModel } from "@mariozechner/pi-ai";
import type { AgentTransport } from "./transports/types.js";
import type { AgentEvent, AgentState, AppMessage, Attachment, ThinkingLevel } from "./types.js";

export type AgentEvent =
	| { type: "session_start"; sessionId: string; model: string; api: string; baseURL: string; systemPrompt: string }
	| { type: "assistant_start" }
	| { type: "reasoning"; text: string }
	| { type: "tool_call"; toolCallId: string; name: string; args: string }
	| { type: "tool_result"; toolCallId: string; result: string; isError: boolean }
	| { type: "assistant_message"; text: string }
	| { type: "error"; message: string }
	| { type: "user_message"; text: string }
	| { type: "interrupted" }
	| {
			type: "token_usage";
			inputTokens: number;
			outputTokens: number;
			totalTokens: number;
			cacheReadTokens: number;
			cacheWriteTokens: number;
			reasoningTokens: number;
	  };
/**
 * Default message transformer: Keep only LLM-compatible messages, strip app-specific fields.
 * Converts attachments to proper content blocks (images → ImageContent, documents → TextContent).
 */
function defaultMessageTransformer(messages: AppMessage[]): Message[] {
	return messages
		.filter((m) => {
			// Only keep standard LLM message roles
			return m.role === "user" || m.role === "assistant" || m.role === "toolResult";
		})
		.map((m) => {
			if (m.role === "user") {
				const { attachments, ...rest } = m as any;

export interface AgentEventReceiver {
	on(event: AgentEvent): Promise<void>;
}

export interface AgentConfig {
	apiKey: string;
	baseURL: string;
	model: string;
	api: "completions" | "responses";
	systemPrompt: string;
}

export interface ToolCall {
	name: string;
	arguments: string;
	id: string;
}

// Cache for model reasoning support detection per API type
const modelReasoningSupport = new Map<string, { completions?: boolean; responses?: boolean }>();

// Provider detection based on base URL
function detectProvider(baseURL?: string): "openai" | "gemini" | "groq" | "anthropic" | "openrouter" | "other" {
	if (!baseURL) return "openai";
	if (baseURL.includes("api.openai.com")) return "openai";
	if (baseURL.includes("generativelanguage.googleapis.com")) return "gemini";
	if (baseURL.includes("api.groq.com")) return "groq";
	if (baseURL.includes("api.anthropic.com")) return "anthropic";
	if (baseURL.includes("openrouter.ai")) return "openrouter";
	return "other";
}

// Parse provider-specific reasoning from message content
function parseReasoningFromMessage(message: any, baseURL?: string): { cleanContent: string; reasoningTexts: string[] } {
	const provider = detectProvider(baseURL);
	const reasoningTexts: string[] = [];
	let cleanContent = message.content || "";

	switch (provider) {
		case "gemini":
			// Gemini returns thinking in <thought> tags
			if (cleanContent.includes("<thought>")) {
				const thoughtMatches = cleanContent.matchAll(/<thought>([\s\S]*?)<\/thought>/g);
				for (const match of thoughtMatches) {
					reasoningTexts.push(match[1].trim());
				// If no attachments, return as-is
				if (!attachments || attachments.length === 0) {
					return rest as Message;
				}
				// Remove all thought tags from the response
				cleanContent = cleanContent.replace(/<thought>[\s\S]*?<\/thought>/g, "").trim();
			}
			break;

		case "groq":
			// Groq returns reasoning in a separate field when reasoning_format is "parsed"
			if (message.reasoning) {
				reasoningTexts.push(message.reasoning);
			}
			break;
			// Convert attachments to content blocks
			const content = Array.isArray(rest.content) ? [...rest.content] : [{ type: "text", text: rest.content }];

		case "openrouter":
			// OpenRouter returns reasoning in message.reasoning field
			if (message.reasoning) {
				reasoningTexts.push(message.reasoning);
			}
			break;

		default:
			// Other providers don't embed reasoning in message content
			break;
	}

	return { cleanContent, reasoningTexts };
}
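The Gemini branch above recovers reasoning from `<thought>` tags embedded in the content. A self-contained sketch of that extraction, using the same regexes as the diff (the standalone `extractThoughts` helper is hypothetical):

```typescript
// Hypothetical standalone version of the Gemini <thought> handling above.
function extractThoughts(content: string): { cleanContent: string; reasoningTexts: string[] } {
	const reasoningTexts: string[] = [];
	if (content.includes("<thought>")) {
		// Collect every <thought>…</thought> block
		for (const match of content.matchAll(/<thought>([\s\S]*?)<\/thought>/g)) {
			reasoningTexts.push(match[1].trim());
		}
		// Strip the tags from the visible response
		content = content.replace(/<thought>[\s\S]*?<\/thought>/g, "").trim();
	}
	return { cleanContent: content, reasoningTexts };
}
```

The non-greedy `[\s\S]*?` body is what lets multiple thought blocks in one response be captured separately.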

// Adjust request options based on provider-specific requirements
function adjustRequestForProvider(
	requestOptions: any,
	api: "completions" | "responses",
	baseURL?: string,
	supportsReasoning?: boolean,
): any {
	const provider = detectProvider(baseURL);

	// Handle provider-specific adjustments
	switch (provider) {
		case "gemini":
			if (api === "completions" && supportsReasoning && requestOptions.reasoning_effort) {
				// Gemini needs extra_body for thinking content
				// Can't use both reasoning_effort and thinking_config
				const budget =
					requestOptions.reasoning_effort === "low"
						? 1024
						: requestOptions.reasoning_effort === "medium"
							? 8192
							: 24576;

				requestOptions.extra_body = {
					google: {
						thinking_config: {
							thinking_budget: budget,
							include_thoughts: true,
						},
					},
				};
				// Remove reasoning_effort when using thinking_config
				delete requestOptions.reasoning_effort;
			}
			break;

		case "groq":
			if (api === "responses" && requestOptions.reasoning) {
				// Groq responses API doesn't support reasoning.summary
				delete requestOptions.reasoning.summary;
			} else if (api === "completions" && supportsReasoning && requestOptions.reasoning_effort) {
				// Groq Chat Completions uses reasoning_format instead of reasoning_effort alone
				requestOptions.reasoning_format = "parsed";
				// Keep reasoning_effort for Groq
			}
			break;

		case "anthropic":
			// Anthropic's OpenAI compatibility has its own quirks
			// But thinking content isn't available via OpenAI compat layer
			break;

		case "openrouter":
			// OpenRouter uses a unified reasoning parameter format
			if (api === "completions" && supportsReasoning && requestOptions.reasoning_effort) {
				// Convert reasoning_effort to OpenRouter's reasoning format
				requestOptions.reasoning = {
					effort:
						requestOptions.reasoning_effort === "low"
							? "low"
							: requestOptions.reasoning_effort === "minimal"
								? "low"
								: requestOptions.reasoning_effort === "medium"
									? "medium"
									: "high",
				};
				delete requestOptions.reasoning_effort;
			}
			break;

		default:
			// OpenAI and others use standard format
			break;
	}

	return requestOptions;
}
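The Gemini effort-to-budget mapping above is a small pure function; a sketch under the same constants (the standalone `effortToBudget` name is hypothetical):

```typescript
// Hypothetical standalone version of the effort → thinking_budget mapping above.
type Effort = "low" | "medium" | "high";

function effortToBudget(effort: Effort): number {
	// Same token budgets as the Gemini branch in adjustRequestForProvider
	return effort === "low" ? 1024 : effort === "medium" ? 8192 : 24576;
}
```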

async function checkReasoningSupport(
	client: OpenAI,
	model: string,
	api: "completions" | "responses",
	baseURL?: string,
	signal?: AbortSignal,
): Promise<boolean> {
	// Check if already aborted
	if (signal?.aborted) {
		throw new Error("Interrupted");
	}

	// Check cache first
	const cacheKey = model;
	const cached = modelReasoningSupport.get(cacheKey);
	if (cached && cached[api] !== undefined) {
		return cached[api]!;
	}

	let supportsReasoning = false;
	const provider = detectProvider(baseURL);

	if (api === "responses") {
		// Try a minimal request with reasoning parameter for Responses API
		try {
			const testRequest: any = {
				model,
				input: "test",
				max_output_tokens: 1024,
				reasoning: {
					effort: "low", // Use low instead of minimal to ensure we get summaries
				},
			};
			await client.responses.create(testRequest, { signal });
			supportsReasoning = true;
		} catch (error) {
			supportsReasoning = false;
		}
	} else {
		// For Chat Completions API, try with reasoning parameter
		try {
			const testRequest: any = {
				model,
				messages: [{ role: "user", content: "test" }],
				max_completion_tokens: 1024,
			};

			// Add provider-specific reasoning parameters
			if (provider === "gemini") {
				// Gemini uses extra_body for thinking
				testRequest.extra_body = {
					google: {
						thinking_config: {
							thinking_budget: 100, // Minimum viable budget for test
							include_thoughts: true,
						},
					},
				};
			} else if (provider === "groq") {
				// Groq uses both reasoning_format and reasoning_effort
				testRequest.reasoning_format = "parsed";
				testRequest.reasoning_effort = "low";
			} else {
				// Others use reasoning_effort
				testRequest.reasoning_effort = "minimal";
			}

			await client.chat.completions.create(testRequest, { signal });
			supportsReasoning = true;
		} catch (error) {
			supportsReasoning = false;
		}
	}

	// Update cache
	const existing = modelReasoningSupport.get(cacheKey) || {};
	existing[api] = supportsReasoning;
	modelReasoningSupport.set(cacheKey, existing);

	return supportsReasoning;
}

export async function callModelResponsesApi(
	client: OpenAI,
	model: string,
	messages: any[],
	signal?: AbortSignal,
	eventReceiver?: AgentEventReceiver,
	supportsReasoning?: boolean,
	baseURL?: string,
): Promise<void> {
	let conversationDone = false;

	while (!conversationDone) {
		// Check if we've been interrupted
		if (signal?.aborted) {
			throw new Error("Interrupted");
		}

		// Build request options
		let requestOptions: any = {
			model,
			input: messages,
			tools: toolsForResponses as any,
			tool_choice: "auto",
			parallel_tool_calls: true,
			max_output_tokens: 2000, // TODO make configurable
			...(supportsReasoning && {
				reasoning: {
					effort: "minimal", // Use minimal effort for responses API
					summary: "detailed", // Request detailed reasoning summaries
				},
			}),
		};

		// Apply provider-specific adjustments
		requestOptions = adjustRequestForProvider(requestOptions, "responses", baseURL, supportsReasoning);

		const response = await client.responses.create(requestOptions, { signal });

		// Report token usage if available (responses API format)
		if (response.usage) {
			const usage = response.usage;
			eventReceiver?.on({
				type: "token_usage",
				inputTokens: usage.input_tokens || 0,
				outputTokens: usage.output_tokens || 0,
				totalTokens: usage.total_tokens || 0,
				cacheReadTokens: usage.input_tokens_details?.cached_tokens || 0,
				cacheWriteTokens: 0, // Not available in API
				reasoningTokens: usage.output_tokens_details?.reasoning_tokens || 0,
			});
		}

		const output = response.output;
		if (!output) break;

		for (const item of output) {
			// gpt-oss vLLM quirk: need to remove type from "message" events
			if (item.id === "message") {
				const { type, ...message } = item;
				messages.push(message);
			} else {
				messages.push(item);
			}

			switch (item.type) {
				case "reasoning": {
					// Handle both content (o1/o3) and summary (gpt-5) formats
					const reasoningItems = item.content || item.summary || [];
					for (const content of reasoningItems) {
						if (content.type === "reasoning_text" || content.type === "summary_text") {
							await eventReceiver?.on({ type: "reasoning", text: content.text });
						}
					for (const attachment of attachments as Attachment[]) {
						// Add image blocks for image attachments
						if (attachment.type === "image") {
							content.push({
								type: "image",
								data: attachment.content,
								mimeType: attachment.mimeType,
							} as ImageContent);
						}
					break;
				}

				case "message": {
					for (const content of item.content || []) {
						if (content.type === "output_text") {
							await eventReceiver?.on({ type: "assistant_message", text: content.text });
						} else if (content.type === "refusal") {
							await eventReceiver?.on({ type: "error", message: `Refusal: ${content.refusal}` });
						}
					conversationDone = true;
					// Add text blocks for documents with extracted text
					else if (attachment.type === "document" && attachment.extractedText) {
						content.push({
							type: "text",
							text: `\n\n[Document: ${attachment.fileName}]\n${attachment.extractedText}`,
							isDocument: true,
						} as TextContent);
					}
					break;
				}

				case "function_call": {
					if (signal?.aborted) {
						throw new Error("Interrupted");
					}

					try {
						await eventReceiver?.on({
							type: "tool_call",
							toolCallId: item.call_id || "",
							name: item.name,
							args: item.arguments,
						});
						const result = await executeTool(item.name, item.arguments, signal);
						await eventReceiver?.on({
							type: "tool_result",
							toolCallId: item.call_id || "",
							result,
							isError: false,
						});

						// Add tool result to messages
						const toolResultMsg = {
							type: "function_call_output",
							call_id: item.call_id,
							output: result,
						} as ResponseFunctionToolCallOutputItem;
						messages.push(toolResultMsg);
					} catch (e: any) {
						await eventReceiver?.on({
							type: "tool_result",
							toolCallId: item.call_id || "",
							result: e.message,
							isError: true,
						});
						const errorMsg = {
							type: "function_call_output",
							call_id: item.call_id,
							output: e.message,
							isError: true,
						};
						messages.push(errorMsg);
					}
					break;
				}

				default: {
					eventReceiver?.on({ type: "error", message: `Unknown output type in LLM response: ${item.type}` });
					break;
				}
				return { ...rest, content } as Message;
			}
		}
	}
			return m as Message;
		});
}
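The default message transformer woven through this hunk filters app-only roles and lifts attachments into content blocks. A simplified, self-contained sketch of that pipeline (the `AppMessage` and `Attachment` shapes here are reduced stand-ins for the real types in this diff):

```typescript
// Reduced stand-ins for the real AppMessage/Attachment types in this diff.
type Attachment =
	| { type: "image"; content: string; mimeType: string }
	| { type: "document"; fileName: string; extractedText?: string };

interface AppMessage {
	role: string;
	content: string;
	attachments?: Attachment[];
}

// Simplified sketch of defaultMessageTransformer: drop app-only roles,
// convert attachments into extra content blocks on user messages.
function transform(messages: AppMessage[]): Array<{ role: string; content: any }> {
	return messages
		.filter((m) => m.role === "user" || m.role === "assistant" || m.role === "toolResult")
		.map((m) => {
			const { attachments, ...rest } = m;
			if (m.role !== "user" || !attachments?.length) return rest;
			const content: any[] = [{ type: "text", text: rest.content }];
			for (const a of attachments) {
				if (a.type === "image") {
					content.push({ type: "image", data: a.content, mimeType: a.mimeType });
				} else if (a.type === "document" && a.extractedText) {
					content.push({ type: "text", text: `\n\n[Document: ${a.fileName}]\n${a.extractedText}` });
				}
			}
			return { ...rest, content };
		});
}
```

Note that `attachments` is destructured away, so the app-specific field never reaches the LLM transport.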

export async function callModelChatCompletionsApi(
	client: OpenAI,
	model: string,
	messages: any[],
	signal?: AbortSignal,
	eventReceiver?: AgentEventReceiver,
	supportsReasoning?: boolean,
	baseURL?: string,
): Promise<void> {
	let assistantResponded = false;

	while (!assistantResponded) {
		if (signal?.aborted) {
			throw new Error("Interrupted");
		}

		// Build request options
		let requestOptions: any = {
			model,
			messages,
			tools: toolsForChat,
			tool_choice: "auto",
			max_completion_tokens: 2000, // TODO make configurable
			...(supportsReasoning && {
				reasoning_effort: "low", // Use low effort for completions API
			}),
		};

		// Apply provider-specific adjustments
		requestOptions = adjustRequestForProvider(requestOptions, "completions", baseURL, supportsReasoning);

		const response = await client.chat.completions.create(requestOptions, { signal });

		const message = response.choices[0].message;

		// Report token usage if available
		if (response.usage) {
			const usage = response.usage;
			await eventReceiver?.on({
				type: "token_usage",
				inputTokens: usage.prompt_tokens || 0,
				outputTokens: usage.completion_tokens || 0,
				totalTokens: usage.total_tokens || 0,
				cacheReadTokens: usage.prompt_tokens_details?.cached_tokens || 0,
				cacheWriteTokens: 0, // Not available in API
				reasoningTokens: usage.completion_tokens_details?.reasoning_tokens || 0,
			});
		}

		if (message.tool_calls && message.tool_calls.length > 0) {
			// Add assistant message with tool calls to history
			const assistantMsg: any = {
				role: "assistant",
				content: message.content || null,
				tool_calls: message.tool_calls,
			};
			messages.push(assistantMsg);

			// Display and execute each tool call
			for (const toolCall of message.tool_calls) {
				// Check if interrupted before executing tool
				if (signal?.aborted) {
					throw new Error("Interrupted");
				}

				try {
					const funcName = toolCall.type === "function" ? toolCall.function.name : toolCall.custom.name;
					const funcArgs = toolCall.type === "function" ? toolCall.function.arguments : toolCall.custom.input;

					await eventReceiver?.on({ type: "tool_call", toolCallId: toolCall.id, name: funcName, args: funcArgs });
					const result = await executeTool(funcName, funcArgs, signal);
					await eventReceiver?.on({ type: "tool_result", toolCallId: toolCall.id, result, isError: false });

					// Add tool result to messages
					const toolMsg = {
						role: "tool",
						tool_call_id: toolCall.id,
						content: result,
					};
					messages.push(toolMsg);
				} catch (e: any) {
					eventReceiver?.on({ type: "tool_result", toolCallId: toolCall.id, result: e.message, isError: true });
					const errorMsg = {
						role: "tool",
						tool_call_id: toolCall.id,
						content: e.message,
					};
					messages.push(errorMsg);
				}
			}
		} else if (message.content) {
			// Parse provider-specific reasoning from message
			const { cleanContent, reasoningTexts } = parseReasoningFromMessage(message, baseURL);

			// Emit reasoning events if any
			for (const reasoning of reasoningTexts) {
				await eventReceiver?.on({ type: "reasoning", text: reasoning });
			}

			// Emit the cleaned assistant message
			await eventReceiver?.on({ type: "assistant_message", text: cleanContent });
			const finalMsg = { role: "assistant", content: cleanContent };
			messages.push(finalMsg);
			assistantResponded = true;
		}
	}
export interface AgentOptions {
	initialState?: Partial<AgentState>;
	transport: AgentTransport;
	// Transform app messages to LLM-compatible messages before sending to transport
	messageTransformer?: (messages: AppMessage[]) => Message[] | Promise<Message[]>;
}

export class Agent {
	private client: OpenAI;
	public readonly config: AgentConfig;
	private messages: any[] = [];
	private renderer?: AgentEventReceiver;
	private sessionManager?: SessionManager;
	private comboReceiver: AgentEventReceiver;
	private abortController: AbortController | null = null;
	private supportsReasoning: boolean | null = null;
	private _state: AgentState = {
		systemPrompt: "",
		model: getModel("google", "gemini-2.5-flash-lite-preview-06-17"),
		thinkingLevel: "off",
		tools: [],
		messages: [],
		isStreaming: false,
		streamMessage: null,
		pendingToolCalls: new Set<string>(),
		error: undefined,
	};
	private listeners = new Set<(e: AgentEvent) => void>();
	private abortController?: AbortController;
	private transport: AgentTransport;
	private messageTransformer: (messages: AppMessage[]) => Message[] | Promise<Message[]>;
	private messageQueue: Array<QueuedMessage<AppMessage>> = [];

	constructor(config: AgentConfig, renderer?: AgentEventReceiver, sessionManager?: SessionManager) {
		this.config = config;
		this.client = new OpenAI({
			apiKey: config.apiKey,
			baseURL: config.baseURL,
	constructor(opts: AgentOptions) {
		this._state = { ...this._state, ...opts.initialState };
		this.transport = opts.transport;
		this.messageTransformer = opts.messageTransformer || defaultMessageTransformer;
	}

	get state(): AgentState {
		return this._state;
	}

	subscribe(fn: (e: AgentEvent) => void): () => void {
		this.listeners.add(fn);
		fn({ type: "state-update", state: this._state });
		return () => this.listeners.delete(fn);
	}
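`subscribe` replays the current state to a new listener immediately and returns an unsubscribe closure. `patch` and `emit` are not visible in this hunk, so the following store sketch is an assumption: it supposes `patch` shallow-merges partial state and notifies every listener with a `state-update` event:

```typescript
// Minimal sketch of a subscribe/patch store, assuming patch shallow-merges
// partial state and notifies every listener (patch/emit are not shown in the diff).
type StoreEvent<S> = { type: "state-update"; state: S };

class Store<S extends object> {
	private listeners = new Set<(e: StoreEvent<S>) => void>();
	constructor(private _state: S) {}

	get state(): S {
		return this._state;
	}

	subscribe(fn: (e: StoreEvent<S>) => void): () => void {
		this.listeners.add(fn);
		fn({ type: "state-update", state: this._state }); // replay current state immediately
		return () => this.listeners.delete(fn); // unsubscribe closure
	}

	patch(partial: Partial<S>) {
		this._state = { ...this._state, ...partial };
		for (const fn of this.listeners) fn({ type: "state-update", state: this._state });
	}
}
```

The immediate replay means UI consumers render the current state on mount without a separate "get then subscribe" step.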

	// State mutators
	setSystemPrompt(v: string) {
		this.patch({ systemPrompt: v });
	}

	setModel(m: typeof this._state.model) {
		this.patch({ model: m });
	}

	setThinkingLevel(l: ThinkingLevel) {
		this.patch({ thinkingLevel: l });
	}

	setTools(t: typeof this._state.tools) {
		this.patch({ tools: t });
	}

	replaceMessages(ms: AppMessage[]) {
		this.patch({ messages: ms.slice() });
	}

	appendMessage(m: AppMessage) {
		this.patch({ messages: [...this._state.messages, m] });
	}

	async queueMessage(m: AppMessage) {
		// Transform message and queue it for injection at next turn
		const transformed = await this.messageTransformer([m]);
		this.messageQueue.push({
			original: m,
			llm: transformed[0], // undefined if filtered out
		});

		// Use provided renderer or default to console
		this.renderer = renderer;
		this.sessionManager = sessionManager;

		this.comboReceiver = {
			on: async (event: AgentEvent): Promise<void> => {
				await this.renderer?.on(event);
				await this.sessionManager?.on(event);
			},
		};

		// Initialize with system prompt if provided
		if (config.systemPrompt) {
			this.messages.push({
				role: "developer",
				content: config.systemPrompt,
			});
		}

		// Start session logging if we have a session manager
		if (sessionManager) {
			sessionManager.startSession(this.config);

			// Emit session_start event
			this.comboReceiver.on({
				type: "session_start",
				sessionId: sessionManager.getSessionId(),
				model: config.model,
				api: config.api,
				baseURL: config.baseURL,
				systemPrompt: config.systemPrompt,
			});
		}
	}

	async ask(userMessage: string): Promise<void> {
		// Render user message through the event system
		this.comboReceiver.on({ type: "user_message", text: userMessage });

		// Add user message
		const userMsg = { role: "user", content: userMessage };
		this.messages.push(userMsg);

		// Create a new AbortController for this chat session
		this.abortController = new AbortController();

		try {
			await this.comboReceiver.on({ type: "assistant_start" });

			// Check reasoning support only once per agent instance
			if (this.supportsReasoning === null) {
				this.supportsReasoning = await checkReasoningSupport(
					this.client,
					this.config.model,
					this.config.api,
					this.config.baseURL,
					this.abortController.signal,
				);
			}

			if (this.config.api === "responses") {
				await callModelResponsesApi(
					this.client,
					this.config.model,
					this.messages,
					this.abortController.signal,
					this.comboReceiver,
					this.supportsReasoning,
					this.config.baseURL,
				);
			} else {
				await callModelChatCompletionsApi(
					this.client,
					this.config.model,
					this.messages,
					this.abortController.signal,
					this.comboReceiver,
					this.supportsReasoning,
					this.config.baseURL,
				);
			}
		} catch (e) {
			// Check if this was an interruption by checking the abort signal
			if (this.abortController.signal.aborted) {
				// Emit interrupted event so UI can clean up properly
				await this.comboReceiver?.on({ type: "interrupted" });
				return;
			}
			throw e;
		} finally {
			this.abortController = null;
		}
	clearMessages() {
		this.patch({ messages: [] });
	}

	interrupt(): void {
	abort() {
		this.abortController?.abort();
	}

	setEvents(events: AgentEvent[]): void {
		// Reconstruct messages from events based on API type
		this.messages = [];
	async prompt(input: string, attachments?: Attachment[]) {
		const model = this._state.model;
		if (!model) {
			throw new Error("No model configured");
		}

		if (this.config.api === "responses") {
			// Responses API format
			if (this.config.systemPrompt) {
				this.messages.push({
					role: "developer",
					content: this.config.systemPrompt,
				});
			}

			for (const event of events) {
				switch (event.type) {
					case "user_message":
						this.messages.push({
							role: "user",
							content: [{ type: "input_text", text: event.text }],
						});
						break;

					case "reasoning":
						// Add reasoning message
						this.messages.push({
							type: "reasoning",
							content: [{ type: "reasoning_text", text: event.text }],
						});
						break;

					case "tool_call":
						// Add function call
						this.messages.push({
							type: "function_call",
							id: event.toolCallId,
							name: event.name,
							arguments: event.args,
						});
						break;

					case "tool_result":
						// Add function result
						this.messages.push({
							type: "function_call_output",
							call_id: event.toolCallId,
							output: event.result,
						});
						break;

					case "assistant_message":
						// Add final message
						this.messages.push({
							type: "message",
							content: [{ type: "output_text", text: event.text }],
						});
						break;
				}
			}
		} else {
			// Chat Completions API format
			if (this.config.systemPrompt) {
				this.messages.push({ role: "system", content: this.config.systemPrompt });
			}

			// Track tool calls in progress
			let pendingToolCalls: any[] = [];

			for (const event of events) {
				switch (event.type) {
					case "user_message":
						this.messages.push({ role: "user", content: event.text });
						break;

					case "assistant_start":
						// Reset pending tool calls for new assistant response
						pendingToolCalls = [];
						break;

					case "tool_call":
						// Accumulate tool calls
						pendingToolCalls.push({
							id: event.toolCallId,
							type: "function",
							function: {
								name: event.name,
								arguments: event.args,
							},
						});
						break;

					case "tool_result":
						// When we see the first tool result, add the assistant message with all tool calls
						if (pendingToolCalls.length > 0) {
							this.messages.push({
								role: "assistant",
								content: null,
								tool_calls: pendingToolCalls,
							});
							pendingToolCalls = [];
						}
						// Add the tool result
						this.messages.push({
							role: "tool",
							tool_call_id: event.toolCallId,
							content: event.result,
						});
						break;

					case "assistant_message":
						// Final assistant response (no tool calls)
						this.messages.push({ role: "assistant", content: event.text });
						break;

					// Skip other event types (reasoning, error, interrupted, token_usage)
		// Build user message with attachments
		const content: Array<TextContent | ImageContent> = [{ type: "text", text: input }];
		if (attachments?.length) {
			for (const a of attachments) {
				if (a.type === "image") {
					content.push({ type: "image", data: a.content, mimeType: a.mimeType });
				} else if (a.type === "document" && a.extractedText) {
					content.push({
						type: "text",
						text: `\n\n[Document: ${a.fileName}]\n${a.extractedText}`,
						isDocument: true,
					} as TextContent);
				}
			}
		}

		const userMessage: AppMessage = {
			role: "user",
			content,
			attachments: attachments?.length ? attachments : undefined,
		};

		this.abortController = new AbortController();
		this.patch({ isStreaming: true, streamMessage: null, error: undefined });
		this.emit({ type: "started" });

		const reasoning =
			this._state.thinkingLevel === "off"
				? undefined
				: this._state.thinkingLevel === "minimal"
					? "low"
					: this._state.thinkingLevel;
|
||||
const cfg = {
|
||||
systemPrompt: this._state.systemPrompt,
|
||||
tools: this._state.tools,
|
||||
model,
|
||||
reasoning,
|
||||
getQueuedMessages: async <T>() => {
|
||||
// Return queued messages (they'll be added to state via message_end event)
|
||||
const queued = this.messageQueue.slice();
|
||||
this.messageQueue = [];
|
||||
return queued as QueuedMessage<T>[];
|
||||
},
|
||||
};
|
||||
|
||||
try {
|
||||
let partial: Message | null = null;
|
||||
|
||||
// Transform app messages to LLM-compatible messages (initial set)
|
||||
const llmMessages = await this.messageTransformer(this._state.messages);
|
||||
|
||||
for await (const ev of this.transport.run(
|
||||
llmMessages,
|
||||
userMessage as Message,
|
||||
cfg,
|
||||
this.abortController.signal,
|
||||
)) {
|
||||
switch (ev.type) {
|
||||
case "message_start":
|
||||
case "message_update": {
|
||||
partial = ev.message;
|
||||
this.patch({ streamMessage: ev.message });
|
||||
break;
|
||||
}
|
||||
case "message_end": {
|
||||
partial = null;
|
||||
this.appendMessage(ev.message as AppMessage);
|
||||
this.patch({ streamMessage: null });
|
||||
break;
|
||||
}
|
||||
case "tool_execution_start": {
|
||||
const s = new Set(this._state.pendingToolCalls);
|
||||
s.add(ev.toolCallId);
|
||||
this.patch({ pendingToolCalls: s });
|
||||
break;
|
||||
}
|
||||
case "tool_execution_end": {
|
||||
const s = new Set(this._state.pendingToolCalls);
|
||||
s.delete(ev.toolCallId);
|
||||
this.patch({ pendingToolCalls: s });
|
||||
break;
|
||||
}
|
||||
case "agent_end": {
|
||||
this.patch({ streamMessage: null });
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (partial && partial.role === "assistant" && partial.content.length > 0) {
|
||||
const onlyEmpty = !partial.content.some(
|
||||
(c) =>
|
||||
(c.type === "thinking" && c.thinking.trim().length > 0) ||
|
||||
(c.type === "text" && c.text.trim().length > 0) ||
|
||||
(c.type === "toolCall" && c.name.trim().length > 0),
|
||||
);
|
||||
if (!onlyEmpty) {
|
||||
this.appendMessage(partial as AppMessage);
|
||||
} else {
|
||||
if (this.abortController?.signal.aborted) {
|
||||
throw new Error("Request was aborted");
|
||||
}
|
||||
}
|
||||
}
|
||||
} catch (err: any) {
|
||||
const msg: Message = {
|
||||
role: "assistant",
|
||||
content: [{ type: "text", text: "" }],
|
||||
api: model.api,
|
||||
provider: model.provider,
|
||||
model: model.id,
|
||||
usage: {
|
||||
input: 0,
|
||||
output: 0,
|
||||
cacheRead: 0,
|
||||
cacheWrite: 0,
|
||||
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0, total: 0 },
|
||||
},
|
||||
stopReason: this.abortController?.signal.aborted ? "aborted" : "error",
|
||||
errorMessage: err?.message || String(err),
|
||||
};
|
||||
this.appendMessage(msg as AppMessage);
|
||||
this.patch({ error: err?.message || String(err) });
|
||||
} finally {
|
||||
this.patch({ isStreaming: false, streamMessage: null, pendingToolCalls: new Set<string>() });
|
||||
this.abortController = undefined;
|
||||
this.emit({ type: "completed" });
|
||||
}
|
||||
}
|
||||
|
||||
private patch(p: Partial<AgentState>): void {
|
||||
this._state = { ...this._state, ...p };
|
||||
this.emit({ type: "state-update", state: this._state });
|
||||
}
|
||||
|
||||
private emit(e: AgentEvent) {
|
||||
for (const listener of this.listeners) {
|
||||
listener(e);
|
||||
}
|
||||
}
|
||||
}
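The `patch`/`emit` pair above is a minimal observable store: `patch` merges a partial state object into the current state and notifies every subscriber. A stripped-down sketch of the same pattern (the `Store` class and its names here are illustrative, not part of the package API):

```typescript
type Listener<S> = (state: S) => void;

// Minimal observable store mirroring the Agent's private patch()/emit() pair.
class Store<S extends object> {
	private listeners = new Set<Listener<S>>();
	constructor(private _state: S) {}

	get state(): S {
		return this._state;
	}

	subscribe(fn: Listener<S>): () => void {
		this.listeners.add(fn);
		fn(this._state); // emit the current state immediately, like Agent.subscribe
		return () => this.listeners.delete(fn);
	}

	patch(p: Partial<S>): void {
		this._state = { ...this._state, ...p };
		for (const fn of this.listeners) fn(this._state);
	}
}

const store = new Store({ isStreaming: false });
let seen = 0;
const unsub = store.subscribe(() => seen++); // fires once on subscribe
store.patch({ isStreaming: true }); // fires again
unsub();
store.patch({ isStreaming: false }); // no longer observed
```

This immediate-emit-on-subscribe behavior matches the "Initial state update on subscribe" assertion in the test file below.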

@@ -1,15 +1,22 @@
// Main exports for pi-agent package

export type { AgentConfig, AgentEvent, AgentEventReceiver } from "./agent.js";
export { Agent } from "./agent.js";
export type { ArgDef, ArgDefs, ParsedArgs } from "./args.js";
// CLI utilities
export { parseArgs, printHelp } from "./args.js";
// CLI main function
export { main } from "./main.js";
// Renderers
export { ConsoleRenderer } from "./renderers/console-renderer.js";
export { JsonRenderer } from "./renderers/json-renderer.js";
export { TuiRenderer } from "./renderers/tui-renderer.js";
export type { SessionData, SessionEvent, SessionHeader } from "./session-manager.js";
export { SessionManager } from "./session-manager.js";
// Core Agent
export { Agent, type AgentOptions } from "./agent.js";
// Transports
export {
	type AgentRunConfig,
	type AgentTransport,
	AppTransport,
	type AppTransportOptions,
	ProviderTransport,
	type ProviderTransportOptions,
	type ProxyAssistantMessageEvent,
} from "./transports/index.js";
// Types
export type {
	AgentEvent,
	AgentState,
	AppMessage,
	Attachment,
	CustomMessages,
	ThinkingLevel,
	UserMessageWithAttachments,
} from "./types.js";

packages/agent/src/transports/AppTransport.ts (new file, 374 lines)
@@ -0,0 +1,374 @@
import type {
	AgentContext,
	AgentLoopConfig,
	Api,
	AssistantMessage,
	AssistantMessageEvent,
	Context,
	Message,
	Model,
	SimpleStreamOptions,
	ToolCall,
	UserMessage,
} from "@mariozechner/pi-ai";
import { agentLoop } from "@mariozechner/pi-ai";
import { AssistantMessageEventStream } from "@mariozechner/pi-ai/dist/utils/event-stream.js";
import { parseStreamingJson } from "@mariozechner/pi-ai/dist/utils/json-parse.js";
import type { ProxyAssistantMessageEvent } from "./proxy-types.js";
import type { AgentRunConfig, AgentTransport } from "./types.js";

/**
 * Stream function that proxies through a server instead of calling providers directly.
 * The server strips the partial field from delta events to reduce bandwidth.
 * We reconstruct the partial message client-side.
 */
function streamSimpleProxy(
	model: Model<any>,
	context: Context,
	options: SimpleStreamOptions & { authToken: string },
	proxyUrl: string,
): AssistantMessageEventStream {
	const stream = new AssistantMessageEventStream();

	(async () => {
		// Initialize the partial message that we'll build up from events
		const partial: AssistantMessage = {
			role: "assistant",
			stopReason: "stop",
			content: [],
			api: model.api,
			provider: model.provider,
			model: model.id,
			usage: {
				input: 0,
				output: 0,
				cacheRead: 0,
				cacheWrite: 0,
				cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0, total: 0 },
			},
		};

		let reader: ReadableStreamDefaultReader<Uint8Array> | undefined;

		// Set up abort handler to cancel the reader
		const abortHandler = () => {
			if (reader) {
				reader.cancel("Request aborted by user").catch(() => {});
			}
		};

		if (options.signal) {
			options.signal.addEventListener("abort", abortHandler);
		}

		try {
			const response = await fetch(`${proxyUrl}/api/stream`, {
				method: "POST",
				headers: {
					Authorization: `Bearer ${options.authToken}`,
					"Content-Type": "application/json",
				},
				body: JSON.stringify({
					model,
					context,
					options: {
						temperature: options.temperature,
						maxTokens: options.maxTokens,
						reasoning: options.reasoning,
						// Don't send apiKey or signal - those are added server-side
					},
				}),
				signal: options.signal,
			});

			if (!response.ok) {
				let errorMessage = `Proxy error: ${response.status} ${response.statusText}`;
				try {
					const errorData = (await response.json()) as { error?: string };
					if (errorData.error) {
						errorMessage = `Proxy error: ${errorData.error}`;
					}
				} catch {
					// Couldn't parse error response, use default message
				}
				throw new Error(errorMessage);
			}

			// Parse SSE stream
			reader = response.body!.getReader();
			const decoder = new TextDecoder();
			let buffer = "";

			while (true) {
				const { done, value } = await reader.read();
				if (done) break;

				// Check if aborted after reading
				if (options.signal?.aborted) {
					throw new Error("Request aborted by user");
				}

				buffer += decoder.decode(value, { stream: true });
				const lines = buffer.split("\n");
				buffer = lines.pop() || "";

				for (const line of lines) {
					if (line.startsWith("data: ")) {
						const data = line.slice(6).trim();
						if (data) {
							const proxyEvent = JSON.parse(data) as ProxyAssistantMessageEvent;
							let event: AssistantMessageEvent | undefined;

							// Handle different event types
							// Server sends events with partial for non-delta events,
							// and without partial for delta events
							switch (proxyEvent.type) {
								case "start":
									event = { type: "start", partial };
									break;

								case "text_start":
									partial.content[proxyEvent.contentIndex] = {
										type: "text",
										text: "",
									};
									event = { type: "text_start", contentIndex: proxyEvent.contentIndex, partial };
									break;

								case "text_delta": {
									const content = partial.content[proxyEvent.contentIndex];
									if (content?.type === "text") {
										content.text += proxyEvent.delta;
										event = {
											type: "text_delta",
											contentIndex: proxyEvent.contentIndex,
											delta: proxyEvent.delta,
											partial,
										};
									} else {
										throw new Error("Received text_delta for non-text content");
									}
									break;
								}
								case "text_end": {
									const content = partial.content[proxyEvent.contentIndex];
									if (content?.type === "text") {
										content.textSignature = proxyEvent.contentSignature;
										event = {
											type: "text_end",
											contentIndex: proxyEvent.contentIndex,
											content: content.text,
											partial,
										};
									} else {
										throw new Error("Received text_end for non-text content");
									}
									break;
								}

								case "thinking_start":
									partial.content[proxyEvent.contentIndex] = {
										type: "thinking",
										thinking: "",
									};
									event = { type: "thinking_start", contentIndex: proxyEvent.contentIndex, partial };
									break;

								case "thinking_delta": {
									const content = partial.content[proxyEvent.contentIndex];
									if (content?.type === "thinking") {
										content.thinking += proxyEvent.delta;
										event = {
											type: "thinking_delta",
											contentIndex: proxyEvent.contentIndex,
											delta: proxyEvent.delta,
											partial,
										};
									} else {
										throw new Error("Received thinking_delta for non-thinking content");
									}
									break;
								}

								case "thinking_end": {
									const content = partial.content[proxyEvent.contentIndex];
									if (content?.type === "thinking") {
										content.thinkingSignature = proxyEvent.contentSignature;
										event = {
											type: "thinking_end",
											contentIndex: proxyEvent.contentIndex,
											content: content.thinking,
											partial,
										};
									} else {
										throw new Error("Received thinking_end for non-thinking content");
									}
									break;
								}

								case "toolcall_start":
									partial.content[proxyEvent.contentIndex] = {
										type: "toolCall",
										id: proxyEvent.id,
										name: proxyEvent.toolName,
										arguments: {},
										partialJson: "",
									} satisfies ToolCall & { partialJson: string } as ToolCall;
									event = { type: "toolcall_start", contentIndex: proxyEvent.contentIndex, partial };
									break;

								case "toolcall_delta": {
									const content = partial.content[proxyEvent.contentIndex];
									if (content?.type === "toolCall") {
										(content as any).partialJson += proxyEvent.delta;
										content.arguments = parseStreamingJson((content as any).partialJson) || {};
										event = {
											type: "toolcall_delta",
											contentIndex: proxyEvent.contentIndex,
											delta: proxyEvent.delta,
											partial,
										};
										partial.content[proxyEvent.contentIndex] = { ...content }; // Trigger reactivity
									} else {
										throw new Error("Received toolcall_delta for non-toolCall content");
									}
									break;
								}

								case "toolcall_end": {
									const content = partial.content[proxyEvent.contentIndex];
									if (content?.type === "toolCall") {
										delete (content as any).partialJson;
										event = {
											type: "toolcall_end",
											contentIndex: proxyEvent.contentIndex,
											toolCall: content,
											partial,
										};
									}
									break;
								}

								case "done":
									partial.stopReason = proxyEvent.reason;
									partial.usage = proxyEvent.usage;
									event = { type: "done", reason: proxyEvent.reason, message: partial };
									break;

								case "error":
									partial.stopReason = proxyEvent.reason;
									partial.errorMessage = proxyEvent.errorMessage;
									partial.usage = proxyEvent.usage;
									event = { type: "error", reason: proxyEvent.reason, error: partial };
									break;

								default: {
									// Exhaustive check
									const _exhaustiveCheck: never = proxyEvent;
									console.warn(`Unhandled event type: ${(proxyEvent as any).type}`);
									break;
								}
							}

							// Push the event to stream
							if (event) {
								stream.push(event);
							} else {
								throw new Error("Failed to create event from proxy event");
							}
						}
					}
				}
			}

			// Check if aborted after reading
			if (options.signal?.aborted) {
				throw new Error("Request aborted by user");
			}

			stream.end();
		} catch (error) {
			const errorMessage = error instanceof Error ? error.message : String(error);
			partial.stopReason = options.signal?.aborted ? "aborted" : "error";
			partial.errorMessage = errorMessage;
			stream.push({
				type: "error",
				reason: partial.stopReason,
				error: partial,
			} satisfies AssistantMessageEvent);
			stream.end();
		} finally {
			// Clean up abort handler
			if (options.signal) {
				options.signal.removeEventListener("abort", abortHandler);
			}
		}
	})();

	return stream;
}

export interface AppTransportOptions {
	/**
	 * Proxy server URL. The server manages user accounts and proxies requests to LLM providers.
	 * Example: "https://genai.mariozechner.at"
	 */
	proxyUrl: string;

	/**
	 * Function to retrieve auth token for the proxy server.
	 * The token is used for user authentication and authorization.
	 */
	getAuthToken: () => Promise<string> | string;
}

/**
 * Transport that uses an app server with user authentication tokens.
 * The server manages user accounts and proxies requests to LLM providers.
 */
export class AppTransport implements AgentTransport {
	private options: AppTransportOptions;

	constructor(options: AppTransportOptions) {
		this.options = options;
	}

	async *run(messages: Message[], userMessage: Message, cfg: AgentRunConfig, signal?: AbortSignal) {
		const authToken = await this.options.getAuthToken();
		if (!authToken) {
			throw new Error("Auth token is required for AppTransport");
		}

		// Use proxy - no local API key needed
		const streamFn = <TApi extends Api>(model: Model<TApi>, context: Context, options?: SimpleStreamOptions) => {
			return streamSimpleProxy(
				model,
				context,
				{
					...options,
					authToken,
				},
				this.options.proxyUrl,
			);
		};

		// Messages are already LLM-compatible (filtered by Agent)
		const context: AgentContext = {
			systemPrompt: cfg.systemPrompt,
			messages,
			tools: cfg.tools,
		};

		const pc: AgentLoopConfig = {
			model: cfg.model,
			reasoning: cfg.reasoning,
			getQueuedMessages: cfg.getQueuedMessages,
		};

		// Yield events from the upstream agentLoop iterator
		// Pass streamFn as the 5th parameter to use proxy
		for await (const ev of agentLoop(userMessage as unknown as UserMessage, context, pc, signal, streamFn as any)) {
			yield ev;
		}
	}
}
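The reader loop above splits the decoded byte stream on newlines and keeps the trailing partial line in `buffer`, since a network chunk boundary can fall in the middle of an SSE event. That buffering logic in isolation (a sketch, not an exported API of the package):

```typescript
// Feed arbitrary chunks; complete "data: ..." lines are emitted, partial lines wait
// in the buffer until the rest of the line arrives in a later chunk.
function makeSseLineParser(onData: (json: string) => void): (chunk: string) => void {
	let buffer = "";
	return (chunk: string) => {
		buffer += chunk;
		const lines = buffer.split("\n");
		buffer = lines.pop() || ""; // keep the incomplete trailing line
		for (const line of lines) {
			if (line.startsWith("data: ")) {
				const data = line.slice(6).trim();
				if (data) onData(data);
			}
		}
	};
}

const events: string[] = [];
const push = makeSseLineParser((d) => events.push(d));
push('data: {"type":"st'); // split mid-event: nothing emitted yet
push('art"}\ndata: {"type":"done"}\n'); // completes the first event, adds a second
```

Without the `lines.pop()` step, a chunk ending mid-line would emit truncated JSON and `JSON.parse` in the transport would throw.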

packages/agent/src/transports/ProviderTransport.ts (new file, 75 lines)
@@ -0,0 +1,75 @@
import {
	type AgentContext,
	type AgentLoopConfig,
	agentLoop,
	type Message,
	type UserMessage,
} from "@mariozechner/pi-ai";
import type { AgentRunConfig, AgentTransport } from "./types.js";

export interface ProviderTransportOptions {
	/**
	 * Function to retrieve API key for a given provider.
	 * If not provided, transport will try to use environment variables.
	 */
	getApiKey?: (provider: string) => Promise<string | undefined> | string | undefined;

	/**
	 * Optional CORS proxy URL for browser environments.
	 * If provided, all requests will be routed through this proxy.
	 * Format: "https://proxy.example.com"
	 */
	corsProxyUrl?: string;
}

/**
 * Transport that calls LLM providers directly.
 * Optionally routes calls through a CORS proxy if configured.
 */
export class ProviderTransport implements AgentTransport {
	private options: ProviderTransportOptions;

	constructor(options: ProviderTransportOptions = {}) {
		this.options = options;
	}

	async *run(messages: Message[], userMessage: Message, cfg: AgentRunConfig, signal?: AbortSignal) {
		// Get API key
		let apiKey: string | undefined;
		if (this.options.getApiKey) {
			apiKey = await this.options.getApiKey(cfg.model.provider);
		}

		if (!apiKey) {
			throw new Error(`No API key found for provider: ${cfg.model.provider}`);
		}

		// Clone model and modify baseUrl if CORS proxy is enabled
		let model = cfg.model;
		if (this.options.corsProxyUrl && cfg.model.baseUrl) {
			model = {
				...cfg.model,
				baseUrl: `${this.options.corsProxyUrl}/?url=${encodeURIComponent(cfg.model.baseUrl)}`,
			};
		}

		// Messages are already LLM-compatible (filtered by Agent)
		const context: AgentContext = {
			systemPrompt: cfg.systemPrompt,
			messages,
			tools: cfg.tools,
		};

		const pc: AgentLoopConfig = {
			model,
			reasoning: cfg.reasoning,
			apiKey,
			getQueuedMessages: cfg.getQueuedMessages,
		};

		// Yield events from agentLoop
		for await (const ev of agentLoop(userMessage as unknown as UserMessage, context, pc, signal)) {
			yield ev;
		}
	}
}
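The CORS rewrite above prepends the proxy origin and passes the original provider base URL as a URL-encoded query parameter, so the proxy can forward the request server-side. The same rewrite in isolation, with hypothetical URLs:

```typescript
// Rewrites a provider baseUrl to route through a CORS proxy,
// mirroring what ProviderTransport does when corsProxyUrl is set.
function rewriteBaseUrl(corsProxyUrl: string, baseUrl: string): string {
	return `${corsProxyUrl}/?url=${encodeURIComponent(baseUrl)}`;
}

// Hypothetical example values, not endpoints the package prescribes:
const proxied = rewriteBaseUrl("https://proxy.example.com", "https://api.anthropic.com/v1");
// proxied === "https://proxy.example.com/?url=https%3A%2F%2Fapi.anthropic.com%2Fv1"
```

`encodeURIComponent` is required here: without it the `://` and `/` in the target URL would be parsed as part of the proxy URL's own path.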

packages/agent/src/transports/index.ts (new file, 4 lines)
@@ -0,0 +1,4 @@
export { AppTransport, type AppTransportOptions } from "./AppTransport.js";
export { ProviderTransport, type ProviderTransportOptions } from "./ProviderTransport.js";
export type { ProxyAssistantMessageEvent } from "./proxy-types.js";
export type { AgentRunConfig, AgentTransport } from "./types.js";

packages/agent/src/transports/proxy-types.ts (new file, 20 lines)
@@ -0,0 +1,20 @@
import type { StopReason, Usage } from "@mariozechner/pi-ai";

/**
 * Event types emitted by the proxy server.
 * The server strips the `partial` field from delta events to reduce bandwidth.
 * Clients reconstruct the partial message from these events.
 */
export type ProxyAssistantMessageEvent =
	| { type: "start" }
	| { type: "text_start"; contentIndex: number }
	| { type: "text_delta"; contentIndex: number; delta: string }
	| { type: "text_end"; contentIndex: number; contentSignature?: string }
	| { type: "thinking_start"; contentIndex: number }
	| { type: "thinking_delta"; contentIndex: number; delta: string }
	| { type: "thinking_end"; contentIndex: number; contentSignature?: string }
	| { type: "toolcall_start"; contentIndex: number; id: string; toolName: string }
	| { type: "toolcall_delta"; contentIndex: number; delta: string }
	| { type: "toolcall_end"; contentIndex: number }
	| { type: "done"; reason: Extract<StopReason, "stop" | "length" | "toolUse">; usage: Usage }
	| { type: "error"; reason: Extract<StopReason, "aborted" | "error">; errorMessage: string; usage: Usage };
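Because the server omits the `partial` message from these events, the client rebuilds it by appending each delta at its `contentIndex`, exactly as `streamSimpleProxy` does for text, thinking, and tool-call blocks. A reduced sketch over text events only (the slim event type here is a local stand-in for the subset of `ProxyAssistantMessageEvent` used):

```typescript
type SlimEvent =
	| { type: "text_start"; contentIndex: number }
	| { type: "text_delta"; contentIndex: number; delta: string };

// Rebuild text content blocks from bandwidth-reduced proxy events.
function reconstruct(events: SlimEvent[]): string[] {
	const content: { text: string }[] = [];
	for (const ev of events) {
		if (ev.type === "text_start") {
			content[ev.contentIndex] = { text: "" }; // open a new block at this index
		} else {
			content[ev.contentIndex].text += ev.delta; // append to the open block
		}
	}
	return content.map((c) => c.text);
}

const texts = reconstruct([
	{ type: "text_start", contentIndex: 0 },
	{ type: "text_delta", contentIndex: 0, delta: "Hel" },
	{ type: "text_delta", contentIndex: 0, delta: "lo" },
]);
```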

packages/agent/src/transports/types.ts (new file, 28 lines)
@@ -0,0 +1,28 @@
import type { AgentEvent, AgentTool, Message, Model, QueuedMessage } from "@mariozechner/pi-ai";

/**
 * The minimal configuration needed to run an agent turn.
 */
export interface AgentRunConfig {
	systemPrompt: string;
	tools: AgentTool<any>[];
	model: Model<any>;
	reasoning?: "low" | "medium" | "high";
	getQueuedMessages?: <T>() => Promise<QueuedMessage<T>[]>;
}

/**
 * Transport interface for executing agent turns.
 * Transports handle the communication with LLM providers,
 * abstracting away the details of API calls, proxies, etc.
 *
 * Events yielded must match the @mariozechner/pi-ai AgentEvent types.
 */
export interface AgentTransport {
	run(
		messages: Message[],
		userMessage: Message,
		config: AgentRunConfig,
		signal?: AbortSignal,
	): AsyncIterable<AgentEvent>;
}
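Any object whose `run` is an async generator with this shape satisfies the interface, which makes transports easy to fake in tests without touching a provider. A self-contained sketch with local stand-in types (the real `Message` and `AgentEvent` come from `@mariozechner/pi-ai`; `EchoTransport` is illustrative, not part of the package):

```typescript
// Local stand-ins for the imported types, just enough to show the shape.
type Message = { role: string; content: unknown };
type AgentEvent = { type: string; message?: Message };

interface AgentTransport {
	run(messages: Message[], userMessage: Message, config: object, signal?: AbortSignal): AsyncIterable<AgentEvent>;
}

// A fake transport that echoes the user message back as a single assistant turn.
class EchoTransport implements AgentTransport {
	async *run(_messages: Message[], userMessage: Message): AsyncIterable<AgentEvent> {
		const reply: Message = { role: "assistant", content: userMessage.content };
		yield { type: "message_start", message: reply };
		yield { type: "message_end", message: reply };
	}
}

const transport: AgentTransport = new EchoTransport();

async function collect(): Promise<string[]> {
	const types: string[] = [];
	for await (const ev of transport.run([], { role: "user", content: "hi" }, {})) {
		types.push(ev.type);
	}
	return types;
}
```

Consumers drive the generator with `for await`, so aborting is just a matter of checking the passed `signal` between yields.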

packages/agent/src/types.ts (new file, 75 lines)
@@ -0,0 +1,75 @@
import type { AgentTool, AssistantMessage, Message, Model, UserMessage } from "@mariozechner/pi-ai";

/**
 * Attachment type definition.
 * Processing is done by consumers (e.g., document extraction in web-ui).
 */
export interface Attachment {
	id: string;
	type: "image" | "document";
	fileName: string;
	mimeType: string;
	size: number;
	content: string; // base64 encoded (without data URL prefix)
	extractedText?: string; // For documents
	preview?: string; // base64 image preview
}

/**
 * Thinking/reasoning level for models that support it.
 */
export type ThinkingLevel = "off" | "minimal" | "low" | "medium" | "high";

/**
 * User message with optional attachments.
 */
export type UserMessageWithAttachments = UserMessage & { attachments?: Attachment[] };

/**
 * Extensible interface for custom app messages.
 * Apps can extend via declaration merging:
 *
 * @example
 * ```typescript
 * declare module "@mariozechner/agent" {
 *   interface CustomMessages {
 *     artifact: ArtifactMessage;
 *     notification: NotificationMessage;
 *   }
 * }
 * ```
 */
export interface CustomMessages {
	// Empty by default - apps extend via declaration merging
}

/**
 * AppMessage: Union of LLM messages + attachments + custom messages.
 * This abstraction allows apps to add custom message types while maintaining
 * type safety and compatibility with the base LLM messages.
 */
export type AppMessage =
	| AssistantMessage
	| UserMessageWithAttachments
	| Message // Includes ToolResultMessage
	| CustomMessages[keyof CustomMessages];

/**
 * Agent state containing all configuration and conversation data.
 */
export interface AgentState {
	systemPrompt: string;
	model: Model<any>;
	thinkingLevel: ThinkingLevel;
	tools: AgentTool<any>[];
	messages: AppMessage[]; // Can include attachments + custom message types
	isStreaming: boolean;
	streamMessage: Message | null;
	pendingToolCalls: Set<string>;
	error?: string;
}

/**
 * Events emitted by the Agent for UI updates.
 */
export type AgentEvent = { type: "state-update"; state: AgentState } | { type: "started" } | { type: "completed" };
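Since `AppMessage` can include app-only message types, the history must be filtered down to roles the provider understands before a transport sees it; the transports' "Messages are already LLM-compatible (filtered by Agent)" comments assume this has happened. A sketch of such a filter, with local stand-in types and a hypothetical `"artifact"` custom role:

```typescript
type LlmMessage = { role: "user" | "assistant" | "toolResult"; content: unknown };
// "artifact" stands in for an app-defined custom message type added via declaration merging.
type AppMessage = LlmMessage | { role: "artifact"; data: unknown };

const LLM_ROLES = new Set(["user", "assistant", "toolResult"]);

// Drop custom message types before handing the history to a transport.
function toLlmMessages(messages: AppMessage[]): LlmMessage[] {
	return messages.filter((m): m is LlmMessage => LLM_ROLES.has(m.role));
}

const history: AppMessage[] = [
	{ role: "user", content: "hi" },
	{ role: "artifact", data: { id: 1 } }, // UI-only, never sent to the provider
	{ role: "assistant", content: [] },
];
const llm = toLlmMessages(history);
```

The Agent's `messageTransformer` hook generalizes this: apps can also rewrite or summarize messages, not just drop them.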

packages/agent/test/agent.test.ts (new file, 140 lines)
@@ -0,0 +1,140 @@
import { getModel } from "@mariozechner/pi-ai";
import { describe, expect, it } from "vitest";
import { Agent, ProviderTransport } from "../src/index.js";

describe("Agent", () => {
	it("should create an agent instance with default state", () => {
		const agent = new Agent({
			transport: new ProviderTransport(),
		});

		expect(agent.state).toBeDefined();
		expect(agent.state.systemPrompt).toBe("");
		expect(agent.state.model).toBeDefined();
		expect(agent.state.thinkingLevel).toBe("off");
		expect(agent.state.tools).toEqual([]);
		expect(agent.state.messages).toEqual([]);
		expect(agent.state.isStreaming).toBe(false);
		expect(agent.state.streamMessage).toBe(null);
		expect(agent.state.pendingToolCalls).toEqual(new Set());
		expect(agent.state.error).toBeUndefined();
	});

	it("should create an agent instance with custom initial state", () => {
		const customModel = getModel("openai", "gpt-4o-mini");
		const agent = new Agent({
			transport: new ProviderTransport(),
			initialState: {
				systemPrompt: "You are a helpful assistant.",
				model: customModel,
				thinkingLevel: "low",
			},
		});

		expect(agent.state.systemPrompt).toBe("You are a helpful assistant.");
		expect(agent.state.model).toBe(customModel);
		expect(agent.state.thinkingLevel).toBe("low");
	});

	it("should subscribe to state updates", () => {
		const agent = new Agent({
			transport: new ProviderTransport(),
		});

		let updateCount = 0;
		const unsubscribe = agent.subscribe((event) => {
			if (event.type === "state-update") {
				updateCount++;
			}
		});

		// Initial state update on subscribe
		expect(updateCount).toBe(1);

		// Update state
		agent.setSystemPrompt("Test prompt");
		expect(updateCount).toBe(2);
		expect(agent.state.systemPrompt).toBe("Test prompt");

		// Unsubscribe should work
		unsubscribe();
		agent.setSystemPrompt("Another prompt");
		expect(updateCount).toBe(2); // Should not increase
	});

	it("should update state with mutators", () => {
		const agent = new Agent({
			transport: new ProviderTransport(),
		});

		// Test setSystemPrompt
		agent.setSystemPrompt("Custom prompt");
		expect(agent.state.systemPrompt).toBe("Custom prompt");

		// Test setModel
		const newModel = getModel("google", "gemini-2.5-flash");
		agent.setModel(newModel);
		expect(agent.state.model).toBe(newModel);

		// Test setThinkingLevel
		agent.setThinkingLevel("high");
		expect(agent.state.thinkingLevel).toBe("high");

		// Test setTools
		const tools = [{ name: "test", description: "test tool" } as any];
		agent.setTools(tools);
		expect(agent.state.tools).toBe(tools);

		// Test replaceMessages
		const messages = [{ role: "user" as const, content: "Hello" }];
		agent.replaceMessages(messages);
		expect(agent.state.messages).toEqual(messages);
		expect(agent.state.messages).not.toBe(messages); // Should be a copy

		// Test appendMessage
		const newMessage = { role: "assistant" as const, content: [{ type: "text" as const, text: "Hi" }] };
		agent.appendMessage(newMessage as any);
		expect(agent.state.messages).toHaveLength(2);
		expect(agent.state.messages[1]).toBe(newMessage);

		// Test clearMessages
		agent.clearMessages();
		expect(agent.state.messages).toEqual([]);
	});

	it("should support message queueing", async () => {
		const agent = new Agent({
			transport: new ProviderTransport(),
		});

		const message = { role: "user" as const, content: "Queued message" };
		await agent.queueMessage(message);

		// The message is queued but not yet in state.messages
		expect(agent.state.messages).not.toContainEqual(message);
	});

	it("should handle abort controller", () => {
		const agent = new Agent({
			transport: new ProviderTransport(),
		});

		// Should not throw even if nothing is running
		expect(() => agent.abort()).not.toThrow();
	});
});

describe("ProviderTransport", () => {
	it("should create a provider transport instance", () => {
		const transport = new ProviderTransport();
		expect(transport).toBeDefined();
	});

	it("should create a provider transport with options", () => {
		const transport = new ProviderTransport({
			getApiKey: async (provider) => `test-key-${provider}`,
			corsProxyUrl: "https://proxy.example.com",
		});
		expect(transport).toBeDefined();
	});
});

packages/agent/test/e2e.test.ts (new file, 402 lines)
@@ -0,0 +1,402 @@
import type { Model } from "@mariozechner/pi-ai";
import { calculateTool, getModel } from "@mariozechner/pi-ai";
import { describe, expect, it } from "vitest";
import { Agent, ProviderTransport } from "../src/index.js";

async function basicPrompt(model: Model<any>) {
	const agent = new Agent({
		initialState: {
			systemPrompt: "You are a helpful assistant. Keep your responses concise.",
			model,
			thinkingLevel: "off",
			tools: [],
		},
		transport: new ProviderTransport({
			getApiKey: async (provider) => {
				// Map provider names to env var names
				const envVarMap: Record<string, string> = {
					google: "GEMINI_API_KEY",
					openai: "OPENAI_API_KEY",
					anthropic: "ANTHROPIC_API_KEY",
					xai: "XAI_API_KEY",
					groq: "GROQ_API_KEY",
					cerebras: "CEREBRAS_API_KEY",
					zai: "ZAI_API_KEY",
				};
				const envVar = envVarMap[provider] || `${provider.toUpperCase()}_API_KEY`;
				return process.env[envVar];
			},
		}),
	});

	await agent.prompt("What is 2+2? Answer with just the number.");

	expect(agent.state.isStreaming).toBe(false);
	expect(agent.state.messages.length).toBe(2);
	expect(agent.state.messages[0].role).toBe("user");
	expect(agent.state.messages[1].role).toBe("assistant");

	const assistantMessage = agent.state.messages[1];
	if (assistantMessage.role !== "assistant") throw new Error("Expected assistant message");
	expect(assistantMessage.content.length).toBeGreaterThan(0);

	const textContent = assistantMessage.content.find((c) => c.type === "text");
	expect(textContent).toBeDefined();
	if (textContent?.type !== "text") throw new Error("Expected text content");
	expect(textContent.text).toContain("4");
}

async function toolExecution(model: Model<any>) {
	const agent = new Agent({
		initialState: {
			systemPrompt: "You are a helpful assistant. Always use the calculator tool for math.",
			model,
			thinkingLevel: "off",
			tools: [calculateTool],
		},
		transport: new ProviderTransport({
			getApiKey: async (provider) => {
				// Map provider names to env var names
				const envVarMap: Record<string, string> = {
					google: "GEMINI_API_KEY",
					openai: "OPENAI_API_KEY",
					anthropic: "ANTHROPIC_API_KEY",
					xai: "XAI_API_KEY",
					groq: "GROQ_API_KEY",
					cerebras: "CEREBRAS_API_KEY",
					zai: "ZAI_API_KEY",
				};
				const envVar = envVarMap[provider] || `${provider.toUpperCase()}_API_KEY`;
				return process.env[envVar];
			},
		}),
	});

	await agent.prompt("Calculate 123 * 456 using the calculator tool.");

	expect(agent.state.isStreaming).toBe(false);
	expect(agent.state.messages.length).toBeGreaterThanOrEqual(3);

	const toolResultMsg = agent.state.messages.find((m) => m.role === "toolResult");
	expect(toolResultMsg).toBeDefined();
	if (toolResultMsg?.role !== "toolResult") throw new Error("Expected tool result message");
	expect(toolResultMsg.output).toBeDefined();

	const expectedResult = 123 * 456;
	expect(toolResultMsg.output).toContain(String(expectedResult));

	const finalMessage = agent.state.messages[agent.state.messages.length - 1];
	if (finalMessage.role !== "assistant") throw new Error("Expected final assistant message");
	const finalText = finalMessage.content.find((c) => c.type === "text");
	expect(finalText).toBeDefined();
	if (finalText?.type !== "text") throw new Error("Expected text content");
	// Check for number with or without comma formatting
	const hasNumber =
		finalText.text.includes(String(expectedResult)) ||
|
||||
finalText.text.includes("56,088") ||
|
||||
finalText.text.includes("56088");
|
||||
expect(hasNumber).toBe(true);
|
||||
}
|
||||
|
||||
async function abortExecution(model: Model<any>) {
|
||||
const agent = new Agent({
|
||||
initialState: {
|
||||
systemPrompt: "You are a helpful assistant.",
|
||||
model,
|
||||
thinkingLevel: "off",
|
||||
tools: [calculateTool],
|
||||
},
|
||||
transport: new ProviderTransport({
|
||||
getApiKey: async (provider) => {
|
||||
// Map provider names to env var names
|
||||
const envVarMap: Record<string, string> = {
|
||||
google: "GEMINI_API_KEY",
|
||||
openai: "OPENAI_API_KEY",
|
||||
anthropic: "ANTHROPIC_API_KEY",
|
||||
xai: "XAI_API_KEY",
|
||||
groq: "GROQ_API_KEY",
|
||||
cerebras: "CEREBRAS_API_KEY",
|
||||
zai: "ZAI_API_KEY",
|
||||
};
|
||||
const envVar = envVarMap[provider] || `${provider.toUpperCase()}_API_KEY`;
|
||||
return process.env[envVar];
|
||||
},
|
||||
}),
|
||||
});
|
||||
|
||||
const promptPromise = agent.prompt("Calculate 100 * 200, then 300 * 400, then sum the results.");
|
||||
|
||||
setTimeout(() => {
|
||||
agent.abort();
|
||||
}, 100);
|
||||
|
||||
await promptPromise;
|
||||
|
||||
expect(agent.state.isStreaming).toBe(false);
|
||||
expect(agent.state.messages.length).toBeGreaterThanOrEqual(2);
|
||||
|
||||
const lastMessage = agent.state.messages[agent.state.messages.length - 1];
|
||||
if (lastMessage.role !== "assistant") throw new Error("Expected assistant message");
|
||||
expect(lastMessage.stopReason).toBe("aborted");
|
||||
expect(lastMessage.errorMessage).toBeDefined();
|
||||
}
|
||||
|
||||
async function stateUpdates(model: Model<any>) {
|
||||
const agent = new Agent({
|
||||
initialState: {
|
||||
systemPrompt: "You are a helpful assistant.",
|
||||
model,
|
||||
thinkingLevel: "off",
|
||||
tools: [],
|
||||
},
|
||||
transport: new ProviderTransport({
|
||||
getApiKey: async (provider) => {
|
||||
// Map provider names to env var names
|
||||
const envVarMap: Record<string, string> = {
|
||||
google: "GEMINI_API_KEY",
|
||||
openai: "OPENAI_API_KEY",
|
||||
anthropic: "ANTHROPIC_API_KEY",
|
||||
xai: "XAI_API_KEY",
|
||||
groq: "GROQ_API_KEY",
|
||||
cerebras: "CEREBRAS_API_KEY",
|
||||
zai: "ZAI_API_KEY",
|
||||
};
|
||||
const envVar = envVarMap[provider] || `${provider.toUpperCase()}_API_KEY`;
|
||||
return process.env[envVar];
|
||||
},
|
||||
}),
|
||||
});
|
||||
|
||||
const stateSnapshots: Array<{ isStreaming: boolean; messageCount: number; hasStreamMessage: boolean }> = [];
|
||||
|
||||
agent.subscribe((event) => {
|
||||
if (event.type === "state-update") {
|
||||
stateSnapshots.push({
|
||||
isStreaming: event.state.isStreaming,
|
||||
messageCount: event.state.messages.length,
|
||||
hasStreamMessage: event.state.streamMessage !== null,
|
||||
});
|
||||
}
|
||||
});
|
||||
|
||||
await agent.prompt("Count from 1 to 5.");
|
||||
|
||||
const streamingStates = stateSnapshots.filter((s) => s.isStreaming);
|
||||
const nonStreamingStates = stateSnapshots.filter((s) => !s.isStreaming);
|
||||
|
||||
expect(streamingStates.length).toBeGreaterThan(0);
|
||||
expect(nonStreamingStates.length).toBeGreaterThan(0);
|
||||
|
||||
const finalState = stateSnapshots[stateSnapshots.length - 1];
|
||||
expect(finalState.isStreaming).toBe(false);
|
||||
expect(finalState.messageCount).toBe(2);
|
||||
}
|
||||
|
||||
async function multiTurnConversation(model: Model<any>) {
|
||||
const agent = new Agent({
|
||||
initialState: {
|
||||
systemPrompt: "You are a helpful assistant.",
|
||||
model,
|
||||
thinkingLevel: "off",
|
||||
tools: [],
|
||||
},
|
||||
transport: new ProviderTransport({
|
||||
getApiKey: async (provider) => {
|
||||
// Map provider names to env var names
|
||||
const envVarMap: Record<string, string> = {
|
||||
google: "GEMINI_API_KEY",
|
||||
openai: "OPENAI_API_KEY",
|
||||
anthropic: "ANTHROPIC_API_KEY",
|
||||
xai: "XAI_API_KEY",
|
||||
groq: "GROQ_API_KEY",
|
||||
cerebras: "CEREBRAS_API_KEY",
|
||||
zai: "ZAI_API_KEY",
|
||||
};
|
||||
const envVar = envVarMap[provider] || `${provider.toUpperCase()}_API_KEY`;
|
||||
return process.env[envVar];
|
||||
},
|
||||
}),
|
||||
});
|
||||
|
||||
await agent.prompt("My name is Alice.");
|
||||
expect(agent.state.messages.length).toBe(2);
|
||||
|
||||
await agent.prompt("What is my name?");
|
||||
expect(agent.state.messages.length).toBe(4);
|
||||
|
||||
const lastMessage = agent.state.messages[3];
|
||||
if (lastMessage.role !== "assistant") throw new Error("Expected assistant message");
|
||||
const lastText = lastMessage.content.find((c) => c.type === "text");
|
||||
if (lastText?.type !== "text") throw new Error("Expected text content");
|
||||
expect(lastText.text.toLowerCase()).toContain("alice");
|
||||
}
|
||||
|
||||
describe("Agent E2E Tests", () => {
|
||||
describe.skipIf(!process.env.GEMINI_API_KEY)("Google Provider (gemini-2.5-flash)", () => {
|
||||
const model = getModel("google", "gemini-2.5-flash");
|
||||
|
||||
it("should handle basic text prompt", async () => {
|
||||
await basicPrompt(model);
|
||||
});
|
||||
|
||||
it("should execute tools correctly", async () => {
|
||||
await toolExecution(model);
|
||||
});
|
||||
|
||||
it("should handle abort during execution", async () => {
|
||||
await abortExecution(model);
|
||||
});
|
||||
|
||||
it("should emit state updates during streaming", async () => {
|
||||
await stateUpdates(model);
|
||||
});
|
||||
|
||||
it("should maintain context across multiple turns", async () => {
|
||||
await multiTurnConversation(model);
|
||||
});
|
||||
});
|
||||
|
||||
describe.skipIf(!process.env.OPENAI_API_KEY)("OpenAI Provider (gpt-4o-mini)", () => {
|
||||
const model = getModel("openai", "gpt-4o-mini");
|
||||
|
||||
it("should handle basic text prompt", async () => {
|
||||
await basicPrompt(model);
|
||||
});
|
||||
|
||||
it("should execute tools correctly", async () => {
|
||||
await toolExecution(model);
|
||||
});
|
||||
|
||||
it("should handle abort during execution", async () => {
|
||||
await abortExecution(model);
|
||||
});
|
||||
|
||||
it("should emit state updates during streaming", async () => {
|
||||
await stateUpdates(model);
|
||||
});
|
||||
|
||||
it("should maintain context across multiple turns", async () => {
|
||||
await multiTurnConversation(model);
|
||||
});
|
||||
});
|
||||
|
||||
describe.skipIf(!process.env.ANTHROPIC_API_KEY)("Anthropic Provider (claude-3-5-haiku-20241022)", () => {
|
||||
const model = getModel("anthropic", "claude-3-5-haiku-20241022");
|
||||
|
||||
it("should handle basic text prompt", async () => {
|
||||
await basicPrompt(model);
|
||||
});
|
||||
|
||||
it("should execute tools correctly", async () => {
|
||||
await toolExecution(model);
|
||||
});
|
||||
|
||||
it("should handle abort during execution", async () => {
|
||||
await abortExecution(model);
|
||||
});
|
||||
|
||||
it("should emit state updates during streaming", async () => {
|
||||
await stateUpdates(model);
|
||||
});
|
||||
|
||||
it("should maintain context across multiple turns", async () => {
|
||||
await multiTurnConversation(model);
|
||||
});
|
||||
});
|
||||
|
||||
describe.skipIf(!process.env.XAI_API_KEY)("xAI Provider (grok-3)", () => {
|
||||
const model = getModel("xai", "grok-3");
|
||||
|
||||
it("should handle basic text prompt", async () => {
|
||||
await basicPrompt(model);
|
||||
});
|
||||
|
||||
it("should execute tools correctly", async () => {
|
||||
await toolExecution(model);
|
||||
});
|
||||
|
||||
it("should handle abort during execution", async () => {
|
||||
await abortExecution(model);
|
||||
});
|
||||
|
||||
it("should emit state updates during streaming", async () => {
|
||||
await stateUpdates(model);
|
||||
});
|
||||
|
||||
it("should maintain context across multiple turns", async () => {
|
||||
await multiTurnConversation(model);
|
||||
});
|
||||
});
|
||||
|
||||
describe.skipIf(!process.env.GROQ_API_KEY)("Groq Provider (openai/gpt-oss-20b)", () => {
|
||||
const model = getModel("groq", "openai/gpt-oss-20b");
|
||||
|
||||
it("should handle basic text prompt", async () => {
|
||||
await basicPrompt(model);
|
||||
});
|
||||
|
||||
it("should execute tools correctly", async () => {
|
||||
await toolExecution(model);
|
||||
});
|
||||
|
||||
it("should handle abort during execution", async () => {
|
||||
await abortExecution(model);
|
||||
});
|
||||
|
||||
it("should emit state updates during streaming", async () => {
|
||||
await stateUpdates(model);
|
||||
});
|
||||
|
||||
it("should maintain context across multiple turns", async () => {
|
||||
await multiTurnConversation(model);
|
||||
});
|
||||
});
|
||||
|
||||
describe.skipIf(!process.env.CEREBRAS_API_KEY)("Cerebras Provider (gpt-oss-120b)", () => {
|
||||
const model = getModel("cerebras", "gpt-oss-120b");
|
||||
|
||||
it("should handle basic text prompt", async () => {
|
||||
await basicPrompt(model);
|
||||
});
|
||||
|
||||
it("should execute tools correctly", async () => {
|
||||
await toolExecution(model);
|
||||
});
|
||||
|
||||
it("should handle abort during execution", async () => {
|
||||
await abortExecution(model);
|
||||
});
|
||||
|
||||
it("should emit state updates during streaming", async () => {
|
||||
await stateUpdates(model);
|
||||
});
|
||||
|
||||
it("should maintain context across multiple turns", async () => {
|
||||
await multiTurnConversation(model);
|
||||
});
|
||||
});
|
||||
|
||||
describe.skipIf(!process.env.ZAI_API_KEY)("zAI Provider (glm-4.5-air)", () => {
|
||||
const model = getModel("zai", "glm-4.5-air");
|
||||
|
||||
it("should handle basic text prompt", async () => {
|
||||
await basicPrompt(model);
|
||||
});
|
||||
|
||||
it("should execute tools correctly", async () => {
|
||||
await toolExecution(model);
|
||||
});
|
||||
|
||||
it("should handle abort during execution", async () => {
|
||||
await abortExecution(model);
|
||||
});
|
||||
|
||||
it("should emit state updates during streaming", async () => {
|
||||
await stateUpdates(model);
|
||||
});
|
||||
|
||||
it("should maintain context across multiple turns", async () => {
|
||||
await multiTurnConversation(model);
|
||||
});
|
||||
});
|
||||
});
|
||||
|
|
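Every test in e2e.test.ts repeats the same provider-to-env-var lookup inside its `getApiKey` callback. A shared helper along these lines would remove the duplication (a sketch, not part of this commit; `envVarForProvider` and `getApiKeyFromEnv` are hypothetical names):

```typescript
// Hypothetical helper extracting the lookup repeated in each test.
// Providers without a special-cased entry fall back to `<PROVIDER>_API_KEY`.
const ENV_VAR_MAP: Record<string, string> = {
    google: "GEMINI_API_KEY",
    openai: "OPENAI_API_KEY",
    anthropic: "ANTHROPIC_API_KEY",
    xai: "XAI_API_KEY",
    groq: "GROQ_API_KEY",
    cerebras: "CEREBRAS_API_KEY",
    zai: "ZAI_API_KEY",
};

function envVarForProvider(provider: string): string {
    return ENV_VAR_MAP[provider] ?? `${provider.toUpperCase()}_API_KEY`;
}

async function getApiKeyFromEnv(provider: string): Promise<string | undefined> {
    return process.env[envVarForProvider(provider)];
}
```

Each test could then pass `transport: new ProviderTransport({ getApiKey: getApiKeyFromEnv })` instead of inlining the map five times.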
@@ -4,6 +4,6 @@
         "outDir": "./dist",
         "rootDir": "./src"
     },
-    "include": ["src/**/*"],
-    "exclude": ["node_modules", "dist"]
-}
+    "include": ["src/**/*.ts"],
+    "exclude": ["node_modules", "dist", "**/*.d.ts", "src/**/*.d.ts"]
+}
9
packages/agent/vitest.config.ts
Normal file

@@ -0,0 +1,9 @@
import { defineConfig } from "vitest/config";

export default defineConfig({
    test: {
        globals: true,
        environment: "node",
        testTimeout: 10000, // 10 seconds
    },
});
(File diff suppressed because it is too large)
52
packages/coding-agent/package.json
Normal file

@@ -0,0 +1,52 @@
{
    "name": "@mariozechner/coding-agent",
    "version": "0.5.44",
    "description": "Coding agent CLI with read, bash, edit, write tools and session management",
    "type": "module",
    "bin": {
        "coding-agent": "dist/cli.js"
    },
    "main": "./dist/index.js",
    "types": "./dist/index.d.ts",
    "files": [
        "dist"
    ],
    "scripts": {
        "clean": "rm -rf dist",
        "build": "tsc -p tsconfig.build.json && chmod +x dist/cli.js",
        "dev": "tsc -p tsconfig.build.json --watch --preserveWatchOutput",
        "check": "tsc --noEmit",
        "test": "vitest --run",
        "prepublishOnly": "npm run clean && npm run build"
    },
    "dependencies": {
        "@mariozechner/pi-agent": "^0.5.44",
        "@mariozechner/pi-ai": "^0.5.44",
        "@mariozechner/pi-tui": "^0.5.44",
        "chalk": "^5.5.0",
        "glob": "^11.0.3"
    },
    "devDependencies": {
        "@types/node": "^24.3.0",
        "typescript": "^5.7.3",
        "vitest": "^3.2.4"
    },
    "keywords": [
        "coding-agent",
        "ai",
        "llm",
        "cli",
        "tui",
        "agent"
    ],
    "author": "Mario Zechner",
    "license": "MIT",
    "repository": {
        "type": "git",
        "url": "git+https://github.com/badlogic/pi-mono.git",
        "directory": "packages/coding-agent"
    },
    "engines": {
        "node": ">=20.0.0"
    }
}
8
packages/coding-agent/src/cli.ts
Normal file

@@ -0,0 +1,8 @@
#!/usr/bin/env node

import { main } from "./main.js";

main(process.argv.slice(2)).catch((err) => {
    console.error(err);
    process.exit(1);
});
3
packages/coding-agent/src/index.ts
Normal file

@@ -0,0 +1,3 @@
export { main } from "./main.js";
export { SessionManager } from "./session-manager.js";
export { bashTool, codingTools, editTool, readTool, writeTool } from "./tools/index.js";
209
packages/coding-agent/src/main.ts
Normal file

@@ -0,0 +1,209 @@
import { Agent, ProviderTransport } from "@mariozechner/pi-agent";
import { getModel } from "@mariozechner/pi-ai";
import chalk from "chalk";
import { SessionManager } from "./session-manager.js";
import { codingTools } from "./tools/index.js";

interface Args {
    provider?: string;
    model?: string;
    apiKey?: string;
    systemPrompt?: string;
    continue?: boolean;
    help?: boolean;
    messages: string[];
}

function parseArgs(args: string[]): Args {
    const result: Args = {
        messages: [],
    };

    for (let i = 0; i < args.length; i++) {
        const arg = args[i];

        if (arg === "--help" || arg === "-h") {
            result.help = true;
        } else if (arg === "--continue" || arg === "-c") {
            result.continue = true;
        } else if (arg === "--provider" && i + 1 < args.length) {
            result.provider = args[++i];
        } else if (arg === "--model" && i + 1 < args.length) {
            result.model = args[++i];
        } else if (arg === "--api-key" && i + 1 < args.length) {
            result.apiKey = args[++i];
        } else if (arg === "--system-prompt" && i + 1 < args.length) {
            result.systemPrompt = args[++i];
        } else if (!arg.startsWith("-")) {
            result.messages.push(arg);
        }
    }

    return result;
}

function printHelp() {
    console.log(`${chalk.bold("coding-agent")} - AI coding assistant with read, bash, edit, write tools

${chalk.bold("Usage:")}
  coding-agent [options] [messages...]

${chalk.bold("Options:")}
  --provider <name>       Provider name (default: google)
  --model <id>            Model ID (default: gemini-2.5-flash)
  --api-key <key>         API key (defaults to env vars)
  --system-prompt <text>  System prompt (default: coding assistant prompt)
  --continue, -c          Continue previous session
  --help, -h              Show this help

${chalk.bold("Examples:")}
  # Single message
  coding-agent "List all .ts files in src/"

  # Multiple messages
  coding-agent "Read package.json" "What dependencies do we have?"

  # Continue previous session
  coding-agent --continue "What did we discuss?"

  # Use different model
  coding-agent --provider openai --model gpt-4o-mini "Help me refactor this code"

${chalk.bold("Environment Variables:")}
  GEMINI_API_KEY     - Google Gemini API key
  OPENAI_API_KEY     - OpenAI API key
  ANTHROPIC_API_KEY  - Anthropic API key
  CODING_AGENT_DIR   - Session storage directory (default: ~/.coding-agent)

${chalk.bold("Available Tools:")}
  read   - Read file contents
  bash   - Execute bash commands
  edit   - Edit files with find/replace
  write  - Write files (creates/overwrites)
`);
}

const DEFAULT_SYSTEM_PROMPT = `You are an expert coding assistant. You help users with coding tasks by reading files, executing commands, editing code, and writing new files.

Available tools:
- read: Read file contents
- bash: Execute bash commands (ls, grep, find, etc.)
- edit: Make surgical edits to files (find exact text and replace)
- write: Create or overwrite files

Guidelines:
- Always use bash tool for file operations like ls, grep, find
- Use read to examine files before editing
- Use edit for precise changes (old text must match exactly)
- Use write only for new files or complete rewrites
- Be concise in your responses
- Show file paths clearly when working with files

Current directory: ${process.cwd()}`;

export async function main(args: string[]) {
    const parsed = parseArgs(args);

    if (parsed.help) {
        printHelp();
        return;
    }

    // Setup session manager
    const sessionManager = new SessionManager(parsed.continue);

    // Determine provider and model
    const provider = (parsed.provider || "google") as any;
    const modelId = parsed.model || "gemini-2.5-flash";

    // Get API key
    let apiKey = parsed.apiKey;
    if (!apiKey) {
        const envVarMap: Record<string, string> = {
            google: "GEMINI_API_KEY",
            openai: "OPENAI_API_KEY",
            anthropic: "ANTHROPIC_API_KEY",
            xai: "XAI_API_KEY",
            groq: "GROQ_API_KEY",
            cerebras: "CEREBRAS_API_KEY",
            zai: "ZAI_API_KEY",
        };
        const envVar = envVarMap[provider] || `${provider.toUpperCase()}_API_KEY`;
        apiKey = process.env[envVar];

        if (!apiKey) {
            console.error(chalk.red(`Error: No API key found for provider "${provider}"`));
            console.error(chalk.dim(`Set ${envVar} environment variable or use --api-key flag`));
            process.exit(1);
        }
    }

    // Create agent
    const model = getModel(provider, modelId);
    const systemPrompt = parsed.systemPrompt || DEFAULT_SYSTEM_PROMPT;

    const agent = new Agent({
        initialState: {
            systemPrompt,
            model,
            thinkingLevel: "off",
            tools: codingTools,
        },
        transport: new ProviderTransport({
            getApiKey: async () => apiKey!,
        }),
    });

    // Load previous messages if continuing
    if (parsed.continue) {
        const messages = sessionManager.loadMessages();
        if (messages.length > 0) {
            console.log(chalk.dim(`Loaded ${messages.length} messages from previous session`));
            agent.replaceMessages(messages);
        }
    }

    // Start session
    sessionManager.startSession(agent.state);

    // Subscribe to state updates to save messages
    agent.subscribe((event) => {
        if (event.type === "state-update") {
            // Save any new messages
            const currentMessages = event.state.messages;
            const loadedMessages = sessionManager.loadMessages();

            if (currentMessages.length > loadedMessages.length) {
                for (let i = loadedMessages.length; i < currentMessages.length; i++) {
                    sessionManager.saveMessage(currentMessages[i]);
                }
            }
        }

        sessionManager.saveEvent(event);
    });

    // Process messages
    if (parsed.messages.length === 0) {
        console.log(chalk.yellow("No messages provided. Use --help for usage information."));
        console.log(chalk.dim(`Session saved to: ${sessionManager.getSessionFile()}`));
        return;
    }

    for (const message of parsed.messages) {
        console.log(chalk.blue(`\n> ${message}\n`));
        await agent.prompt(message);

        // Print response
        const lastMessage = agent.state.messages[agent.state.messages.length - 1];
        if (lastMessage.role === "assistant") {
            for (const content of lastMessage.content) {
                if (content.type === "text") {
                    console.log(content.text);
                }
            }
        }
    }

    console.log(chalk.dim(`\nSession saved to: ${sessionManager.getSessionFile()}`));
}
167
packages/coding-agent/src/session-manager.ts
Normal file

@@ -0,0 +1,167 @@
import type { AgentEvent, AgentState } from "@mariozechner/pi-agent";
import { randomBytes } from "crypto";
import { appendFileSync, existsSync, mkdirSync, readdirSync, readFileSync, statSync } from "fs";
import { homedir } from "os";
import { join, resolve } from "path";

function uuidv4(): string {
    const bytes = randomBytes(16);
    bytes[6] = (bytes[6] & 0x0f) | 0x40;
    bytes[8] = (bytes[8] & 0x3f) | 0x80;
    const hex = bytes.toString("hex");
    return `${hex.slice(0, 8)}-${hex.slice(8, 12)}-${hex.slice(12, 16)}-${hex.slice(16, 20)}-${hex.slice(20, 32)}`;
}

export interface SessionHeader {
    type: "session";
    id: string;
    timestamp: string;
    cwd: string;
    systemPrompt: string;
    model: string;
}

export interface SessionMessageEntry {
    type: "message";
    timestamp: string;
    message: any; // AppMessage from agent state
}

export interface SessionEventEntry {
    type: "event";
    timestamp: string;
    event: AgentEvent;
}

export class SessionManager {
    private sessionId!: string;
    private sessionFile!: string;
    private sessionDir: string;

    constructor(continueSession: boolean = false) {
        this.sessionDir = this.getSessionDirectory();

        if (continueSession) {
            const mostRecent = this.findMostRecentlyModifiedSession();
            if (mostRecent) {
                this.sessionFile = mostRecent;
                this.loadSessionId();
            } else {
                this.initNewSession();
            }
        } else {
            this.initNewSession();
        }
    }

    private getSessionDirectory(): string {
        const cwd = process.cwd();
        const safePath = "--" + cwd.replace(/^\//, "").replace(/\//g, "-") + "--";

        const configDir = resolve(process.env.CODING_AGENT_DIR || join(homedir(), ".coding-agent"));
        const sessionDir = join(configDir, "sessions", safePath);
        if (!existsSync(sessionDir)) {
            mkdirSync(sessionDir, { recursive: true });
        }
        return sessionDir;
    }

    private initNewSession(): void {
        this.sessionId = uuidv4();
        const timestamp = new Date().toISOString().replace(/[:.]/g, "-");
        this.sessionFile = join(this.sessionDir, `${timestamp}_${this.sessionId}.jsonl`);
    }

    private findMostRecentlyModifiedSession(): string | null {
        try {
            const files = readdirSync(this.sessionDir)
                .filter((f) => f.endsWith(".jsonl"))
                .map((f) => ({
                    name: f,
                    path: join(this.sessionDir, f),
                    mtime: statSync(join(this.sessionDir, f)).mtime,
                }))
                .sort((a, b) => b.mtime.getTime() - a.mtime.getTime());

            return files[0]?.path || null;
        } catch {
            return null;
        }
    }

    private loadSessionId(): void {
        if (!existsSync(this.sessionFile)) return;

        const lines = readFileSync(this.sessionFile, "utf8").trim().split("\n");
        for (const line of lines) {
            try {
                const entry = JSON.parse(line);
                if (entry.type === "session") {
                    this.sessionId = entry.id;
                    return;
                }
            } catch {
                // Skip malformed lines
            }
        }
        this.sessionId = uuidv4();
    }

    startSession(state: AgentState): void {
        const entry: SessionHeader = {
            type: "session",
            id: this.sessionId,
            timestamp: new Date().toISOString(),
            cwd: process.cwd(),
            systemPrompt: state.systemPrompt,
            model: `${state.model.provider}/${state.model.id}`,
        };
        appendFileSync(this.sessionFile, JSON.stringify(entry) + "\n");
    }

    saveMessage(message: any): void {
        const entry: SessionMessageEntry = {
            type: "message",
            timestamp: new Date().toISOString(),
            message,
        };
        appendFileSync(this.sessionFile, JSON.stringify(entry) + "\n");
    }

    saveEvent(event: AgentEvent): void {
        const entry: SessionEventEntry = {
            type: "event",
            timestamp: new Date().toISOString(),
            event,
        };
        appendFileSync(this.sessionFile, JSON.stringify(entry) + "\n");
    }

    loadMessages(): any[] {
        if (!existsSync(this.sessionFile)) return [];

        const messages: any[] = [];
        const lines = readFileSync(this.sessionFile, "utf8").trim().split("\n");

        for (const line of lines) {
            try {
                const entry = JSON.parse(line);
                if (entry.type === "message") {
                    messages.push(entry.message);
                }
            } catch {
                // Skip malformed lines
            }
        }

        return messages;
    }

    getSessionId(): string {
        return this.sessionId;
    }

    getSessionFile(): string {
        return this.sessionFile;
    }
}
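A session file is append-only JSONL: one `session` header line, then interleaved `message` and `event` entries. The sketch below shows how such a file can be read back (a hypothetical `parseSession` helper, not part of this commit; it mirrors the malformed-line tolerance of `SessionManager.loadMessages`):

```typescript
// Sketch: parse the JSONL format SessionManager appends to.
// Each line is an independent JSON object with a discriminating "type" field.
interface ParsedSession {
    header?: { id: string; model: string; cwd: string };
    messages: unknown[];
    eventCount: number;
}

function parseSession(jsonl: string): ParsedSession {
    const result: ParsedSession = { messages: [], eventCount: 0 };
    for (const line of jsonl.trim().split("\n")) {
        if (!line) continue;
        let entry: any;
        try {
            entry = JSON.parse(line);
        } catch {
            continue; // skip malformed lines, as loadMessages does
        }
        if (entry.type === "session") result.header = entry;
        else if (entry.type === "message") result.messages.push(entry.message);
        else if (entry.type === "event") result.eventCount++;
    }
    return result;
}
```

Because entries are appended one JSON object per line, a crash mid-write corrupts at most the final line, which the parser simply skips.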
37
packages/coding-agent/src/tools/bash.ts
Normal file

@@ -0,0 +1,37 @@
import type { AgentTool } from "@mariozechner/pi-ai";
import { Type } from "@sinclair/typebox";
import { exec } from "child_process";
import { promisify } from "util";

const execAsync = promisify(exec);

const bashSchema = Type.Object({
    command: Type.String({ description: "Bash command to execute" }),
});

export const bashTool: AgentTool<typeof bashSchema> = {
    name: "bash",
    label: "bash",
    description:
        "Execute a bash command in the current working directory. Returns stdout and stderr. Commands run with a 30 second timeout.",
    parameters: bashSchema,
    execute: async (_toolCallId: string, { command }: { command: string }) => {
        try {
            const { stdout, stderr } = await execAsync(command, {
                timeout: 30000,
                maxBuffer: 10 * 1024 * 1024, // 10MB
            });

            let output = "";
            if (stdout) output += stdout;
            if (stderr) output += `\nSTDERR:\n${stderr}`;

            return { output: output || "(no output)", details: undefined };
        } catch (error: any) {
            return {
                output: `Error executing command: ${error.message}\nSTDOUT: ${error.stdout || ""}\nSTDERR: ${error.stderr || ""}`,
                details: undefined,
            };
        }
    },
};
61
packages/coding-agent/src/tools/edit.ts
Normal file

@@ -0,0 +1,61 @@
import type { AgentTool } from "@mariozechner/pi-ai";
import { Type } from "@sinclair/typebox";
import { existsSync, readFileSync, writeFileSync } from "fs";
import { resolve } from "path";

const editSchema = Type.Object({
    path: Type.String({ description: "Path to the file to edit (relative or absolute)" }),
    oldText: Type.String({ description: "Exact text to find and replace (must match exactly)" }),
    newText: Type.String({ description: "New text to replace the old text with" }),
});

export const editTool: AgentTool<typeof editSchema> = {
    name: "edit",
    label: "edit",
    description:
        "Edit a file by replacing exact text. The oldText must match exactly (including whitespace). Use this for precise, surgical edits.",
    parameters: editSchema,
    execute: async (
        _toolCallId: string,
        { path, oldText, newText }: { path: string; oldText: string; newText: string },
    ) => {
        try {
            const absolutePath = resolve(path);

            if (!existsSync(absolutePath)) {
                return { output: `Error: File not found: ${path}`, details: undefined };
            }

            const content = readFileSync(absolutePath, "utf-8");

            // Check if old text exists
            if (!content.includes(oldText)) {
                return {
                    output: `Error: Could not find the exact text in ${path}. The old text must match exactly including all whitespace and newlines.`,
                    details: undefined,
                };
            }

            // Count occurrences
            const occurrences = content.split(oldText).length - 1;

            if (occurrences > 1) {
                return {
                    output: `Error: Found ${occurrences} occurrences of the text in ${path}. The text must be unique. Please provide more context to make it unique.`,
                    details: undefined,
                };
            }

            // Perform replacement
            const newContent = content.replace(oldText, newText);
            writeFileSync(absolutePath, newContent, "utf-8");

            return {
                output: `Successfully replaced text in ${path}. Changed ${oldText.length} characters to ${newText.length} characters.`,
                details: undefined,
            };
        } catch (error: any) {
            return { output: `Error editing file: ${error.message}`, details: undefined };
        }
    },
};
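The edit tool rejects a replacement unless `oldText` occurs exactly once in the file, counting occurrences by splitting on the needle. A minimal standalone sketch of that check (hypothetical helper names, not part of the commit):

```typescript
// Count non-overlapping occurrences of needle in haystack,
// the same split-based trick edit.ts uses.
function countOccurrences(haystack: string, needle: string): number {
    return haystack.split(needle).length - 1;
}

// An edit is unambiguous only when the target text is unique.
function isUniqueEditTarget(content: string, oldText: string): boolean {
    return countOccurrences(content, oldText) === 1;
}
```

This is why the tool asks the model to include more surrounding context when the snippet appears multiple times: widening `oldText` is what makes the match unique.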
packages/coding-agent/src/tools/index.ts (new file)
@@ -0,0 +1,11 @@
export { bashTool } from "./bash.js";
export { editTool } from "./edit.js";
export { readTool } from "./read.js";
export { writeTool } from "./write.js";

import { bashTool } from "./bash.js";
import { editTool } from "./edit.js";
import { readTool } from "./read.js";
import { writeTool } from "./write.js";

export const codingTools = [readTool, bashTool, editTool, writeTool];
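`codingTools` is meant to be handed to an agent loop that dispatches tool calls by name. A minimal, self-contained sketch of that dispatch pattern (the `SimpleTool` shape and the `dispatch` helper stand in for the real `AgentTool` type from `@mariozechner/pi-ai` and the agent's internal loop):

```typescript
// Illustrative minimal tool shape; the real AgentTool type lives in @mariozechner/pi-ai.
interface SimpleTool {
	name: string;
	execute: (toolCallId: string, args: Record<string, unknown>) => Promise<{ output: string }>;
}

const tools: SimpleTool[] = [
	{ name: "echo", execute: async (_id, args) => ({ output: String(args.text) }) },
	{ name: "upper", execute: async (_id, args) => ({ output: String(args.text).toUpperCase() }) },
];

// Dispatch a tool call the way an agent loop would: look the tool up by
// name and forward the model-supplied arguments, returning the output
// that gets fed back to the model as a tool result.
async function dispatch(name: string, args: Record<string, unknown>): Promise<string> {
	const tool = tools.find((t) => t.name === name);
	if (!tool) return `Error: unknown tool: ${name}`;
	const result = await tool.execute("call-1", args);
	return result.output;
}

dispatch("upper", { text: "hello" }).then((out) => console.log(out)); // HELLO
```

Note that unknown tool names produce an error string rather than a throw, matching the tools' own convention of reporting failures through `output`.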
packages/coding-agent/src/tools/read.ts (new file)
@@ -0,0 +1,29 @@
import type { AgentTool } from "@mariozechner/pi-ai";
import { Type } from "@sinclair/typebox";
import { existsSync, readFileSync } from "fs";
import { resolve } from "path";

const readSchema = Type.Object({
	path: Type.String({ description: "Path to the file to read (relative or absolute)" }),
});

export const readTool: AgentTool<typeof readSchema> = {
	name: "read",
	label: "read",
	description: "Read the contents of a file. Returns the full file content as text.",
	parameters: readSchema,
	execute: async (_toolCallId: string, { path }: { path: string }) => {
		try {
			const absolutePath = resolve(path);

			if (!existsSync(absolutePath)) {
				return { output: `Error: File not found: ${path}`, details: undefined };
			}

			const content = readFileSync(absolutePath, "utf-8");
			return { output: content, details: undefined };
		} catch (error: any) {
			return { output: `Error reading file: ${error.message}`, details: undefined };
		}
	},
};
packages/coding-agent/src/tools/write.ts (new file)
@@ -0,0 +1,31 @@
import type { AgentTool } from "@mariozechner/pi-ai";
import { Type } from "@sinclair/typebox";
import { mkdirSync, writeFileSync } from "fs";
import { dirname, resolve } from "path";

const writeSchema = Type.Object({
	path: Type.String({ description: "Path to the file to write (relative or absolute)" }),
	content: Type.String({ description: "Content to write to the file" }),
});

export const writeTool: AgentTool<typeof writeSchema> = {
	name: "write",
	label: "write",
	description:
		"Write content to a file. Creates the file if it doesn't exist, overwrites if it does. Automatically creates parent directories.",
	parameters: writeSchema,
	execute: async (_toolCallId: string, { path, content }: { path: string; content: string }) => {
		try {
			const absolutePath = resolve(path);
			const dir = dirname(absolutePath);

			// Create parent directories if needed
			mkdirSync(dir, { recursive: true });

			writeFileSync(absolutePath, content, "utf-8");
			return { output: `Successfully wrote ${content.length} bytes to ${path}`, details: undefined };
		} catch (error: any) {
			return { output: `Error writing file: ${error.message}`, details: undefined };
		}
	},
};
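The write tool's parent-directory handling rests on `mkdirSync` with `recursive: true`, which creates any missing intermediate directories and is a no-op when they already exist. A self-contained sketch of the same pattern against a temp directory (the `writeWithParents` helper is illustrative, not part of the package):

```typescript
import { mkdirSync, readFileSync, rmSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { dirname, join } from "node:path";

// Write a file, creating any missing parent directories first.
// mkdirSync with recursive: true does not throw if the directories exist.
function writeWithParents(path: string, content: string): void {
	mkdirSync(dirname(path), { recursive: true });
	writeFileSync(path, content, "utf-8");
}

const root = join(tmpdir(), `write-demo-${Date.now()}`);
const target = join(root, "nested", "dir", "file.txt");
writeWithParents(target, "hello");
console.log(readFileSync(target, "utf-8")); // hello

// Clean up the demo directory tree.
rmSync(root, { recursive: true, force: true });
```

Without `recursive: true`, writing to `nested/dir/file.txt` would fail with `ENOENT` unless every intermediate directory already existed.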
packages/coding-agent/test/tools.test.ts (new file)
@@ -0,0 +1,133 @@
import { mkdirSync, rmSync, writeFileSync } from "fs";
import { tmpdir } from "os";
import { join } from "path";
import { afterEach, beforeEach, describe, expect, it } from "vitest";
import { bashTool } from "../src/tools/bash.js";
import { editTool } from "../src/tools/edit.js";
import { readTool } from "../src/tools/read.js";
import { writeTool } from "../src/tools/write.js";

describe("Coding Agent Tools", () => {
	let testDir: string;

	beforeEach(() => {
		// Create a unique temporary directory for each test
		testDir = join(tmpdir(), `coding-agent-test-${Date.now()}`);
		mkdirSync(testDir, { recursive: true });
	});

	afterEach(() => {
		// Clean up test directory
		rmSync(testDir, { recursive: true, force: true });
	});

	describe("read tool", () => {
		it("should read file contents", async () => {
			const testFile = join(testDir, "test.txt");
			const content = "Hello, world!";
			writeFileSync(testFile, content);

			const result = await readTool.execute("test-call-1", { path: testFile });

			expect(result.output).toBe(content);
			expect(result.details).toBeUndefined();
		});

		it("should handle non-existent files", async () => {
			const testFile = join(testDir, "nonexistent.txt");

			const result = await readTool.execute("test-call-2", { path: testFile });

			expect(result.output).toContain("Error");
			expect(result.output).toContain("File not found");
		});
	});

	describe("write tool", () => {
		it("should write file contents", async () => {
			const testFile = join(testDir, "write-test.txt");
			const content = "Test content";

			const result = await writeTool.execute("test-call-3", { path: testFile, content });

			expect(result.output).toContain("Successfully wrote");
			expect(result.output).toContain(testFile);
			expect(result.details).toBeUndefined();
		});

		it("should create parent directories", async () => {
			const testFile = join(testDir, "nested", "dir", "test.txt");
			const content = "Nested content";

			const result = await writeTool.execute("test-call-4", { path: testFile, content });

			expect(result.output).toContain("Successfully wrote");
		});
	});

	describe("edit tool", () => {
		it("should replace text in file", async () => {
			const testFile = join(testDir, "edit-test.txt");
			const originalContent = "Hello, world!";
			writeFileSync(testFile, originalContent);

			const result = await editTool.execute("test-call-5", {
				path: testFile,
				oldText: "world",
				newText: "testing",
			});

			expect(result.output).toContain("Successfully replaced");
			expect(result.details).toBeUndefined();
		});

		it("should fail if text not found", async () => {
			const testFile = join(testDir, "edit-test.txt");
			const originalContent = "Hello, world!";
			writeFileSync(testFile, originalContent);

			const result = await editTool.execute("test-call-6", {
				path: testFile,
				oldText: "nonexistent",
				newText: "testing",
			});

			expect(result.output).toContain("Could not find the exact text");
		});

		it("should fail if text appears multiple times", async () => {
			const testFile = join(testDir, "edit-test.txt");
			const originalContent = "foo foo foo";
			writeFileSync(testFile, originalContent);

			const result = await editTool.execute("test-call-7", {
				path: testFile,
				oldText: "foo",
				newText: "bar",
			});

			expect(result.output).toContain("Found 3 occurrences");
		});
	});

	describe("bash tool", () => {
		it("should execute simple commands", async () => {
			const result = await bashTool.execute("test-call-8", { command: "echo 'test output'" });

			expect(result.output).toContain("test output");
			expect(result.details).toBeUndefined();
		});

		it("should handle command errors", async () => {
			const result = await bashTool.execute("test-call-9", { command: "exit 1" });

			expect(result.output).toContain("Command failed");
		});

		it("should respect timeout", async () => {
			const result = await bashTool.execute("test-call-10", { command: "sleep 35" });

			expect(result.output).toContain("Command failed");
		}, 35000);
	});
});
packages/coding-agent/tsconfig.build.json (new file)
@@ -0,0 +1,9 @@
{
	"extends": "../../tsconfig.base.json",
	"compilerOptions": {
		"outDir": "./dist",
		"rootDir": "./src"
	},
	"include": ["src/**/*.ts"],
	"exclude": ["node_modules", "dist", "**/*.d.ts", "src/**/*.d.ts"]
}
packages/coding-agent/vitest.config.ts (new file)
@@ -0,0 +1,9 @@
import { defineConfig } from 'vitest/config';

export default defineConfig({
	test: {
		globals: true,
		environment: 'node',
		testTimeout: 30000, // 30 seconds for API calls
	}
});
@@ -34,7 +34,7 @@
 		"node": ">=20.0.0"
 	},
 	"dependencies": {
-		"@mariozechner/pi-agent": "^0.5.44",
+		"@mariozechner/pi-agent-old": "^0.5.44",
 		"chalk": "^5.5.0"
 	},
 	"devDependencies": {}
@@ -1,4 +1,4 @@
-import { main as agentMain } from "@mariozechner/pi-agent";
+import { main as agentMain } from "@mariozechner/pi-agent-old";
 import chalk from "chalk";
 import { getActivePod, loadConfig } from "../config.js";
@@ -9,6 +9,7 @@ import {
 	type AppMessage,
 	AppStorage,
 	ChatPanel,
+	createJavaScriptReplTool,
 	IndexedDBStorageBackend,
 	// PersistentStorageDialog, // TODO: Fix - currently broken
 	ProviderKeysStore,
@@ -197,6 +198,12 @@ Feel free to use these tools when needed to provide accurate and helpful respons
 	await chatPanel.setAgent(agent, {
 		onApiKeyRequired: async (provider: string) => {
 			return await ApiKeyPromptDialog.prompt(provider);
 		},
+		toolsFactory: (agent, agentInterface, artifactsPanel, runtimeProvidersFactory) => {
+			// Create javascript_repl tool with access to attachments + artifacts
+			const replTool = createJavaScriptReplTool();
+			replTool.runtimeProvidersFactory = runtimeProvidersFactory;
+			return [replTool];
+		}
 	});
 };
@@ -9,7 +9,6 @@ import { ArtifactsRuntimeProvider } from "./components/sandbox/ArtifactsRuntimeP
 import { AttachmentsRuntimeProvider } from "./components/sandbox/AttachmentsRuntimeProvider.js";
 import type { SandboxRuntimeProvider } from "./components/sandbox/SandboxRuntimeProvider.js";
 import { ArtifactsPanel, ArtifactsToolRenderer } from "./tools/artifacts/index.js";
-import { createJavaScriptReplTool } from "./tools/javascript-repl.js";
 import { registerToolRenderer } from "./tools/renderer-registry.js";
 import type { Attachment } from "./utils/attachment-utils.js";
 import { i18n } from "./utils/i18n.js";
@@ -65,6 +64,7 @@ export class ChatPanel extends LitElement {
 			agent: Agent,
 			agentInterface: AgentInterface,
 			artifactsPanel: ArtifactsPanel,
+			runtimeProvidersFactory: () => SandboxRuntimeProvider[],
 		) => AgentTool<any>[];
 	},
 ) {
@@ -80,12 +80,6 @@ export class ChatPanel extends LitElement {
 		this.agentInterface.onApiKeyRequired = config?.onApiKeyRequired;
 		this.agentInterface.onBeforeSend = config?.onBeforeSend;

-		// Create JavaScript REPL tool
-		const javascriptReplTool = createJavaScriptReplTool();
-		if (config?.sandboxUrlProvider) {
-			javascriptReplTool.sandboxUrlProvider = config.sandboxUrlProvider;
-		}
-
 		// Set up artifacts panel
 		this.artifactsPanel = new ArtifactsPanel();
 		if (config?.sandboxUrlProvider) {
@@ -94,7 +88,7 @@ export class ChatPanel extends LitElement {
 		// Register the standalone tool renderer (not the panel itself)
 		registerToolRenderer("artifacts", new ArtifactsToolRenderer(this.artifactsPanel));

-		// Runtime providers factory
+		// Runtime providers factory for attachments + artifacts access
 		const runtimeProvidersFactory = () => {
 			const attachments: Attachment[] = [];
 			for (const message of this.agent!.state.messages) {
@@ -116,7 +110,6 @@ export class ChatPanel extends LitElement {

 			return providers;
 		};
-		javascriptReplTool.runtimeProvidersFactory = runtimeProvidersFactory;
 		this.artifactsPanel.runtimeProvidersFactory = runtimeProvidersFactory;

 		this.artifactsPanel.onArtifactsChange = () => {
@@ -141,8 +134,10 @@ export class ChatPanel extends LitElement {
 		};

 		// Set tools on the agent
-		const additionalTools = config?.toolsFactory?.(agent, this.agentInterface, this.artifactsPanel) || [];
-		const tools = [javascriptReplTool, this.artifactsPanel.tool, ...additionalTools];
+		// Pass runtimeProvidersFactory so consumers can configure their own REPL tools
+		const additionalTools =
+			config?.toolsFactory?.(agent, this.agentInterface, this.artifactsPanel, runtimeProvidersFactory) || [];
+		const tools = [this.artifactsPanel.tool, ...additionalTools];
 		this.agent.setTools(tools);

 		// Reconstruct artifacts from existing messages
@@ -1,4 +1,4 @@
-import { ARTIFACTS_RUNTIME_PROVIDER_DESCRIPTION } from "../../prompts/tool-prompts.js";
+import { ARTIFACTS_RUNTIME_PROVIDER_DESCRIPTION } from "../../prompts/prompts.js";
 import type { SandboxRuntimeProvider } from "./SandboxRuntimeProvider.js";

 // Define minimal interface for ArtifactsPanel to avoid circular dependencies
@@ -1,4 +1,4 @@
-import { ATTACHMENTS_RUNTIME_DESCRIPTION } from "../../prompts/tool-prompts.js";
+import { ATTACHMENTS_RUNTIME_DESCRIPTION } from "../../prompts/prompts.js";
 import type { Attachment } from "../../utils/attachment-utils.js";
 import type { SandboxRuntimeProvider } from "./SandboxRuntimeProvider.js";
@@ -23,6 +23,10 @@ export class ConsoleRuntimeProvider implements SandboxRuntimeProvider {
 		return {};
 	}

+	getDescription(): string {
+		return "";
+	}
+
 	getRuntime(): (sandboxId: string) => void {
 		return (_sandboxId: string) => {
 			// Store truly original console methods on first wrap only
@@ -31,5 +31,5 @@ export interface SandboxRuntimeProvider {
 	 * Optional documentation describing what globals/functions this provider injects.
 	 * This will be appended to tool descriptions dynamically so the LLM knows what's available.
 	 */
-	getDescription?(): string;
+	getDescription(): string;
 }
@@ -56,8 +56,8 @@ export { ApiKeysTab, ProxyTab, SettingsDialog, SettingsTab } from "./dialogs/Set
 // Prompts
 export {
 	ARTIFACTS_RUNTIME_PROVIDER_DESCRIPTION,
-	DOWNLOADABLE_FILE_RUNTIME_DESCRIPTION,
-} from "./prompts/tool-prompts.js";
+	ATTACHMENTS_RUNTIME_DESCRIPTION,
+} from "./prompts/prompts.js";
 // Storage
 export { AppStorage, getAppStorage, setAppStorage } from "./storage/app-storage.js";
 export { IndexedDBStorageBackend } from "./storage/backends/indexeddb-storage-backend.js";
@@ -7,7 +7,7 @@
 // JavaScript REPL Tool
 // ============================================================================

-export const JAVASCRIPT_REPL_DESCRIPTION = `# JavaScript REPL
+export const JAVASCRIPT_REPL_TOOL_DESCRIPTION = (runtimeProviderDescriptions: string[]) => `# JavaScript REPL

 ## Purpose
 Execute JavaScript code in a sandboxed browser environment with full Web APIs.
@@ -16,7 +16,7 @@ Execute JavaScript code in a sandboxed browser environment with full Web APIs.
 - Quick calculations or data transformations
 - Testing JavaScript code snippets in isolation
 - Processing data with libraries (XLSX, CSV, etc.)
-- Creating visualizations (charts, graphs)
+- Creating artifacts from data

 ## Environment
 - ES2023+ JavaScript (async/await, optional chaining, nullish coalescing, etc.)
@@ -54,13 +54,21 @@ console.log('Sum:', sum, 'Average:', avg);
 ## Important Notes
 - Graphics: Use fixed dimensions (800x600), NOT window.innerWidth/Height
 - Chart.js: Set options: { responsive: false, animation: false }
-- Three.js: renderer.setSize(800, 600) with matching aspect ratio`;
+- Three.js: renderer.setSize(800, 600) with matching aspect ratio
+
+## Library functions
+You can use the following functions in your code:
+
+${runtimeProviderDescriptions.join("\n\n")}
+`;

 // ============================================================================
 // Artifacts Tool
 // ============================================================================

-export const ARTIFACTS_BASE_DESCRIPTION = `Creates and manages file artifacts. Each artifact is a file with a filename and content.
+export const ARTIFACTS_TOOL_DESCRIPTION = (
+	runtimeProviderDescriptions: string[],
+) => `Creates and manages file artifacts. Each artifact is a file with a filename and content.

 CRITICAL - ARTIFACT UPDATE WORKFLOW:
 1. Creating new file? → Use 'create'
@@ -104,33 +112,8 @@ Commands:
 ANTI-PATTERNS TO AVOID:
 ❌ Using 'get' + modifying content + 'rewrite' to change one section
 ❌ Using createOrUpdateArtifact() in code for manual edits YOU make
-✅ Use 'update' command for surgical, targeted modifications`;
-
-export const ARTIFACTS_RUNTIME_EXAMPLE = `- Example HTML artifact that processes a CSV attachment:
-  <script>
-    // List available files
-    const files = listAttachments();
-    console.log('Available files:', files);
-
-    // Find CSV file
-    const csvFile = files.find(f => f.mimeType === 'text/csv');
-    if (csvFile) {
-      const csvContent = readTextAttachment(csvFile.id);
-      // Process CSV data...
-    }
-
-    // Display image
-    const imageFile = files.find(f => f.mimeType.startsWith('image/'));
-    if (imageFile) {
-      const bytes = readBinaryAttachment(imageFile.id);
-      const blob = new Blob([bytes], {type: imageFile.mimeType});
-      const url = URL.createObjectURL(blob);
-      document.body.innerHTML = '<img src="' + url + '">';
-    }
-  </script>
-`;
-
+✅ Use 'update' command for surgical, targeted modifications

 export const ARTIFACTS_HTML_SECTION = `
 For text/html artifacts:
 - Must be a single self-contained file
 - External scripts: Use CDNs like https://esm.sh, https://unpkg.com, or https://cdnjs.cloudflare.com
@@ -166,40 +149,33 @@ CRITICAL REMINDER FOR ALL ARTIFACTS:
 - Prefer to update existing files rather than creating new ones
 - Keep filenames consistent and descriptive
 - Use appropriate file extensions
-- Ensure HTML artifacts have a defined background color`;
-
-/**
- * Build complete artifacts description with optional provider docs.
- */
-export function buildArtifactsDescription(providerDocs?: string): string {
-	const runtimeSection = providerDocs
-		? `
-The following functions are available inside your code in HTML artifacts:
-
-For text/html artifacts with runtime capabilities:${providerDocs}
-${ARTIFACTS_RUNTIME_EXAMPLE}
-`
-		: "";
-
-	return ARTIFACTS_BASE_DESCRIPTION + runtimeSection + ARTIFACTS_HTML_SECTION;
-}
+- Ensure HTML artifacts have a defined background color
+
+${runtimeProviderDescriptions.join("\n\n")}
+`;

 // ============================================================================
 // Artifacts Runtime Provider
 // ============================================================================

 export const ARTIFACTS_RUNTIME_PROVIDER_DESCRIPTION = `
-Artifact Management from within executed code (HTML/JavaScript REPL).
+### Artifacts

-WHEN TO USE THESE FUNCTIONS:
+Programmatically create, read, update, and delete artifact files from your code.
+
+#### When to Use
+- Persist data or state between REPL calls
 - ONLY when writing code that programmatically generates/transforms data
 - Examples: Web scraping results, processed CSV data, generated charts saved as JSON
 - The artifact content is CREATED BY THE CODE, not by you directly

-DO NOT USE THESE FUNCTIONS FOR:
+#### Do NOT Use For
 - Summaries or notes YOU write (use artifacts tool instead)
 - Content YOU author directly (use artifacts tool instead)

-Functions:
+#### Functions
 - await listArtifacts() - Get list of all artifact filenames, returns string[]
   * Example: const files = await listArtifacts(); // ['data.json', 'notes.md']
@@ -216,39 +192,75 @@ Functions:
 - await deleteArtifact(filename) - Delete an artifact
   * Example: await deleteArtifact('temp.json')

-Example - Scraping data and saving it:
-  const response = await fetch('https://api.example.com/data');
-  const data = await response.json();
-  await createOrUpdateArtifact('api-results.json', data);
+#### Example
+Scraping data and saving it:
+\`\`\`javascript
+const response = await fetch('https://api.example.com/data');
+const data = await response.json();
+await createOrUpdateArtifact('api-results.json', data);
+\`\`\`

-Binary data must be converted to a base64 string before passing to createOrUpdateArtifact.
-Example:
-  const blob = await new Promise(resolve => canvas.toBlob(resolve, 'image/png'));
-  const arrayBuffer = await blob.arrayBuffer();
-  const base64 = btoa(String.fromCharCode(...new Uint8Array(arrayBuffer)));
-  await createOrUpdateArtifact('image.png', base64);
+Binary data (convert to base64 first):
+\`\`\`javascript
+const blob = await new Promise(resolve => canvas.toBlob(resolve, 'image/png'));
+const arrayBuffer = await blob.arrayBuffer();
+const base64 = btoa(String.fromCharCode(...new Uint8Array(arrayBuffer)));
+await createOrUpdateArtifact('image.png', base64);
+\`\`\`
 `;

 // ============================================================================
-// Downloadable File Runtime Provider
+// Attachments Runtime Provider
 // ============================================================================

-export const DOWNLOADABLE_FILE_RUNTIME_DESCRIPTION = `
-Downloadable Files (one-time downloads for the user - YOU cannot read these back):
-- await returnDownloadableFile(filename, content, mimeType?) - Create downloadable file (async!)
-  * Use for: Processed/transformed data, generated images, analysis results
-  * Important: This creates a download for the user. You will NOT be able to access this file's content later.
-  * If you need to access the data later, use createArtifact() instead (if available).
-  * Always use await with returnDownloadableFile
-  * REQUIRED: For Blob/Uint8Array binary content, you MUST supply a proper MIME type (e.g., "image/png").
-    If omitted, throws an Error with stack trace pointing to the offending line.
-  * Strings without a MIME default to text/plain.
-  * Objects are auto-JSON stringified and default to application/json unless a MIME is provided.
-  * Canvas images: Use toBlob() with await Promise wrapper
-  * Examples:
-    - await returnDownloadableFile('cleaned-data.csv', csvString, 'text/csv')
-    - await returnDownloadableFile('analysis.json', {results: [...]}, 'application/json')
-    - await returnDownloadableFile('chart.png', blob, 'image/png')`;
+export const ATTACHMENTS_RUNTIME_DESCRIPTION = `
+### User Attachments
+
+Read files that the user has uploaded to the conversation.
+
+#### When to Use
+- When you need to read or process files the user has uploaded to the conversation
+- Examples: CSV data files, JSON datasets, Excel spreadsheets, images, PDFs
+
+#### Do NOT Use For
+- Creating new files (use createOrUpdateArtifact instead)
+- Modifying existing files (read first, then create artifact with modified version)
+
+#### Functions
+- listAttachments() - List all attachments, returns array of {id, fileName, mimeType, size}
+  * Example: const files = listAttachments(); // [{id: '...', fileName: 'data.xlsx', mimeType: '...', size: 12345}]
+
+- readTextAttachment(attachmentId) - Read attachment as text, returns string
+  * Use for: CSV, JSON, TXT, XML, and other text-based files
+  * Example: const csvContent = readTextAttachment(files[0].id);
+  * Example: const json = JSON.parse(readTextAttachment(jsonFile.id));
+
+- readBinaryAttachment(attachmentId) - Read attachment as binary data, returns Uint8Array
+  * Use for: Excel (.xlsx), images, PDFs, and other binary files
+  * Example: const xlsxBytes = readBinaryAttachment(files[0].id);
+  * Example: const XLSX = await import('https://esm.run/xlsx'); const workbook = XLSX.read(xlsxBytes);
+
+#### Example
+Processing CSV attachment:
+\`\`\`javascript
+const files = listAttachments();
+const csvFile = files.find(f => f.fileName.endsWith('.csv'));
+const csvData = readTextAttachment(csvFile.id);
+const rows = csvData.split('\\n').map(row => row.split(','));
+console.log(\`Found \${rows.length} rows\`);
+\`\`\`
+
+Processing Excel attachment:
+\`\`\`javascript
+const XLSX = await import('https://esm.run/xlsx');
+const files = listAttachments();
+const excelFile = files.find(f => f.fileName.endsWith('.xlsx'));
+const bytes = readBinaryAttachment(excelFile.id);
+const workbook = XLSX.read(bytes);
+const firstSheet = workbook.Sheets[workbook.SheetNames[0]];
+const jsonData = XLSX.utils.sheet_to_json(firstSheet);
+\`\`\`
+`;

 // ============================================================================
 // Extract Document Tool
@@ -273,27 +285,3 @@ Structured plain text with page/sheet/slide delimiters in XML-like format:
 - Maximum file size: 50MB
 - CORS restrictions may block some URLs - if this happens, the error will guide you to help the user configure a CORS proxy
 - Format is automatically detected from file extension and Content-Type header`;
-
-// ============================================================================
-// Attachments Runtime Provider
-// ============================================================================
-
-export const ATTACHMENTS_RUNTIME_DESCRIPTION = `
-User Attachments (files the user added to the conversation):
-- listAttachments() - List all attachments, returns array of {id, fileName, mimeType, size}
-  * Example: const files = listAttachments(); // [{id: '...', fileName: 'data.xlsx', mimeType: '...', size: 12345}]
-- readTextAttachment(attachmentId) - Read attachment as text, returns string
-  * Use for: CSV, JSON, TXT, XML, and other text-based files
-  * Example: const csvContent = readTextAttachment(files[0].id);
-  * Example: const json = JSON.parse(readTextAttachment(jsonFile.id));
-- readBinaryAttachment(attachmentId) - Read attachment as binary data, returns Uint8Array
-  * Use for: Excel (.xlsx), images, PDFs, and other binary files
-  * Example: const xlsxBytes = readBinaryAttachment(files[0].id);
-  * Example: const XLSX = await import('https://esm.run/xlsx'); const workbook = XLSX.read(xlsxBytes);
-
-Common pattern - Process attachment and create download:
-  const files = listAttachments();
-  const csvFile = files.find(f => f.fileName.endsWith('.csv'));
-  const csvData = readTextAttachment(csvFile.id);
-  // Process csvData...
-  await returnDownloadableFile('processed-' + csvFile.fileName, processedData, 'text/csv');`;
@@ -8,7 +8,7 @@ import { createRef, type Ref, ref } from "lit/directives/ref.js";
 import { X } from "lucide";
 import type { ArtifactMessage } from "../../components/Messages.js";
 import type { SandboxRuntimeProvider } from "../../components/sandbox/SandboxRuntimeProvider.js";
-import { buildArtifactsDescription } from "../../prompts/tool-prompts.js";
+import { ARTIFACTS_TOOL_DESCRIPTION } from "../../prompts/prompts.js";
 import { i18n } from "../../utils/i18n.js";
 import type { ArtifactElement } from "./ArtifactElement.js";
 import { DocxArtifact } from "./DocxArtifact.js";
@@ -245,14 +245,12 @@ export class ArtifactsPanel extends LitElement {
 			label: "Artifacts",
 			name: "artifacts",
 			get description() {
-				// Get dynamic provider descriptions
-				const providers = self.runtimeProvidersFactory?.() || [];
-				const providerDocs = providers
-					.map((p) => p.getDescription?.())
-					.filter(Boolean)
-					.join("\n");
-
-				return buildArtifactsDescription(providerDocs || undefined);
+				const runtimeProviderDescriptions =
+					self
+						.runtimeProvidersFactory?.()
+						.map((d) => d.getDescription())
+						.filter((d) => d.trim().length > 0) || [];
+				return ARTIFACTS_TOOL_DESCRIPTION(runtimeProviderDescriptions);
 			},
 			parameters: artifactsParamsSchema,
 			// Execute mutates our local store and returns a plain output
@@ -3,7 +3,7 @@ import type { AgentTool, ToolResultMessage } from "@mariozechner/pi-ai";
 import { type Static, Type } from "@sinclair/typebox";
 import { createRef, ref } from "lit/directives/ref.js";
 import { FileText } from "lucide";
-import { EXTRACT_DOCUMENT_DESCRIPTION } from "../prompts/tool-prompts.js";
+import { EXTRACT_DOCUMENT_DESCRIPTION } from "../prompts/prompts.js";
 import { loadAttachment } from "../utils/attachment-utils.js";
 import { registerToolRenderer, renderCollapsibleHeader, renderHeader } from "./renderer-registry.js";
 import type { ToolRenderer, ToolRenderResult } from "./types.js";
@@ -5,7 +5,7 @@ import { createRef, ref } from "lit/directives/ref.js";
 import { Code } from "lucide";
 import { type SandboxFile, SandboxIframe, type SandboxResult } from "../components/SandboxedIframe.js";
 import type { SandboxRuntimeProvider } from "../components/sandbox/SandboxRuntimeProvider.js";
-import { JAVASCRIPT_REPL_DESCRIPTION } from "../prompts/tool-prompts.js";
+import { JAVASCRIPT_REPL_TOOL_DESCRIPTION } from "../prompts/prompts.js";
 import type { Attachment } from "../utils/attachment-utils.js";
 import { registerToolRenderer, renderCollapsibleHeader, renderHeader } from "./renderer-registry.js";
 import type { ToolRenderer, ToolRenderResult } from "./types.js";
@@ -132,7 +132,13 @@ export function createJavaScriptReplTool(): AgentTool<typeof javascriptReplSchem
 		name: "javascript_repl",
 		runtimeProvidersFactory: () => [], // default to empty array
 		sandboxUrlProvider: undefined, // optional, for browser extensions
-		description: JAVASCRIPT_REPL_DESCRIPTION,
+		get description() {
+			const runtimeProviderDescriptions =
+				this.runtimeProvidersFactory?.()
+					.map((d) => d.getDescription())
+					.filter((d) => d.trim().length > 0) || [];
+			return JAVASCRIPT_REPL_TOOL_DESCRIPTION(runtimeProviderDescriptions);
+		},
 		parameters: javascriptReplSchema,
 		execute: async function (_toolCallId: string, args: Static<typeof javascriptReplSchema>, signal?: AbortSignal) {
 			const result = await executeJavaScript(
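Both the artifacts tool and the REPL tool build their descriptions lazily through a `get description()` accessor instead of a fixed string. A self-contained sketch of why that matters (all names here are illustrative): because the provider docs are recomputed on every read, providers registered after the tool is created still show up in the description the model sees.

```typescript
interface RuntimeProvider {
	getDescription(): string;
}

function createReplTool(providersFactory: () => RuntimeProvider[]) {
	return {
		name: "javascript_repl",
		// Lazy getter: provider docs are re-read on every access, so the
		// description stays in sync with whatever providers exist right now.
		get description(): string {
			const docs = providersFactory()
				.map((p) => p.getDescription())
				.filter((d) => d.trim().length > 0); // drop empty/whitespace-only docs
			return ["# JavaScript REPL", "", "## Library functions", ...docs].join("\n");
		},
	};
}

const providers: RuntimeProvider[] = [];
const tool = createReplTool(() => providers);
console.log(tool.description.includes("listAttachments")); // false
providers.push({ getDescription: () => "listAttachments() - list uploaded files" });
console.log(tool.description.includes("listAttachments")); // true
```

The `filter` on trimmed length mirrors the diff's handling of providers like `ConsoleRuntimeProvider`, whose `getDescription()` returns an empty string and should contribute nothing to the prompt.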
@@ -7,6 +7,8 @@
 			"@mariozechner/pi-ai": ["./packages/ai/src/index.ts"],
 			"@mariozechner/pi-web-ui": ["./packages/web-ui/src/index.ts"],
 			"@mariozechner/pi-agent": ["./packages/agent/src/index.ts"],
+			"@mariozechner/pi-agent-old": ["./packages/agent-old/src/index.ts"],
+			"@mariozechner/coding-agent": ["./packages/coding-agent/src/index.ts"],
 			"@mariozechner/pi": ["./packages/pods/src/index.ts"]
 		}
 	},