mirror of
https://github.com/getcompanion-ai/co-mono.git
synced 2026-04-15 22:03:45 +00:00
Fix slash commands and hook commands during streaming
- Hook commands now execute immediately during streaming (they manage their own LLM interaction via pi.sendMessage())
- File-based slash commands are expanded and queued via steer/followUp during streaming
- prompt() accepts new streamingBehavior option ('steer' or 'followUp') for explicit queueing during streaming
- steer() and followUp() now expand file-based slash commands and error on hook commands
- RPC prompt command accepts optional streamingBehavior field
- Updated docs: rpc.md, sdk.md, CHANGELOG.md
fixes #420
This commit is contained in:
parent
308c0e0ec0
commit
e9cf3c1835
7 changed files with 207 additions and 52 deletions
@@ -41,6 +41,21 @@ With images:

```json
{"type": "prompt", "message": "What's in this image?", "images": [{"type": "image", "source": {"type": "base64", "mediaType": "image/png", "data": "..."}}]}
```

**During streaming**: If the agent is already streaming, you must specify `streamingBehavior` to queue the message:

```json
{"type": "prompt", "message": "New instruction", "streamingBehavior": "steer"}
```

- `"steer"`: Interrupt the agent mid-run. The message is delivered after the current tool execution; remaining tools are skipped.
- `"followUp"`: Wait until the agent finishes. The message is delivered only when the agent stops.

If the agent is streaming and no `streamingBehavior` is specified, the command returns an error.

**Hook commands**: If the message is a hook command (e.g., `/mycommand`), it executes immediately even during streaming. Hook commands manage their own LLM interaction via `pi.sendMessage()`.
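The hook-command path described above can be sketched roughly as follows. This is a minimal model based on the description, not the real SDK: the `HookContext` shape, the handler registry, and the argument handling are all assumptions.

```typescript
// Sketch of hook-command dispatch. The HookContext ("pi") shape and the
// registry are hypothetical stand-ins modeled on the description above.
type HookContext = { sendMessage: (text: string) => Promise<string> };
type HookHandler = (args: string, pi: HookContext) => Promise<void>;

const hooks = new Map<string, HookHandler>();

hooks.set("/mycommand", async (args, pi) => {
  // The hook drives its own LLM interaction; the session does not queue it.
  await pi.sendMessage(`Summarize: ${args}`);
});

// Returns true if the message was a registered hook command and was executed
// immediately, even while the agent is streaming.
async function tryRunHook(message: string, pi: HookContext): Promise<boolean> {
  const [name, ...rest] = message.split(" ");
  const handler = hooks.get(name);
  if (!handler) return false;
  await handler(rest.join(" "), pi);
  return true;
}
```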

**Slash commands**: File-based slash commands (from `.md` files) are expanded before sending/queueing.
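Expansion of a file-based slash command can be sketched like this. It is a hedged sketch: an in-memory map stands in for the `.md` files on disk, and the `$ARGUMENTS` substitution is an assumption, not necessarily the SDK's actual syntax.

```typescript
// Sketch: expand "/name args" using the body of a registered .md command.
// The map stands in for .md files on disk; "$ARGUMENTS" is an assumed syntax.
const slashCommands = new Map<string, string>([
  ["/review", "Review the following change carefully:\n$ARGUMENTS"],
]);

function expandSlashCommand(message: string): string {
  const [name, ...rest] = message.split(" ");
  const body = slashCommands.get(name);
  if (!body) return message; // not a slash command: send as-is
  return body.replace("$ARGUMENTS", rest.join(" "));
}
```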

Response:

```json
{"id": "req-1", "type": "response", "command": "prompt", "success": true}
```
@ -48,20 +63,35 @@ Response:
|
|||
|
||||
The `images` field is optional. Each image uses `ImageContent` format with base64 or URL source.
|
||||
|
||||
#### queue_message
|
||||
#### steer
|
||||
|
||||
Queue a message to be injected at the next agent turn. Queued messages are added to the conversation without triggering a new prompt. Useful for injecting context mid-conversation.
|
||||
Queue a steering message to interrupt the agent mid-run. Delivered after current tool execution, remaining tools are skipped. File-based slash commands are expanded. Hook commands are not allowed (use `prompt` instead).
|
||||
|
||||
```json
|
||||
{"type": "queue_message", "message": "Additional context"}
|
||||
{"type": "steer", "message": "Stop and do this instead"}
|
||||
```
|
||||
|
||||
Response:
|
||||
```json
|
||||
{"type": "response", "command": "queue_message", "success": true}
|
||||
{"type": "response", "command": "steer", "success": true}
|
||||
```
|
||||
|
||||
See [set_queue_mode](#set_queue_mode) for controlling how queued messages are processed.
|
||||
See [set_steering_mode](#set_steering_mode) for controlling how steering messages are processed.
|
||||
|
||||
#### follow_up
|
||||
|
||||
Queue a follow-up message to be processed after the agent finishes. Delivered only when agent has no more tool calls or steering messages. File-based slash commands are expanded. Hook commands are not allowed (use `prompt` instead).
|
||||
|
||||
```json
|
||||
{"type": "follow_up", "message": "After you're done, also do this"}
|
||||
```
|
||||
|
||||
Response:
|
||||
```json
|
||||
{"type": "response", "command": "follow_up", "success": true}
|
||||
```
|
||||
|
||||
See [set_follow_up_mode](#set_follow_up_mode) for controlling how follow-up messages are processed.
|
||||
|
||||
#### abort
|
||||
|
||||
|
|
@ -120,12 +150,13 @@ Response:
|
|||
"thinkingLevel": "medium",
|
||||
"isStreaming": false,
|
||||
"isCompacting": false,
|
||||
"queueMode": "all",
|
||||
"steeringMode": "all",
|
||||
"followUpMode": "one-at-a-time",
|
||||
"sessionFile": "/path/to/session.jsonl",
|
||||
"sessionId": "abc123",
|
||||
"autoCompactionEnabled": true,
|
||||
"messageCount": 5,
|
||||
"queuedMessageCount": 0
|
||||
"pendingMessageCount": 0
|
||||
}
|
||||
}
|
||||
```
|
||||
|
|
@ -253,23 +284,40 @@ Response:
|
|||
}
|
||||
```
|
||||
|
||||
### Queue Mode
|
||||
### Queue Modes
|
||||
|
||||
#### set_queue_mode
|
||||
#### set_steering_mode
|
||||
|
||||
Control how queued messages (from `queue_message`) are injected into the conversation.
|
||||
Control how steering messages (from `steer`) are delivered.
|
||||
|
||||
```json
|
||||
{"type": "set_queue_mode", "mode": "one-at-a-time"}
|
||||
{"type": "set_steering_mode", "mode": "one-at-a-time"}
|
||||
```
|
||||
|
||||
Modes:
|
||||
- `"all"`: Inject all queued messages at the next turn
|
||||
- `"one-at-a-time"`: Inject one queued message per turn (default)
|
||||
- `"all"`: Deliver all steering messages at the next interruption point
|
||||
- `"one-at-a-time"`: Deliver one steering message per interruption (default)
|
||||
|
||||
Response:
|
||||
```json
|
||||
{"type": "response", "command": "set_queue_mode", "success": true}
|
||||
{"type": "response", "command": "set_steering_mode", "success": true}
|
||||
```
|
||||
|
||||
#### set_follow_up_mode
|
||||
|
||||
Control how follow-up messages (from `follow_up`) are delivered.
|
||||
|
||||
```json
|
||||
{"type": "set_follow_up_mode", "mode": "one-at-a-time"}
|
||||
```
|
||||
|
||||
Modes:
|
||||
- `"all"`: Deliver all follow-up messages when agent finishes
|
||||
- `"one-at-a-time"`: Deliver one follow-up message per agent completion (default)
|
||||
|
||||
Response:
|
||||
```json
|
||||
{"type": "response", "command": "set_follow_up_mode", "success": true}
|
||||
```
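Both mode settings come down to how many queued messages are drained at a single delivery point (an interruption for steering, agent completion for follow-ups). A minimal sketch, with names that are illustrative rather than the SDK's internal API:

```typescript
// Sketch of the two delivery modes: "all" drains the whole queue at one
// delivery point; "one-at-a-time" hands over a single message per point.
// The queue type and function name are illustrative, not the SDK's internals.
type DeliveryMode = "all" | "one-at-a-time";

function drainQueue(queue: string[], mode: DeliveryMode): string[] {
  if (mode === "all") return queue.splice(0, queue.length);
  return queue.splice(0, 1); // empty array if nothing is queued
}
```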

### Compaction
@ -77,8 +77,13 @@ The session manages the agent lifecycle, message history, and event streaming.
|
|||
```typescript
|
||||
interface AgentSession {
|
||||
// Send a prompt and wait for completion
|
||||
// If streaming, requires streamingBehavior option to queue the message
|
||||
prompt(text: string, options?: PromptOptions): Promise<void>;
|
||||
|
||||
// Queue messages during streaming
|
||||
steer(text: string): Promise<void>; // Interrupt: delivered after current tool, skips remaining
|
||||
followUp(text: string): Promise<void>; // Wait: delivered only when agent finishes
|
||||
|
||||
// Subscribe to events (returns unsubscribe function)
|
||||
subscribe(listener: (event: AgentSessionEvent) => void): () => void;
|
||||
|
||||
|
|
@ -122,6 +127,41 @@ interface AgentSession {
|
|||
}
|
||||
```
|
||||
|
||||
### Prompting and Message Queueing
|
||||
|
||||
The `prompt()` method handles slash commands, hook commands, and message sending:
|
||||
|
||||
```typescript
|
||||
// Basic prompt (when not streaming)
|
||||
await session.prompt("What files are here?");
|
||||
|
||||
// With images
|
||||
await session.prompt("What's in this image?", {
|
||||
images: [{ type: "image", source: { type: "base64", mediaType: "image/png", data: "..." } }]
|
||||
});
|
||||
|
||||
// During streaming: must specify how to queue the message
|
||||
await session.prompt("Stop and do this instead", { streamingBehavior: "steer" });
|
||||
await session.prompt("After you're done, also check X", { streamingBehavior: "followUp" });
|
||||
```
|
||||
|
||||
**Behavior:**
|
||||
- **Hook commands** (e.g., `/mycommand`): Execute immediately, even during streaming. They manage their own LLM interaction via `pi.sendMessage()`.
|
||||
- **File-based slash commands** (from `.md` files): Expanded to their content before sending/queueing.
|
||||
- **During streaming without `streamingBehavior`**: Throws an error. Use `steer()` or `followUp()` directly, or specify the option.
|
||||
|
||||
For explicit queueing during streaming:
|
||||
|
||||
```typescript
|
||||
// Interrupt the agent (delivered after current tool, skips remaining tools)
|
||||
await session.steer("New instruction");
|
||||
|
||||
// Wait for agent to finish (delivered only when agent stops)
|
||||
await session.followUp("After you're done, also do this");
|
||||
```
|
||||
|
||||
Both `steer()` and `followUp()` expand file-based slash commands but error on hook commands (hook commands cannot be queued).
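Put together, the routing rules above can be modeled in a few lines. This is a simplified sketch: the `Session` shape, the queue fields, and the `isHookCommand` check are hypothetical stand-ins for the real SDK internals, used only to show the decision order.

```typescript
// Simplified routing model for queueing during streaming. The Session shape,
// queues, and the hook check are hypothetical, not the SDK's actual API.
interface Session {
  isStreaming: boolean;
  steeringQueue: string[];
  followUpQueue: string[];
}

// Hypothetical check; real hook commands are recognized via a registry.
const isHookCommand = (m: string) => m.startsWith("/hook:");

function queueDuringStreaming(
  s: Session,
  message: string,
  behavior?: "steer" | "followUp",
): string {
  if (isHookCommand(message)) {
    throw new Error("hook commands cannot be queued"); // steer/followUp reject them
  }
  if (!s.isStreaming) return "sent";
  if (behavior === "steer") { s.steeringQueue.push(message); return "steered"; }
  if (behavior === "followUp") { s.followUpQueue.push(message); return "queued"; }
  throw new Error("agent is streaming: specify streamingBehavior"); // prompt() errors
}
```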

### Agent and AgentState

The `Agent` class (from `@mariozechner/pi-agent-core`) handles the core LLM interaction. Access it via `session.agent`.