mirror of
https://github.com/getcompanion-ai/co-mono.git
synced 2026-04-15 10:05:14 +00:00
docs: update README.md, hooks.md, and CHANGELOG for steer()/followUp() API
- Fix settings-selector descriptions to explain one-at-a-time vs all
- Update README.md message queuing section, settings example, and table
- Update hooks.md: hasPendingMessages, sendMessage options, triggerTurn example
- Add Theme/ThemeColor export and hasPendingMessages rename to CHANGELOG
parent 8c227052d3
commit 9f2e6ac5eb

4 changed files with 36 additions and 15 deletions
@@ -15,6 +15,7 @@
 - `queuedMessageCount` → `pendingMessageCount`
 - `getQueuedMessages()` → `getSteeringMessages()` and `getFollowUpMessages()`
 - `clearQueue()` now returns `{ steering: string[], followUp: string[] }`
+- `hasQueuedMessages()` → `hasPendingMessages()`
 - **Hook API signature changed**: `pi.sendMessage()` second parameter changed from `triggerTurn?: boolean` to `options?: { triggerTurn?, deliverAs? }`. Use `deliverAs: "followUp"` for follow-up delivery. Affects both hooks and internal `sendHookMessage()` method.
 - **RPC API changes**:
   - `queue_message` command → `steer` and `follow_up` commands
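The renames in the hunk above can be illustrated with a small migration sketch. The `ctx` object below is a hand-rolled stand-in for the hook context, not pi's implementation; only the method names and the `clearQueue()` return shape are taken from the changelog.

```typescript
// Hypothetical stand-in for the hook context, used only to illustrate
// the renamed API and the new clearQueue() return shape.
interface ClearedQueues {
  steering: string[];
  followUp: string[];
}

const ctx = {
  pendingMessageCount: 2, // formerly `queuedMessageCount`
  // formerly hasQueuedMessages()
  hasPendingMessages(): boolean {
    return this.pendingMessageCount > 0;
  },
  // Previously returned a flat string[]; now both queues come back separately.
  clearQueue(): ClearedQueues {
    this.pendingMessageCount = 0;
    return { steering: ["refactor this function"], followUp: ["then run the tests"] };
  },
};

if (ctx.hasPendingMessages()) {
  const cleared = ctx.clearQueue();
  // Flatten when the old single-list behavior is still needed:
  const all = [...cleared.steering, ...cleared.followUp];
  console.log(all.length); // 2
}
```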
@@ -25,6 +26,7 @@
 ### Added
 
 - Alt+Enter keybind to queue follow-up messages while agent is streaming
+- `Theme` and `ThemeColor` types now exported for hooks using `ctx.ui.custom()`
 - Terminal window title now displays "pi - dirname" to identify which project session you're in ([#407](https://github.com/badlogic/pi-mono/pull/407) by [@kaofelix](https://github.com/kaofelix))
 
 ### Fixed
@@ -188,7 +188,7 @@ The agent reads, writes, and edits files, and executes commands via bash.
 
 | Command | Description |
 |---------|-------------|
-| `/settings` | Open settings menu (thinking, theme, queue mode, toggles) |
+| `/settings` | Open settings menu (thinking, theme, message delivery modes, toggles) |
 | `/model` | Switch models mid-session (fuzzy search, arrow keys, Enter to select) |
 | `/export [file]` | Export session to self-contained HTML |
 | `/share` | Upload session as secret GitHub gist, get shareable URL (requires `gh` CLI) |
@@ -214,7 +214,11 @@ The agent reads, writes, and edits files, and executes commands via bash.
 
 **Multi-line paste:** Pasted content is collapsed to `[paste #N <lines> lines]` but sent in full.
 
-**Message queuing:** Submit messages while the agent is working. They queue and process based on queue mode (configurable via `/settings`). Press Escape to abort and restore queued messages to editor.
+**Message queuing:** Submit messages while the agent is working:
+
+- **Enter** queues a *steering* message, delivered after current tool execution (interrupts remaining tools)
+- **Alt+Enter** queues a *follow-up* message, delivered only after the agent finishes all work
+
+Both modes are configurable via `/settings`: "one-at-a-time" delivers messages one by one waiting for responses, "all" delivers all queued messages at once. Press Escape to abort and restore queued messages to editor.
 
 ### Keyboard Shortcuts
@@ -499,7 +503,8 @@ Global `~/.pi/agent/settings.json` stores persistent preferences:
   "defaultModel": "claude-sonnet-4-20250514",
   "defaultThinkingLevel": "medium",
   "enabledModels": ["anthropic/*", "*gpt*", "gemini-2.5-pro:high"],
-  "queueMode": "one-at-a-time",
+  "steeringMode": "one-at-a-time",
+  "followUpMode": "one-at-a-time",
   "shellPath": "C:\\path\\to\\bash.exe",
   "hideThinkingBlock": false,
   "collapseChangelog": false,
@@ -531,7 +536,8 @@ Global `~/.pi/agent/settings.json` stores persistent preferences:
 | `defaultModel` | Default model ID | - |
 | `defaultThinkingLevel` | Thinking level: `off`, `minimal`, `low`, `medium`, `high`, `xhigh` | - |
 | `enabledModels` | Model patterns for cycling. Supports glob patterns (`github-copilot/*`, `*sonnet*`) and fuzzy matching. Same as `--models` CLI flag | - |
-| `queueMode` | Message queue mode: `all` or `one-at-a-time` | `one-at-a-time` |
+| `steeringMode` | Steering message delivery: `all` or `one-at-a-time` | `one-at-a-time` |
+| `followUpMode` | Follow-up message delivery: `all` or `one-at-a-time` | `one-at-a-time` |
 | `shellPath` | Custom bash path (Windows) | auto-detected |
 | `hideThinkingBlock` | Hide thinking blocks in output (Ctrl+T to toggle) | `false` |
 | `collapseChangelog` | Show condensed changelog after update | `false` |
@@ -689,7 +695,13 @@ export default function (pi: HookAPI) {
 
 **Sending messages from hooks:**
 
-Use `pi.sendMessage(message, triggerTurn?)` to inject messages into the session. Messages are persisted as `CustomMessageEntry` and sent to the LLM. If the agent is streaming, the message is queued; otherwise a new agent loop starts if `triggerTurn` is true.
+Use `pi.sendMessage(message, options?)` to inject messages into the session. Messages are persisted as `CustomMessageEntry` and sent to the LLM.
+
+Options:
+- `triggerTurn`: If true and agent is idle, starts a new agent turn. Default: false.
+- `deliverAs`: When agent is streaming, controls delivery timing:
+  - `"steer"` (default): Delivered after current tool execution, interrupts remaining tools.
+  - `"followUp"`: Delivered only after agent finishes all work.
 
 ```typescript
 import * as fs from "node:fs";
@@ -565,13 +565,13 @@ Abort the current agent operation (fire-and-forget, does not wait):
 await ctx.abort();
 ```
 
-### ctx.hasQueuedMessages()
+### ctx.hasPendingMessages()
 
-Check if there are messages queued (user typed while agent was streaming):
+Check if there are messages pending (user typed while agent was streaming):
 
 ```typescript
-if (ctx.hasQueuedMessages()) {
-  // Skip interactive prompt, let queued message take over
+if (ctx.hasPendingMessages()) {
+  // Skip interactive prompt, let pending messages take over
   return;
 }
 ```
@@ -638,7 +638,7 @@ const result = await ctx.navigateTree("entry-id-456", {
 
 Subscribe to events. See [Events](#events) for all event types.
 
-### pi.sendMessage(message, triggerTurn?)
+### pi.sendMessage(message, options?)
 
 Inject a message into the session. Creates a `CustomMessageEntry` that participates in the LLM context.
 
@@ -648,12 +648,17 @@ pi.sendMessage({
   content: "Message text", // string or (TextContent | ImageContent)[]
   display: true, // Show in TUI
   details: { ... }, // Optional metadata (not sent to LLM)
-}, triggerTurn); // If true, triggers LLM response
+}, {
+  triggerTurn: true, // If true and agent is idle, triggers LLM response
+  deliverAs: "steer", // "steer" (default) or "followUp" when agent is streaming
+});
 ```
 
 **Storage and timing:**
 - The message is appended to the session file immediately as a `CustomMessageEntry`
-- If the agent is currently streaming, the message is queued and appended after the current turn
+- If the agent is currently streaming:
+  - `deliverAs: "steer"` (default): Delivered after current tool execution, interrupts remaining tools
+  - `deliverAs: "followUp"`: Delivered only after agent finishes all work
 - If `triggerTurn` is true and the agent is idle, a new agent loop starts
 
 **LLM context:**
@@ -700,7 +705,7 @@ pi.registerCommand("stats", {
 
 For long-running commands (e.g., LLM calls), use `ctx.ui.custom()` with a loader. See [examples/hooks/qna.ts](../examples/hooks/qna.ts).
 
-To trigger LLM after command, call `pi.sendMessage(..., true)`.
+To trigger LLM after command, call `pi.sendMessage(..., { triggerTurn: true })`.
 
 ### pi.registerMessageRenderer(customType, renderer)
 
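The pattern the hunk above describes (trigger an LLM turn after a command finishes) can be sketched as follows. The `pi` object here is a minimal mock that only records calls, not the real HookAPI; the `registerCommand`/`sendMessage` shapes follow the docs above but the mock's behavior is an assumption.

```typescript
// Minimal mock of the hook API surface, shaped after the docs above.
// Real hooks receive a HookAPI instance from pi; this mock only records calls.
type SendOptions = { triggerTurn?: boolean; deliverAs?: "steer" | "followUp" };
type HookMessage = { content: string; display?: boolean };

const sent: { message: HookMessage; options?: SendOptions }[] = [];

const pi = {
  registerCommand(_name: string, spec: { run: () => void }) {
    spec.run(); // the mock invokes the command immediately
  },
  sendMessage(message: HookMessage, options?: SendOptions) {
    sent.push({ message, options });
  },
};

// After the command's work is done, inject a message and trigger an LLM turn:
pi.registerCommand("stats", {
  run() {
    pi.sendMessage(
      { content: "Summarize the stats output above", display: true },
      { triggerTurn: true },
    );
  },
});
```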
@@ -131,14 +131,16 @@ export class SettingsSelectorComponent extends Container {
 {
   id: "steering-mode",
   label: "Steering mode",
-  description: "How to deliver steering messages (Enter while streaming)",
+  description:
+    "Enter while streaming queues steering messages. 'one-at-a-time': deliver one, wait for response. 'all': deliver all at once.",
   currentValue: config.steeringMode,
   values: ["one-at-a-time", "all"],
 },
 {
   id: "follow-up-mode",
   label: "Follow-up mode",
-  description: "How to deliver follow-up messages (queued until agent finishes)",
+  description:
+    "Alt+Enter queues follow-up messages until agent stops. 'one-at-a-time': deliver one, wait for response. 'all': deliver all at once.",
   currentValue: config.followUpMode,
   values: ["one-at-a-time", "all"],
 },