mirror of https://github.com/getcompanion-ai/co-mono.git
synced 2026-04-15 09:01:14 +00:00
This commit is contained in: parent ff5148e7cc, commit 6da488a5aa
2 changed files with 48 additions and 1 deletion
@@ -6,6 +6,10 @@
- `ContextUsage.tokens` and `ContextUsage.percent` are now `number | null`. After compaction, context token count is unknown until the next LLM response, so these fields return `null`. Extensions that read `ContextUsage` must handle the `null` case. Removed `usageTokens`, `trailingTokens`, and `lastUsageIndex` fields from `ContextUsage` (implementation details that should not have been public) ([#1382](https://github.com/badlogic/pi-mono/pull/1382) by [@ferologics](https://github.com/ferologics))
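Concretely, an extension reading `ContextUsage` now needs a null check. A minimal sketch, assuming a simplified stand-in for the real type (the `formatUsage` helper and the `?/200k` fallback string are illustrative, not part of the API):

```typescript
// Simplified stand-in for the real ContextUsage type (assumption for this sketch).
interface ContextUsage {
  tokens: number | null;  // null right after compaction, until the next LLM response
  percent: number | null; // null whenever tokens is null
}

// Hypothetical helper: render usage for a status line, handling the null case.
function formatUsage(usage: ContextUsage): string {
  if (usage.tokens === null || usage.percent === null) {
    return "?/200k"; // unknown until the next LLM response reports usage
  }
  return `${usage.tokens} tokens (${usage.percent.toFixed(1)}%)`;
}
```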

### Added

- Added extension event forwarding for message and tool execution lifecycles (`message_start`, `message_update`, `message_end`, `tool_execution_start`, `tool_execution_update`, `tool_execution_end`) ([#1375](https://github.com/badlogic/pi-mono/pull/1375) by [@sumeet](https://github.com/sumeet))

### Fixed

- Fixed context usage percentage in footer showing stale pre-compaction values. After compaction the footer now shows `?/200k` until the next LLM response provides accurate usage ([#1382](https://github.com/badlogic/pi-mono/pull/1382) by [@ferologics](https://github.com/ferologics))

@@ -237,6 +237,7 @@ user sends prompt ────────────────────
├─► (skill/template expansion if not handled) │
├─► before_agent_start (can inject message, modify system prompt)
├─► agent_start │
├─► message_start / message_update / message_end │
│ │
│ ┌─── turn (repeats while LLM calls tools) ───┐ │
│ │ │ │

@@ -245,7 +246,9 @@ user sends prompt ────────────────────

│ │ │ │
│ │ LLM responds, may call tools: │ │
│ │ ├─► tool_call (can block) │ │
│ │ │ tool executes │ │
│ │ ├─► tool_execution_start │ │
│ │ ├─► tool_execution_update │ │
│ │ ├─► tool_execution_end │ │
│ │ └─► tool_result (can modify) │ │
│ │ │ │
│ └─► turn_end │ │
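For extension authors, the turn loop above boils down to a fixed per-turn event sequence. The list below is transcribed straight from the diagram; nothing beyond it is implied:

```typescript
// Per-turn event order, transcribed from the lifecycle diagram above.
const turnEventOrder = [
  "tool_call",             // can block the tool call
  "tool_execution_start",  // tool begins executing
  "tool_execution_update", // streaming progress
  "tool_execution_end",    // tool finished
  "tool_result",           // can modify the result
  "turn_end",              // loop repeats while the LLM keeps calling tools
] as const;
```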

@@ -434,6 +437,46 @@ pi.on("turn_end", async (event, ctx) => {
});
```

#### message_start / message_update / message_end

Fired for message lifecycle updates.

- `message_start` and `message_end` fire for user, assistant, and toolResult messages.
- `message_update` fires for assistant streaming updates.

```typescript
pi.on("message_start", async (event, ctx) => {
  // event.message
});

pi.on("message_update", async (event, ctx) => {
  // event.message
  // event.assistantMessageEvent (token-by-token stream event)
});

pi.on("message_end", async (event, ctx) => {
  // event.message
});
```
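As a usage sketch, the three events can be combined to count how many streaming updates an assistant message produced. The minimal message shape below is an assumption for illustration; the handlers would be registered with `pi.on` as above:

```typescript
// Minimal message shape assumed for this sketch.
interface Msg {
  role: "user" | "assistant" | "toolResult";
}

let updateCount = 0;
const assistantUpdateCounts: number[] = [];

// Reset the counter when a message starts.
function onMessageStart(_event: { message: Msg }): void {
  updateCount = 0;
}

// Each message_update is one streaming increment.
function onMessageUpdate(_event: { message: Msg }): void {
  updateCount++;
}

// Record the total when an assistant message finishes.
function onMessageEnd(event: { message: Msg }): void {
  if (event.message.role === "assistant") {
    assistantUpdateCounts.push(updateCount);
  }
}
```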

#### tool_execution_start / tool_execution_update / tool_execution_end

Fired for tool execution lifecycle updates.

```typescript
pi.on("tool_execution_start", async (event, ctx) => {
  // event.toolCallId, event.toolName, event.args
});

pi.on("tool_execution_update", async (event, ctx) => {
  // event.toolCallId, event.toolName, event.args, event.partialResult
});

pi.on("tool_execution_end", async (event, ctx) => {
  // event.toolCallId, event.toolName, event.result, event.isError
});
```
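A common pattern is correlating `tool_execution_start` and `tool_execution_end` by `toolCallId`, for example to measure tool latency. A sketch using only the event fields named in the comments above (the explicit timestamp parameter is an assumption, added for testability):

```typescript
// In-flight executions keyed by toolCallId.
const started = new Map<string, number>();
const toolDurations: { toolName: string; ms: number }[] = [];

// Remember when each tool call started.
function onToolExecutionStart(
  event: { toolCallId: string; toolName: string },
  now: number = Date.now(),
): void {
  started.set(event.toolCallId, now);
}

// Match the end event to its start and record the elapsed time.
function onToolExecutionEnd(
  event: { toolCallId: string; toolName: string; isError: boolean },
  now: number = Date.now(),
): void {
  const t0 = started.get(event.toolCallId);
  if (t0 !== undefined) {
    toolDurations.push({ toolName: event.toolName, ms: now - t0 });
    started.delete(event.toolCallId);
  }
}
```

These handlers would be registered via `pi.on("tool_execution_start", ...)` and `pi.on("tool_execution_end", ...)` as shown above.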

#### context

Fired before each LLM call. Modify messages non-destructively. See [session.md](session.md) for message types.