docs(coding-agent): add #1375 changelog entry and extension event docs fixes #1375

Mario Zechner 2026-02-12 20:34:24 +01:00
parent ff5148e7cc
commit 6da488a5aa
2 changed files with 48 additions and 1 deletions


@@ -6,6 +6,10 @@
- `ContextUsage.tokens` and `ContextUsage.percent` are now `number | null`. After compaction, context token count is unknown until the next LLM response, so these fields return `null`. Extensions that read `ContextUsage` must handle the `null` case. Removed `usageTokens`, `trailingTokens`, and `lastUsageIndex` fields from `ContextUsage` (implementation details that should not have been public) ([#1382](https://github.com/badlogic/pi-mono/pull/1382) by [@ferologics](https://github.com/ferologics))
### Added
- Added extension event forwarding for message and tool execution lifecycles (`message_start`, `message_update`, `message_end`, `tool_execution_start`, `tool_execution_update`, `tool_execution_end`) ([#1375](https://github.com/badlogic/pi-mono/pull/1375) by [@sumeet](https://github.com/sumeet))
### Fixed
- Fixed context usage percentage in footer showing stale pre-compaction values. After compaction the footer now shows `?/200k` until the next LLM response provides accurate usage ([#1382](https://github.com/badlogic/pi-mono/pull/1382) by [@ferologics](https://github.com/ferologics))
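The nullable `ContextUsage` change above means extensions can no longer assume token counts are present. A minimal sketch of the null handling (the `ContextUsage` interface here is reduced to the two documented fields, and `formatUsage` is a hypothetical helper, not part of the pi API):

```typescript
// Reduced shape of the documented fields: both are null right after
// compaction, until the next LLM response reports fresh usage.
interface ContextUsage {
	tokens: number | null;
	percent: number | null;
}

// Hypothetical footer formatter: renders "?/200k" while usage is unknown.
function formatUsage(usage: ContextUsage, maxTokens: number): string {
	const max = `${Math.round(maxTokens / 1000)}k`;
	if (usage.tokens === null) return `?/${max}`;
	return `${Math.round(usage.tokens / 1000)}k/${max}`;
}
```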


@@ -237,6 +237,7 @@ user sends prompt ────────────────────
├─► (skill/template expansion if not handled) │
├─► before_agent_start (can inject message, modify system prompt)
├─► agent_start │
├─► message_start / message_update / message_end │
│ │
│ ┌─── turn (repeats while LLM calls tools) ───┐ │
│ │ │ │
@@ -245,7 +246,9 @@ user sends prompt ────────────────────
│ │ │ │
│ │ LLM responds, may call tools: │ │
│ │ ├─► tool_call (can block) │ │
│ │ │ tool executes │ │
│ │ ├─► tool_execution_start │ │
│ │ ├─► tool_execution_update │ │
│ │ ├─► tool_execution_end │ │
│ │ └─► tool_result (can modify) │ │
│ │ │ │
│ └─► turn_end │ │
@@ -434,6 +437,46 @@ pi.on("turn_end", async (event, ctx) => {
});
```
#### message_start / message_update / message_end
Fired for message lifecycle updates.
- `message_start` and `message_end` fire for user, assistant, and toolResult messages.
- `message_update` fires for assistant streaming updates.
```typescript
pi.on("message_start", async (event, ctx) => {
// event.message
});
pi.on("message_update", async (event, ctx) => {
// event.message
// event.assistantMessageEvent (token-by-token stream event)
});
pi.on("message_end", async (event, ctx) => {
// event.message
});
```
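A typical use of these events is reassembling the streamed assistant text. A minimal sketch, assuming a simple `text_delta` shape for the stream event (the real `assistantMessageEvent` type may carry more variants):

```typescript
// Assumed minimal stream-event shape; hypothetical, for illustration only.
type StreamEvent = { type: "text_delta"; delta: string } | { type: "other" };

// Accumulates streamed assistant text between message_start and message_end.
class StreamAccumulator {
	private text = "";

	onStart(): void {
		this.text = ""; // new message: reset the buffer
	}

	onUpdate(ev: StreamEvent): void {
		if (ev.type === "text_delta") this.text += ev.delta;
	}

	onEnd(): string {
		return this.text; // full assistant text for this message
	}
}
```

Wired up, `onStart`/`onUpdate`/`onEnd` would be called from the `message_start`, `message_update`, and `message_end` handlers respectively, filtered to assistant messages.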
#### tool_execution_start / tool_execution_update / tool_execution_end
Fired for tool execution lifecycle updates.
```typescript
pi.on("tool_execution_start", async (event, ctx) => {
// event.toolCallId, event.toolName, event.args
});
pi.on("tool_execution_update", async (event, ctx) => {
// event.toolCallId, event.toolName, event.args, event.partialResult
});
pi.on("tool_execution_end", async (event, ctx) => {
// event.toolCallId, event.toolName, event.result, event.isError
});
```
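Since start and end events share a `toolCallId`, an extension can correlate them, for example to time tool executions. A sketch of that bookkeeping (the `ToolTimer` class is hypothetical; timestamps are injectable for testability):

```typescript
// Correlates tool_execution_start/end pairs by toolCallId to measure duration.
class ToolTimer {
	private starts = new Map<string, number>();
	readonly durations: { toolName: string; ms: number }[] = [];

	start(toolCallId: string, now: number = Date.now()): void {
		this.starts.set(toolCallId, now);
	}

	// Returns elapsed ms, or null if no matching start was seen.
	end(toolCallId: string, toolName: string, now: number = Date.now()): number | null {
		const t0 = this.starts.get(toolCallId);
		if (t0 === undefined) return null;
		this.starts.delete(toolCallId);
		const ms = now - t0;
		this.durations.push({ toolName, ms });
		return ms;
	}
}
```

`start` would be called from the `tool_execution_start` handler and `end` from `tool_execution_end`, passing `event.toolCallId` and `event.toolName`.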
#### context
Fired before each LLM call. Modify messages non-destructively. See [session.md](session.md) for message types.
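"Non-destructively" means a context handler should return a new message array rather than mutating the one it receives. A sketch under that assumption (the `Message` shape here is a minimal stand-in; see session.md for the real types, and `dropOldToolResults` is a hypothetical helper):

```typescript
// Minimal stand-in for the real message type documented in session.md.
interface Message {
	role: string;
	content: string;
}

// Returns a new array keeping only the last `keepLast` toolResult messages,
// leaving the input array untouched.
function dropOldToolResults(messages: Message[], keepLast: number): Message[] {
	const toolResults = messages.filter((m) => m.role === "toolResult");
	const keep = new Set(toolResults.slice(-keepLast));
	return messages.filter((m) => m.role !== "toolResult" || keep.has(m));
}
```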