mirror of
https://github.com/getcompanion-ai/co-mono.git
synced 2026-04-17 08:00:59 +00:00
feat(agent): Comprehensive reasoning token support across providers
Added provider-specific reasoning/thinking token support for:

- OpenAI (o1, o3, gpt-5): Full reasoning events via Responses API, token counts via Chat Completions
- Groq: reasoning_format:"parsed" for Chat Completions, no summary support for Responses
- Gemini 2.5: extra_body.google.thinking_config with <thought> tag extraction
- OpenRouter: Unified reasoning parameter with message.reasoning field
- Anthropic: Limited support via OpenAI compatibility layer

Key improvements:

- Centralized provider detection based on baseURL
- parseReasoningFromMessage() extracts provider-specific reasoning content
- adjustRequestForProvider() handles provider-specific request modifications
- Smart reasoning support detection with caching per API type
- Comprehensive README documentation with provider support matrix

Fixes reasoning tokens not appearing for GPT-5 and other reasoning models.
This commit is contained in:
parent 62d9eefc2a
commit 99ce76d66e
5 changed files with 345 additions and 58 deletions
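The "centralized provider detection based on baseURL" from the commit message could be sketched like this (function name and host list are illustrative, not the actual implementation):

```typescript
type Provider = "openai" | "groq" | "gemini" | "openrouter" | "anthropic" | "unknown";

// Map a configured baseURL to a known provider; unknown hosts fall back to
// generic OpenAI-compatible behavior.
function detectProvider(baseURL: string): Provider {
  const host = new URL(baseURL).hostname;
  if (host.endsWith("api.openai.com")) return "openai";
  if (host.endsWith("api.groq.com")) return "groq";
  if (host.endsWith("generativelanguage.googleapis.com")) return "gemini";
  if (host.endsWith("openrouter.ai")) return "openrouter";
  if (host.endsWith("api.anthropic.com")) return "anthropic";
  return "unknown";
}
```

Keying off the hostname rather than the full URL keeps path variants like `/openai/v1` from affecting detection.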
@ -6,7 +6,7 @@ A comprehensive toolkit for managing Large Language Model (LLM) deployments and

- Terminal UI framework with differential rendering and interactive components
- AI agent framework with tool calling, session persistence, and multiple renderers
- GPU pod management CLI for automated vLLM deployment on various providers
- Support for OpenAI, Anthropic, Groq, OpenRouter, and compatible APIs
- Support for OpenAI, Anthropic, Groq, OpenRouter, Gemini, and compatible APIs
- Built-in file system tools for agentic AI capabilities

## Tech Stack
@ -1,5 +1,45 @@

- agent: token usage gets overwritten with each message that has usage data. However, if the latest data doesn't have a specific usage field, we record undefined, I think? Also, {"type":"token_usage","inputTokens":240,"outputTokens":35,"totalTokens":275,"cacheReadTokens":0,"cacheWriteTokens":0} doesn't contain reasoningTokens? Do we lack initialization?

- agent: test for basic functionality, including thinking, Completions & Responses API support for all the known providers and their endpoints.

- agent: token usage gets overwritten with each message that has usage data. However, if the latest data doesn't have a specific usage field, we record undefined, I think? Also, {"type":"token_usage","inputTokens":240,"outputTokens":35,"totalTokens":275,"cacheReadTokens":0,"cacheWriteTokens":0} doesn't contain reasoningTokens? Do we lack initialization? See case "token_usage": in the renderers; we probably need to check whether lastXXX > current and use lastXXX.
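A minimal sketch of the lastXXX-vs-current merge suggested above (the TokenUsage shape is inferred from the event fields in the todo; treat it as an assumption):

```typescript
interface TokenUsage {
  inputTokens: number;
  outputTokens: number;
  totalTokens: number;
  cacheReadTokens: number;
  cacheWriteTokens: number;
  reasoningTokens: number;
}

// Merge a possibly partial usage event into the last known usage: a missing
// field never resets a counter to undefined, and a stale or smaller value
// never lowers a counter we already recorded.
function mergeUsage(last: TokenUsage, next: Partial<TokenUsage>): TokenUsage {
  const merged = { ...last };
  for (const key of Object.keys(last) as (keyof TokenUsage)[]) {
    const value = next[key];
    if (typeof value === "number" && value > merged[key]) merged[key] = value;
  }
  return merged;
}
```

Starting from a fully initialized zeroed TokenUsage would also guarantee reasoningTokens always exists in the emitted event.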

- agent: groq responses api throws on second message

```
➜ pi-mono git:(main) ✗ npx tsx packages/agent/src/cli.ts --base-url https://api.groq.com/openai/v1 --api-key $GROQ_API_KEY --model openai/gpt-oss-120b --api responses
>> pi interactive chat <<<
Press Escape to interrupt while processing
Press CTRL+C to clear the text editor
Press CTRL+C twice quickly to exit

[user]
think step by step: what's 2+2?

[assistant]
[thinking]
The user asks "think step by step: what's 2+2?" They want a step-by-step reasoning. That's
trivial: 2+2=4. Provide answer with steps.

Sure! Let’s break it down:

1. Identify the numbers: We have the numbers 2 and 2.
2. Add the first number to the second:
3. Calculate:

2 + 2 = 4

Answer: 2 + 2 = 4.

[user]
what was your last thinking content?

[assistant]
[error] 400 `input`: `items[3]`: `role`: assistant role cannot be used with type='message'
(use EasyInputMessage format without type field)
```
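The error message itself suggests the workaround: replay assistant turns as plain role/content objects (EasyInputMessage shape) with no `type` field. A sketch of normalizing session history before resending, with hypothetical names:

```typescript
// Session history items may carry a `type` field from the Responses API.
type HistoryItem = { type?: string; role: "user" | "assistant"; content: string };

// Strip everything except role and content before sending the history back;
// destructuring drops the offending `type` field.
function toEasyInputMessages(history: HistoryItem[]): { role: string; content: string }[] {
  return history.map(({ role, content }) => ({ role, content }));
}
```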

- pods: if a pod is down and I run `pi list`, verifying processes says "All processes verified". But that can't be true, as we can no longer SSH into the pod to check.

- agent: start a new agent session. When I press CTRL+C, "Press Ctrl+C again to exit" appears above the text editor followed by an empty line. After about 1 second, the empty line disappears. We should either not show the empty line or always show it. Maybe the Ctrl+C hint should be displayed below the text editor.

- tui: in `npx tsx test/demo.ts`, using /exit or pressing CTRL+C does not work to exit the demo.

- agent: we need to make the system prompt and tools pluggable, and figure out the simplest way for users to define system prompts and toolkits. A toolkit could be a subset of the built-in tools, a mix of built-in and custom self-made tools, maybe MCP servers, and so on. Users should be able to write their tools in whatever language they fancy, which means something like process spawning plus a stdio communication transport would probably make the most sense; but then we're basically back at MCP, and MCP does not support interruptibility, which we need for the agent. If the agent invokes a tool and the user presses Escape in the interface, the tool invocation must be interrupted and whatever it's doing must stop, including killing all sub-processes. For stdio MCP servers this could be solved by spawning a process per server on startup (or whenever we load the tools) and reusing it for subsequent tool invocations; if the user interrupts, we just kill that process, assuming anything it's doing and any of its sub-processes are killed along the way. So tools could all be written as MCP servers, but that's a lot of overhead. It would also be nice to be able to provide a tool as just a bash script that takes some inputs and returns some outputs, and the same for Go apps or TypeScript apps invoked via npx tsx. Just make the barrier to entry for writing your own tools super fucking low, without necessarily going full MCP; but we also need to support MCP. Whatever we arrive at, we then need to take our built-in tools and see if they can be refactored to work with the new tool system.
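The interruptible tool invocation discussed above could be sketched as follows; the design is not settled in the todo, so names and shape are hypothetical. The key idea is running the tool in its own process group so one kill takes down its sub-processes too:

```typescript
import { spawn } from "node:child_process";

// Run an external tool and allow it to be interrupted (e.g. when the user
// presses Escape). detached: true puts the child in its own process group,
// so killing -pid signals the whole group, including sub-processes.
function runTool(cmd: string, args: string[], signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    const child = spawn(cmd, args, { detached: true });
    let output = "";
    child.stdout.on("data", (chunk) => (output += chunk));
    signal.addEventListener("abort", () => {
      // Negative pid targets the process group, not just the direct child.
      if (child.pid) process.kill(-child.pid, "SIGKILL");
      reject(new Error("tool invocation interrupted"));
    });
    child.on("error", reject);
    child.on("exit", (code) =>
      code === 0 ? resolve(output) : reject(new Error(`tool exited with code ${code}`)),
    );
  });
}
```

This works for POSIX process groups; the same escape hatch would let a bash-script tool, a Go binary, or an `npx tsx` tool all be interrupted uniformly.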
@ -1,6 +1,6 @@

# Fix Missing Thinking Tokens for GPT-5 and Anthropic Models

**Status:** AwaitingCommit

**Agent PID:** 27674

**Agent PID:** 41002

## Original Todo

agent: we do not get thinking tokens for gpt-5. possibly also not for anthropic models?

@ -25,6 +25,18 @@ The agent doesn't extract or report reasoning/thinking tokens from OpenAI's reas

- [x] Fix: Add reasoning support detection for Chat Completions API
- [x] Fix: Add correct summary parameter value and increase max_output_tokens for preflight check
- [x] Investigate: Chat Completions API has reasoning tokens but no thinking events
- [x] Debug: Add logging to understand gpt-5 response structure in responses API
- [x] Fix: Change reasoning summary from "auto" to "always" to ensure reasoning text is always returned
- [x] Fix: Set correct effort levels - "minimal" for responses API, "low" for completions API
- [x] Add note to README about Chat Completions API not returning thinking content
- [x] Add Gemini API example to README
- [x] Verify Gemini thinking token support and update README accordingly
- [x] Add special case for Gemini to include extra_body with thinking_config
- [x] Add special case for Groq responses API (doesn't support reasoning.summary)
- [x] Refactor: Create centralized provider-specific request adjustment function
- [x] Refactor: Extract message content parsing into parseReasoningFromMessage() function
- [x] Test: Verify Groq reasoning extraction works with refactored code
- [x] Test: Verify Gemini thinking extraction works with refactored code

## Notes

User reported that o3 model with responses API doesn't show reasoning tokens or thinking events.
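The centralized request adjustment ticked off in the checklist above might look roughly like this. The field paths (`reasoning_format: "parsed"`, `extra_body.google.thinking_config`) come from the commit message; `include_thoughts` and the function shape are assumptions:

```typescript
// Apply provider-specific request tweaks before sending a completion request.
function adjustRequestForProvider(
  provider: string,
  request: Record<string, unknown>,
): Record<string, unknown> {
  const adjusted: Record<string, unknown> = { ...request };
  if (provider === "groq") {
    // Groq Chat Completions returns parsed reasoning on message.reasoning.
    adjusted.reasoning_format = "parsed";
  } else if (provider === "gemini") {
    // Gemini 2.5 takes thinking config via a vendor extension field.
    adjusted.extra_body = { google: { thinking_config: { include_thoughts: true } } };
  }
  return adjusted;
}
```

Centralizing these tweaks keeps the call sites provider-agnostic: they build one request and hand it through the adjuster.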
@ -36,5 +48,11 @@ Fixed by:

5. Parsing both reasoning_text (o1/o3) and summary_text (gpt-5) formats
6. Displaying reasoning tokens in console and TUI renderers with ⚡ symbol
7. Properly handling reasoning_effort for Chat Completions API
8. Set correct effort levels: "minimal" for Responses API, "low" for Chat Completions API
9. Set summary to "always" for Responses API

**Important finding**: Chat Completions API by design only returns reasoning token *counts* but not the actual thinking/reasoning content for o1 models. This is expected behavior - only the Responses API exposes thinking events.

**Important findings**:

- Chat Completions API by design only returns reasoning token *counts* but not the actual thinking/reasoning content for o1 models. This is expected behavior - only the Responses API exposes thinking events.
- GPT-5 models currently return empty summary arrays even with `summary: "detailed"` - the model indicates it "can't share step-by-step reasoning". This appears to be a model limitation/behavior rather than a code issue.
- The reasoning tokens ARE being used and counted correctly when the model chooses to use them.
- With effort="minimal" and summary="detailed", gpt-5 sometimes chooses not to use reasoning at all for simple questions.
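The commit message also mentions `<thought>` tag extraction for Gemini. A minimal sketch of splitting a message into thinking and answer parts (the tag-based format is taken from the commit message; the helper itself is hypothetical):

```typescript
// Pull <thought>...</thought> blocks out of Gemini message content, returning
// the collected thinking text and the cleaned visible answer.
function extractThoughts(content: string): { thinking: string; text: string } {
  const thoughts: string[] = [];
  const text = content.replace(/<thought>([\s\S]*?)<\/thought>/g, (_match, body) => {
    thoughts.push(String(body).trim());
    return "";
  });
  return { thinking: thoughts.join("\n"), text: text.trim() };
}
```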