mirror of
https://github.com/getcompanion-ai/co-mono.git
synced 2026-04-15 13:03:42 +00:00
docs: clarify auto-compaction options and fix flicker claim
- Expanded auto-compaction: switch to bigger model and either continue or summarize
- Fixed planned feature: MORE flicker than Claude Code (not less)
This commit is contained in:
parent
cf2a0d1c22
commit
c4102c7b81
1 changed file with 2 additions and 2 deletions
@@ -460,10 +460,10 @@ Things that might happen eventually:
 - **Custom/local models**: Support for Ollama, llama.cpp, vLLM, SGLang, LM Studio via JSON config file
 - **Auto-compaction**: Currently, watch the context percentage at the bottom. When it approaches 80%, either:
   - Ask the agent to write a summary .md file you can load in a new session
-  - Switch to a model with bigger context (e.g., Gemini) using `/model` and have it summarize the session
+  - Switch to a model with bigger context (e.g., Gemini) using `/model` and either continue with that model, or let it summarize the session to a .md file to be loaded in a new session
 - **Message queuing**: Core engine supports it, just needs UI wiring
 - **Better RPC mode docs**: It works, you'll figure it out (see `test/rpc-example.ts`)
-- **Less flicker than Claude Code**: One day...
+- **More flicker than Claude Code**: One day...
 
 ## License
 
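The auto-compaction bullet in the diff describes a manual workflow: watch the context percentage and act when it nears 80%. As a rough sketch of what an automated trigger might look like, here is a minimal TypeScript check; the names (`shouldCompact`, the token parameters, the threshold constant) are illustrative assumptions, not part of the co-mono codebase:

```typescript
// The ~80% mark mentioned in the README; hypothetical constant name.
const COMPACTION_THRESHOLD = 0.8;

// Returns true once context usage crosses the threshold, i.e. when the
// user would normally summarize or switch models via `/model`.
function shouldCompact(usedTokens: number, windowTokens: number): boolean {
  return usedTokens / windowTokens >= COMPACTION_THRESHOLD;
}

// Example: 13,000 of a 16,000-token window is ~81% — time to compact.
console.log(shouldCompact(13000, 16000)); // true
console.log(shouldCompact(8000, 16000)); // false
```

An automatic version would call a check like this after each model turn, then either summarize into a file or hand off to a larger-context model, mirroring the two manual options the README lists.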