docs: clarify auto-compaction options and fix flicker claim

- Expanded auto-compaction: switch to bigger model and either continue or summarize
- Fixed planned feature: MORE flicker than Claude Code (not less)
Mario Zechner 2025-11-12 23:22:40 +01:00
parent cf2a0d1c22
commit c4102c7b81

@@ -460,10 +460,10 @@ Things that might happen eventually:
 - **Custom/local models**: Support for Ollama, llama.cpp, vLLM, SGLang, LM Studio via JSON config file
 - **Auto-compaction**: Currently, watch the context percentage at the bottom. When it approaches 80%, either:
   - Ask the agent to write a summary .md file you can load in a new session
-  - Switch to a model with bigger context (e.g., Gemini) using `/model` and have it summarize the session
+  - Switch to a model with bigger context (e.g., Gemini) using `/model` and either continue with that model, or let it summarize the session to a .md file to be loaded in a new session
 - **Message queuing**: Core engine supports it, just needs UI wiring
 - **Better RPC mode docs**: It works, you'll figure it out (see `test/rpc-example.ts`)
-- **Less flicker than Claude Code**: One day...
+- **More flicker than Claude Code**: One day...
 ## License
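The "watch the context percentage and act near 80%" workflow from the updated bullet could one day become the auto-compaction feature itself. A minimal sketch of that trigger logic, where every name and the 0.8 threshold are illustrative assumptions rather than the project's actual API:

```typescript
// Hypothetical sketch of an auto-compaction trigger; the interface,
// function names, and 80% threshold are assumptions for illustration.
interface ContextStatus {
  usedTokens: number; // tokens consumed so far in the session
  maxTokens: number;  // model's context window size
}

// Suggests an action once context usage approaches the limit:
// either keep going, or summarize / switch to a bigger-context model.
function compactionAction(
  status: ContextStatus,
  threshold = 0.8,
): "continue" | "summarize-or-switch" {
  const usage = status.usedTokens / status.maxTokens;
  return usage >= threshold ? "summarize-or-switch" : "continue";
}

// At 85% usage the sketch recommends summarizing or switching models.
console.log(compactionAction({ usedTokens: 85_000, maxTokens: 100_000 }));
```

Until something like this lands, the manual steps in the diff (summary `.md` file, or `/model` to a bigger-context model) remain the way to handle a filling context window.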