feat(coding-agent): allow starting CLI with prompt in interactive mode (#46)

BREAKING CHANGE: Passing a prompt on the command line now starts interactive
mode with the prompt pre-submitted, instead of exiting after completion.
Use --print or -p to get the previous non-interactive behavior.

- Add --print / -p flag for non-interactive mode
- Update runInteractiveMode to accept initial messages
- Update README documentation
- Fix Model Selection Priority docs to include --models scope
Mario Zechner, 2025-11-21 21:57:35 +01:00
parent 3018b01460
commit e89e657045
3 changed files with 48 additions and 15 deletions


@@ -8,6 +8,10 @@
 - **Model Cycling with Thinking Levels**: The `--models` flag now supports thinking level syntax (e.g., `--models sonnet:high,haiku:low`). When cycling models with `Ctrl+P`, the associated thinking level is automatically applied. The first model in the scope is used as the initial model when starting a new session. Both model and thinking level changes are now saved to session and settings for persistence. ([#47](https://github.com/badlogic/pi-mono/pull/47))
 - **`--thinking` Flag**: New CLI flag to set thinking level directly (e.g., `--thinking high`). Valid values: `off`, `minimal`, `low`, `medium`, `high`. Takes highest priority over all other thinking level sources. ([#45](https://github.com/badlogic/pi-mono/issues/45))
 
+### Breaking
+
+- **Interactive Mode with Initial Prompt**: Passing a prompt on the command line (e.g., `pi "List files"`) now starts interactive mode with the prompt pre-submitted, instead of exiting after completion. Use `--print` or `-p` to get the previous non-interactive behavior (e.g., `pi -p "List files"`). This matches Claude CLI (`-p`) and Codex (`exec`) behavior. ([#46](https://github.com/badlogic/pi-mono/issues/46))
+
 ### Fixed
 
 - **Slash Command Autocomplete**: Fixed issue where pressing Enter on a highlighted slash command suggestion (e.g., typing `/mod` with `/model` highlighted) would submit the partial text instead of executing the selected command. Now Enter applies the completion and submits in one action. ([#49](https://github.com/badlogic/pi-mono/issues/49))


@@ -250,10 +250,11 @@ You can add custom HTTP headers to bypass Cloudflare bot detection, add authenti
 
 When starting `pi`, models are selected in this order:
 
 1. **CLI args**: `--provider` and `--model` flags
-2. **Restored from session**: If using `--continue` or `--resume`
-3. **Saved default**: From `~/.pi/agent/settings.json` (set when you select a model with `/model`)
-4. **First available**: First model with a valid API key
-5. **None**: Allowed in interactive mode (shows error on message submission)
+2. **First from `--models` scope**: If `--models` is provided (skipped when using `--continue` or `--resume`)
+3. **Restored from session**: If using `--continue` or `--resume`
+4. **Saved default**: From `~/.pi/agent/settings.json` (set when you select a model with `/model`)
+5. **First available**: First model with a valid API key
+6. **None**: Allowed in interactive mode (shows error on message submission)
 
 ### Provider Defaults
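The updated priority list amounts to a first-match scan over candidate sources. A minimal sketch of that idea (illustrative only — `ModelSource` and `selectModel` are hypothetical names, not pi's actual implementation):

```typescript
// Hypothetical first-match resolver mirroring the documented priority order.
// Each source returns a model id, or undefined if that source has nothing.
type ModelSource = () => string | undefined;

function selectModel(sources: ModelSource[]): string | undefined {
	for (const source of sources) {
		const model = source();
		if (model !== undefined) {
			return model;
		}
	}
	return undefined; // "None": tolerated in interactive mode
}

// Priority order from the docs: CLI args, --models scope, session,
// saved default, first model with a valid API key.
const selected = selectModel([
	() => undefined, // 1. no --provider/--model flags given
	() => "sonnet",  // 2. first entry of the --models scope
	() => "haiku",   // 3. session model (never reached here)
	() => undefined, // 4. no saved default
	() => undefined, // 5. no key-validated fallback
]);
```

Because the scan stops at the first hit, a `--models` scope shadows the session model unless `--continue`/`--resume` removes it from the list, which is exactly the "skipped when using `--continue` or `--resume`" caveat above.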
@@ -652,11 +653,14 @@ Custom system prompt. Can be:
 If the argument is a valid file path, the file contents will be used as the system prompt. Otherwise, the text is used directly. Project context files and datetime are automatically appended.
 
 **--mode <mode>**
-Output mode for non-interactive usage. Options:
+Output mode for non-interactive usage (implies `--print`). Options:
 - `text` (default): Output only the final assistant message text
 - `json`: Stream all agent events as JSON (one event per line). Events are emitted by `@mariozechner/pi-agent` and include message updates, tool executions, and completions
 - `rpc`: JSON mode plus stdin listener for headless operation. Send JSON commands on stdin: `{"type":"prompt","message":"..."}` or `{"type":"abort"}`. See [test/rpc-example.ts](test/rpc-example.ts) for a complete example
 
+**--print, -p**
+Non-interactive mode: process the prompt(s) and exit. Without this flag, passing a prompt starts interactive mode with the prompt pre-submitted. Similar to Claude's `-p` flag and Codex's `exec` command.
+
 **--no-session**
 Don't save session (ephemeral mode)
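The `rpc` commands documented above are newline-delimited JSON. A small sketch of producing them (the `RpcCommand` type and `encodeCommand` helper are illustrative names assumed from the documented shapes, not part of pi's API):

```typescript
// The two stdin command shapes documented for --mode rpc,
// serialized as one JSON object per line.
type RpcCommand =
	| { type: "prompt"; message: string }
	| { type: "abort" };

function encodeCommand(cmd: RpcCommand): string {
	// One JSON object per line, as the stdin listener expects.
	return JSON.stringify(cmd) + "\n";
}

const promptLine = encodeCommand({ type: "prompt", message: "List files" });
const abortLine = encodeCommand({ type: "abort" });
```

A driver process would write these lines to the pi child process's stdin and read the JSON event stream from its stdout; see [test/rpc-example.ts](test/rpc-example.ts) for the complete pattern.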
@@ -701,10 +705,13 @@ Show help message
 
 # Start interactive mode
 pi
 
-# Single message mode (text output)
+# Interactive mode with initial prompt (stays running after completion)
 pi "List all .ts files in src/"
 
-# JSON mode - stream all agent events
+# Non-interactive mode (process prompt and exit)
+pi -p "List all .ts files in src/"
+
+# JSON mode - stream all agent events (non-interactive)
 pi --mode json "List all .ts files in src/"
 
 # RPC mode - headless operation (see test/rpc-example.ts)


@@ -47,6 +47,7 @@ interface Args {
 	noSession?: boolean;
 	session?: string;
 	models?: string[];
+	print?: boolean;
 	messages: string[];
 }
@@ -94,6 +95,8 @@ function parseArgs(args: string[]): Args {
 				),
 			);
 		}
+	} else if (arg === "--print" || arg === "-p") {
+		result.print = true;
 	} else if (!arg.startsWith("-")) {
 		result.messages.push(arg);
 	}
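In isolation, the rule this branch adds is: `--print`/`-p` sets a flag, and any bare (non-dash) argument becomes a prompt message. A stripped-down sketch of just that behavior (illustrative; the real `parseArgs` handles many more flags):

```typescript
// Minimal model of the new parsing rule: --print/-p toggles the flag,
// bare arguments are collected as prompt messages, other flags ignored here.
interface MiniArgs {
	print?: boolean;
	messages: string[];
}

function parsePrintAndMessages(args: string[]): MiniArgs {
	const result: MiniArgs = { messages: [] };
	for (const arg of args) {
		if (arg === "--print" || arg === "-p") {
			result.print = true;
		} else if (!arg.startsWith("-")) {
			result.messages.push(arg);
		}
	}
	return result;
}

const mini = parsePrintAndMessages(["-p", "List all .ts files in src/"]);
```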
@@ -114,6 +117,7 @@ ${chalk.bold("Options:")}
   --api-key <key>         API key (defaults to env vars)
   --system-prompt <text>  System prompt (default: coding assistant prompt)
   --mode <mode>           Output mode: text (default), json, or rpc
+  --print, -p             Non-interactive mode: process prompt and exit
   --continue, -c          Continue previous session
   --resume, -r            Select a session to resume
   --session <path>        Use specific session file
@@ -123,13 +127,16 @@ ${chalk.bold("Options:")}
   --help, -h              Show this help
 
 ${chalk.bold("Examples:")}
-  # Interactive mode (no messages = interactive TUI)
+  # Interactive mode
   pi
 
-  # Single message
+  # Interactive mode with initial prompt
   pi "List all .ts files in src/"
 
-  # Multiple messages
+  # Non-interactive mode (process and exit)
+  pi -p "List all .ts files in src/"
+
+  # Multiple messages (interactive)
   pi "Read package.json" "What dependencies do we have?"
 
   # Continue previous session
@@ -503,6 +510,7 @@ async function runInteractiveMode(
 	modelFallbackMessage: string | null = null,
 	newVersion: string | null = null,
 	scopedModels: Array<{ model: Model<Api>; thinkingLevel: ThinkingLevel }> = [],
+	initialMessages: string[] = [],
 ): Promise<void> {
 	const renderer = new TuiRenderer(
 		agent,
@@ -525,6 +533,16 @@
 		renderer.showWarning(modelFallbackMessage);
 	}
 
+	// Process initial messages if provided (from CLI args)
+	for (const message of initialMessages) {
+		try {
+			await agent.prompt(message);
+		} catch (error: unknown) {
+			const errorMessage = error instanceof Error ? error.message : "Unknown error occurred";
+			renderer.showError(errorMessage);
+		}
+	}
+
 	// Interactive loop
 	while (true) {
 		const userInput = await renderer.getUserInput();
@@ -532,9 +550,10 @@
 		// Process the message - agent.prompt will add user message and trigger state updates
 		try {
 			await agent.prompt(userInput);
-		} catch (error: any) {
+		} catch (error: unknown) {
 			// Display error in the TUI by adding an error message to the chat
-			renderer.showError(error.message || "Unknown error occurred");
+			const errorMessage = error instanceof Error ? error.message : "Unknown error occurred";
+			renderer.showError(errorMessage);
 		}
 	}
 }
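The `catch (error: unknown)` change works because TypeScript forces `unknown` to be narrowed before any property access, unlike `any`. The narrowing step can be extracted as a standalone helper (`toErrorMessage` is a hypothetical name, not part of the actual codebase):

```typescript
// Narrow an unknown catch value to a displayable string,
// mirroring the pattern used in the TUI loop above.
function toErrorMessage(error: unknown): string {
	return error instanceof Error ? error.message : "Unknown error occurred";
}

const fromError = toErrorMessage(new Error("boom"));
const fromValue = toErrorMessage(42); // thrown non-Error values get the fallback
```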
@@ -722,7 +741,9 @@ export async function main(args: string[]) {
 	}
 
 	// Determine mode early to know if we should print messages and fail early
-	const isInteractive = parsed.messages.length === 0 && parsed.mode === undefined;
+	// Interactive mode: no --print flag and no --mode flag
+	// Having initial messages doesn't make it non-interactive anymore
+	const isInteractive = !parsed.print && parsed.mode === undefined;
 	const mode = parsed.mode || "text";
 	const shouldPrintMessages = isInteractive || mode === "text";
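The new decision can be isolated as a pure function over the parsed flags: only `--print` or an explicit `--mode` switches pi to non-interactive mode, and the presence of prompt messages no longer matters. A sketch (`Flags` and `decideInteractive` are illustrative names, not the real code):

```typescript
// Interactive unless --print or an explicit --mode was given;
// prompt messages alone no longer force non-interactive mode.
interface Flags {
	print?: boolean;
	mode?: "text" | "json" | "rpc";
	messages: string[];
}

function decideInteractive(flags: Flags): boolean {
	return !flags.print && flags.mode === undefined;
}

const promptOnly = decideInteractive({ messages: ["List files"] });             // new behavior: stays interactive
const withPrint = decideInteractive({ print: true, messages: ["List files"] }); // old behavior via -p
const withMode = decideInteractive({ mode: "json", messages: [] });             // --mode implies --print
```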
@@ -967,7 +988,7 @@ export async function main(args: string[]) {
 			console.log(chalk.dim(`Model scope: ${modelList} ${chalk.gray("(Ctrl+P to cycle)")}`));
 		}
 
-		// No messages and not RPC - use TUI
+		// Interactive mode - use TUI (may have initial messages from CLI args)
 		await runInteractiveMode(
 			agent,
 			sessionManager,
@@ -977,9 +998,10 @@ export async function main(args: string[]) {
 			modelFallbackMessage,
 			newVersion,
 			scopedModels,
+			parsed.messages,
 		);
 	} else {
-		// CLI mode with messages
+		// Non-interactive mode (--print flag or --mode flag)
 		await runSingleShotMode(agent, sessionManager, parsed.messages, mode);
 	}
 }