feat(opencode): add SSE event replay with Last-Event-ID support

Nathan Flurry 2026-02-07 12:57:58 -08:00
parent 52f5d07185
commit 783e2d6692
19 changed files with 337 additions and 69 deletions

.claude/commands/release.md (new file, 165 lines)

@ -0,0 +1,165 @@
# Release Agent
You are a release agent for the Gigacode project (sandbox-agent). Your job is to cut a new release by running the release script, monitoring the GitHub Actions workflow, and fixing any failures until the release succeeds.
## Step 1: Gather Release Information
Ask the user what type of release they want to cut:
- **patch** - Bug fixes (e.g., 0.1.8 -> 0.1.9)
- **minor** - New features (e.g., 0.1.8 -> 0.2.0)
- **major** - Breaking changes (e.g., 0.1.8 -> 1.0.0)
- **rc** - Release candidate (e.g., 0.2.0-rc.1)
For **rc** releases, also ask:
1. What base version the RC is for (e.g., 0.2.0). If the user doesn't specify, determine it by bumping the minor version from the current version.
2. What RC number (e.g., 1, 2, 3). If the user doesn't specify, check existing git tags to auto-determine the next RC number:
```bash
git tag -l "v<base_version>-rc.*" | sort -V
```
If no prior RC tags exist for that base version, use `rc.1`. Otherwise, increment the highest existing RC number.
The final RC version string is `<base_version>-rc.<number>` (e.g., `0.2.0-rc.1`).
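The tag check above can be wrapped in a small helper. This is a sketch, not part of the release script: `next_rc` is a hypothetical name, and it reads the tag list on stdin so it can be fed from `git tag -l`.

```shell
# Hypothetical helper (not in the repo): compute the next RC version
# for a base version, from existing tags piped in on stdin.
next_rc() {
  base="$1"
  # Extract the numeric rc suffix from tags like v0.2.0-rc.2
  last=$(sed -n "s/^v${base}-rc\.\([0-9][0-9]*\)\$/\1/p" | sort -n | tail -n 1)
  if [ -z "$last" ]; then
    # No prior RC tags for this base version
    echo "${base}-rc.1"
  else
    echo "${base}-rc.$((last + 1))"
  fi
}

# usage: git tag -l "v0.2.0-rc.*" | next_rc 0.2.0
```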
## Step 2: Confirm Release Details
Before proceeding, display the release details to the user and ask for explicit confirmation:
- Current version (read from `Cargo.toml` workspace.package.version)
- New version
- Current branch
- Whether it will be tagged as "latest" (RC releases are never tagged as latest)
Do NOT proceed without user confirmation.
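Reading the current version can be sketched as a one-liner. This is a sketch, not how the release script itself does it; `current_version` is a hypothetical helper, and it assumes the first `version = "..."` line in the file is the `workspace.package` one.

```shell
# Hypothetical helper: read the workspace version from a Cargo.toml.
# Assumes the first `version = "..."` line is the workspace.package one.
current_version() {
  sed -n 's/^version = "\(.*\)"$/\1/p' "$1" | head -n 1
}
```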
## Step 3: Run the Release Script (Setup Local)
The release script handles version bumping, local checks, committing, pushing, and triggering the workflow.
For **major**, **minor**, or **patch** releases:
```bash
echo "yes" | ./scripts/release/main.ts --<type> --phase setup-local
```
For **rc** releases (using explicit version):
```bash
echo "yes" | ./scripts/release/main.ts --version <version> --phase setup-local
```
Where `<type>` is `major`, `minor`, or `patch`, and `<version>` is the full RC version string like `0.2.0-rc.1`.
The `--phase setup-local` runs these steps in order:
1. Confirms release details (interactive prompt - piping "yes" handles this)
2. Updates version in all files (Cargo.toml, package.json files)
3. Runs local checks (cargo check, cargo fmt, pnpm typecheck)
4. Git commits with message `chore(release): update version to X.Y.Z`
5. Git pushes
6. Triggers the GitHub Actions workflow
If local checks fail at step 3, fix the issues in the codebase, then re-run using `--only-steps` to avoid re-running already-completed steps:
```bash
echo "yes" | ./scripts/release/main.ts --version <version> --only-steps run-local-checks,git-commit,git-push,trigger-workflow
```
## Step 4: Monitor the GitHub Actions Workflow
After the workflow is triggered, wait 5 seconds for it to register, then begin polling.
### Find the workflow run
```bash
gh run list --workflow=release.yaml --limit=1 --json databaseId,status,conclusion,createdAt,url
```
Verify the run was created recently (within the last 2 minutes) to confirm you are monitoring the correct run. Save the `databaseId` as the run ID.
### Poll for completion
Poll every 15 seconds using:
```bash
gh run view <run-id> --json status,conclusion
```
Report progress to the user periodically (every ~60 seconds or when status changes). The status values are:
- `queued` / `in_progress` / `waiting` - Still running, keep polling
- `completed` - Done, check `conclusion`
When `status` is `completed`, check `conclusion`:
- `success` - Release succeeded! Proceed to Step 6.
- `failure` - Proceed to Step 5.
- `cancelled` - Inform the user and stop.
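The status and conclusion checks above reduce to a small decision helper. A sketch, assuming any other conclusion (e.g. `skipped`) is treated like `cancelled`; the `next_action` name is hypothetical:

```shell
# Hypothetical helper: map a run's status/conclusion to the next action.
next_action() {
  status="$1"; conclusion="$2"
  case "$status" in
    queued|in_progress|waiting) echo "keep-polling" ;;
    completed)
      case "$conclusion" in
        success) echo "report-success" ;;   # Step 6
        failure) echo "handle-failure" ;;   # Step 5
        *)       echo "inform-and-stop" ;;  # cancelled or anything else
      esac ;;
    *) echo "keep-polling" ;;
  esac
}
```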
## Step 5: Handle Workflow Failures
If the workflow fails:
### 5a. Get failure logs
```bash
gh run view <run-id> --log-failed
```
### 5b. Analyze the error
Read the failure logs carefully. Common failure categories:
- **Build failures** (cargo build, TypeScript compilation) - Fix the code
- **Formatting issues** (cargo fmt) - Run `cargo fmt` and commit
- **Test failures** - Fix the failing tests
- **Publishing failures** (crates.io, npm) - These may be transient; check if retry will help
- **Docker build failures** - Check Dockerfile or build script issues
- **Infrastructure/transient failures** (network timeouts, rate limits) - Just re-trigger without code changes
### 5c. Fix and re-push
If a code fix is needed:
1. Make the fix in the codebase
2. Amend the release commit (since the release version commit is the most recent):
```bash
git add -A
git commit --amend --no-edit
git push --force-with-lease
```
IMPORTANT: Use `--force-with-lease` (not `--force`) for safety. Amend the commit rather than creating a new one so the release stays as a single version-bump commit.
3. Re-trigger the workflow:
```bash
gh workflow run .github/workflows/release.yaml \
-f version=<version> \
-f latest=<true|false> \
--ref <branch>
```
Where `<branch>` is the current branch (usually `main`). Set `latest` to `false` for RC releases, `true` for stable releases that are newer than the current latest tag.
4. Return to Step 4 to monitor the new run.
If no code fix is needed (transient failure), skip straight to re-triggering the workflow (item 3 above).
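The `latest` rule above can be sketched as a tiny helper. This is a simplification: it only checks whether the version is an RC, and ignores the "newer than the current latest tag" comparison, which would need a separate version check; the `latest_flag` name is hypothetical.

```shell
# Hypothetical helper: derive the `latest` workflow input from the
# version string alone (RC versions are never tagged latest).
latest_flag() {
  case "$1" in
    *-rc.*) echo "false" ;;
    *)      echo "true" ;;
  esac
}
```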
### 5d. Retry limit
If the workflow has failed **5 times**, stop and report all errors to the user. Ask whether they want to continue retrying or abort the release. Do not retry infinitely.
## Step 6: Report Success
When the workflow completes successfully:
1. Print the GitHub Actions run URL
2. Print the new version number
3. Suggest running post-release testing: "Run `/project:post-release-testing` to verify the release works correctly."
## Important Notes
- The product name is "Gigacode" (capital G, lowercase c). The CLI binary is `gigacode` (lowercase).
- Do not include co-authors in any commit messages.
- Use conventional commits style (e.g., `chore(release): update version to X.Y.Z`).
- Keep commit messages to a single line.
- The release script requires `tsx` to run (it's a TypeScript file with a shebang).
- Always work on the current branch. Releases are typically cut from `main`.


@ -3,7 +3,7 @@ resolver = "2"
members = ["server/packages/*", "gigacode"]
[workspace.package]
version = "0.1.8"
version = "0.1.9"
edition = "2021"
authors = [ "Rivet Gaming, LLC <developer@rivet.gg>" ]
license = "Apache-2.0"
@ -12,12 +12,12 @@ description = "Universal API for automatic coding agents in sandboxes. Supports
[workspace.dependencies]
# Internal crates
sandbox-agent = { version = "0.1.8", path = "server/packages/sandbox-agent" }
sandbox-agent-error = { version = "0.1.8", path = "server/packages/error" }
sandbox-agent-agent-management = { version = "0.1.8", path = "server/packages/agent-management" }
sandbox-agent-agent-credentials = { version = "0.1.8", path = "server/packages/agent-credentials" }
sandbox-agent-universal-agent-schema = { version = "0.1.8", path = "server/packages/universal-agent-schema" }
sandbox-agent-extracted-agent-schemas = { version = "0.1.8", path = "server/packages/extracted-agent-schemas" }
sandbox-agent = { version = "0.1.9", path = "server/packages/sandbox-agent" }
sandbox-agent-error = { version = "0.1.9", path = "server/packages/error" }
sandbox-agent-agent-management = { version = "0.1.9", path = "server/packages/agent-management" }
sandbox-agent-agent-credentials = { version = "0.1.9", path = "server/packages/agent-credentials" }
sandbox-agent-universal-agent-schema = { version = "0.1.9", path = "server/packages/universal-agent-schema" }
sandbox-agent-extracted-agent-schemas = { version = "0.1.9", path = "server/packages/extracted-agent-schemas" }
# Serialization
serde = { version = "1.0", features = ["derive"] }


@ -10,7 +10,7 @@
"license": {
"name": "Apache-2.0"
},
"version": "0.1.8"
"version": "0.1.9"
},
"servers": [
{


@ -1,6 +1,6 @@
{
"name": "@sandbox-agent/cli-shared",
"version": "0.1.8",
"version": "0.1.9",
"description": "Shared helpers for sandbox-agent CLI and SDK",
"license": "Apache-2.0",
"repository": {


@ -1,6 +1,6 @@
{
"name": "@sandbox-agent/cli",
"version": "0.1.8",
"version": "0.1.9",
"description": "CLI for sandbox-agent - run AI coding agents in sandboxes",
"license": "Apache-2.0",
"repository": {


@ -1,6 +1,6 @@
{
"name": "@sandbox-agent/cli-darwin-arm64",
"version": "0.1.8",
"version": "0.1.9",
"description": "sandbox-agent CLI binary for macOS ARM64",
"license": "Apache-2.0",
"repository": {


@ -1,6 +1,6 @@
{
"name": "@sandbox-agent/cli-darwin-x64",
"version": "0.1.8",
"version": "0.1.9",
"description": "sandbox-agent CLI binary for macOS x64",
"license": "Apache-2.0",
"repository": {


@ -1,6 +1,6 @@
{
"name": "@sandbox-agent/cli-linux-arm64",
"version": "0.1.8",
"version": "0.1.9",
"description": "sandbox-agent CLI binary for Linux arm64",
"license": "Apache-2.0",
"repository": {


@ -1,6 +1,6 @@
{
"name": "@sandbox-agent/cli-linux-x64",
"version": "0.1.8",
"version": "0.1.9",
"description": "sandbox-agent CLI binary for Linux x64",
"license": "Apache-2.0",
"repository": {


@ -1,6 +1,6 @@
{
"name": "@sandbox-agent/cli-win32-x64",
"version": "0.1.8",
"version": "0.1.9",
"description": "sandbox-agent CLI binary for Windows x64",
"license": "Apache-2.0",
"repository": {


@ -1,6 +1,6 @@
{
"name": "@sandbox-agent/gigacode",
"version": "0.1.8",
"version": "0.1.9",
"description": "Gigacode CLI (sandbox-agent with OpenCode attach by default)",
"license": "Apache-2.0",
"repository": {


@ -1,6 +1,6 @@
{
"name": "@sandbox-agent/gigacode-darwin-arm64",
"version": "0.1.8",
"version": "0.1.9",
"description": "gigacode CLI binary for macOS arm64",
"license": "Apache-2.0",
"repository": {


@ -1,6 +1,6 @@
{
"name": "@sandbox-agent/gigacode-darwin-x64",
"version": "0.1.8",
"version": "0.1.9",
"description": "gigacode CLI binary for macOS x64",
"license": "Apache-2.0",
"repository": {


@ -1,6 +1,6 @@
{
"name": "@sandbox-agent/gigacode-linux-arm64",
"version": "0.1.8",
"version": "0.1.9",
"description": "gigacode CLI binary for Linux arm64",
"license": "Apache-2.0",
"repository": {


@ -1,6 +1,6 @@
{
"name": "@sandbox-agent/gigacode-linux-x64",
"version": "0.1.8",
"version": "0.1.9",
"description": "gigacode CLI binary for Linux x64",
"license": "Apache-2.0",
"repository": {


@ -1,6 +1,6 @@
{
"name": "@sandbox-agent/gigacode-win32-x64",
"version": "0.1.8",
"version": "0.1.9",
"description": "gigacode CLI binary for Windows x64",
"license": "Apache-2.0",
"repository": {


@ -1,6 +1,6 @@
{
"name": "sandbox-agent",
"version": "0.1.8",
"version": "0.1.9",
"description": "Universal API for automatic coding agents in sandboxes. Supprots Claude Code, Codex, OpenCode, and Amp.",
"license": "Apache-2.0",
"repository": {


@ -410,7 +410,9 @@ pub fn stop(host: &str, port: u16) -> Result<(), CliError> {
// No PID file - but check if daemon is actually running via health check
// This can happen if PID file was deleted but daemon is still running
if check_health(&base_url, None)? {
eprintln!("daemon is running but PID file missing; finding process on port {port}...");
eprintln!(
"daemon is running but PID file missing; finding process on port {port}..."
);
if let Some(pid) = find_process_on_port(port) {
eprintln!("found daemon process {pid}");
return stop_process(pid, host, port, &pid_path);


@ -4,11 +4,12 @@
//! stubbed responses with deterministic helpers for snapshot testing. A minimal
//! in-memory state tracks sessions/messages/ptys to keep behavior coherent.
use std::collections::{BTreeMap, HashMap, HashSet};
use std::collections::{BTreeMap, HashMap, HashSet, VecDeque};
use std::convert::Infallible;
use std::str::FromStr;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::sync::Mutex as StdMutex;
use axum::body::Body;
use axum::extract::{Path, Query, State};
@ -45,10 +46,18 @@ static MESSAGE_COUNTER: AtomicU64 = AtomicU64::new(1);
static PART_COUNTER: AtomicU64 = AtomicU64::new(1);
static PTY_COUNTER: AtomicU64 = AtomicU64::new(1);
static PROJECT_COUNTER: AtomicU64 = AtomicU64::new(1);
const OPENCODE_EVENT_CHANNEL_SIZE: usize = 2048;
const OPENCODE_EVENT_LOG_SIZE: usize = 4096;
const OPENCODE_DEFAULT_MODEL_ID: &str = "mock";
const OPENCODE_DEFAULT_PROVIDER_ID: &str = "mock";
const OPENCODE_DEFAULT_AGENT_MODE: &str = "build";
#[derive(Clone, Debug)]
struct OpenCodeStreamEvent {
id: u64,
payload: Value,
}
#[derive(Clone, Debug)]
struct OpenCodeCompatConfig {
fixed_time_ms: Option<i64>,
@ -278,13 +287,15 @@ pub struct OpenCodeState {
questions: Mutex<HashMap<String, OpenCodeQuestionRecord>>,
session_runtime: Mutex<HashMap<String, OpenCodeSessionRuntime>>,
session_streams: Mutex<HashMap<String, bool>>,
event_broadcaster: broadcast::Sender<Value>,
event_broadcaster: broadcast::Sender<OpenCodeStreamEvent>,
event_log: StdMutex<VecDeque<OpenCodeStreamEvent>>,
next_event_id: AtomicU64,
model_cache: Mutex<Option<OpenCodeModelCache>>,
}
impl OpenCodeState {
pub fn new() -> Self {
let (event_broadcaster, _) = broadcast::channel(256);
let (event_broadcaster, _) = broadcast::channel(OPENCODE_EVENT_CHANNEL_SIZE);
let project_id = format!("proj_{}", PROJECT_COUNTER.fetch_add(1, Ordering::Relaxed));
Self {
config: OpenCodeCompatConfig::from_env(),
@ -297,16 +308,44 @@ impl OpenCodeState {
session_runtime: Mutex::new(HashMap::new()),
session_streams: Mutex::new(HashMap::new()),
event_broadcaster,
event_log: StdMutex::new(VecDeque::new()),
next_event_id: AtomicU64::new(1),
model_cache: Mutex::new(None),
}
}
pub fn subscribe(&self) -> broadcast::Receiver<Value> {
fn subscribe(&self) -> broadcast::Receiver<OpenCodeStreamEvent> {
self.event_broadcaster.subscribe()
}
pub fn emit_event(&self, event: Value) {
let _ = self.event_broadcaster.send(event);
let stream_event = OpenCodeStreamEvent {
id: self.next_event_id.fetch_add(1, Ordering::Relaxed),
payload: event,
};
if let Ok(mut log) = self.event_log.lock() {
log.push_back(stream_event.clone());
if log.len() > OPENCODE_EVENT_LOG_SIZE {
let overflow = log.len() - OPENCODE_EVENT_LOG_SIZE;
for _ in 0..overflow {
let _ = log.pop_front();
}
}
}
let _ = self.event_broadcaster.send(stream_event);
}
fn buffered_events_after(&self, last_event_id: Option<u64>) -> Vec<OpenCodeStreamEvent> {
let Some(last_event_id) = last_event_id else {
return Vec::new();
};
let Ok(log) = self.event_log.lock() else {
return Vec::new();
};
log.iter()
.filter(|event| event.id > last_event_id)
.cloned()
.collect()
}
fn now_ms(&self) -> i64 {
@ -2986,6 +3025,13 @@ async fn oc_config_providers(State(state): State<Arc<OpenCodeAppState>>) -> impl
(StatusCode::OK, Json(providers))
}
fn parse_last_event_id(headers: &HeaderMap) -> Option<u64> {
headers
.get("last-event-id")
.and_then(|value| value.to_str().ok())
.and_then(|value| value.trim().parse::<u64>().ok())
}
#[utoipa::path(
get,
path = "/event",
@ -2997,6 +3043,7 @@ async fn oc_event_subscribe(
headers: HeaderMap,
Query(query): Query<DirectoryQuery>,
) -> Sse<impl futures::Stream<Item = Result<Event, Infallible>>> {
let last_event_id = parse_last_event_id(&headers);
let receiver = state.opencode.subscribe();
let directory = state
.opencode
@ -3013,35 +3060,61 @@ async fn oc_event_subscribe(
"branch": branch,
}
}));
let replay_events = state.opencode.buffered_events_after(last_event_id);
let replay_cursor = replay_events
.last()
.map(|event| event.id)
.or(last_event_id)
.unwrap_or(0);
let heartbeat_payload = json!({
"type": "server.heartbeat",
"properties": {}
});
let stream = stream::unfold(
(receiver, interval(std::time::Duration::from_secs(30))),
move |(mut rx, mut ticker)| {
(
receiver,
interval(std::time::Duration::from_secs(30)),
VecDeque::from(replay_events),
replay_cursor,
),
move |(mut rx, mut ticker, mut replay, replay_cursor)| {
let heartbeat = heartbeat_payload.clone();
async move {
tokio::select! {
_ = ticker.tick() => {
let sse_event = Event::default()
.json_data(&heartbeat)
.unwrap_or_else(|_| Event::default().data("{}"));
Some((Ok(sse_event), (rx, ticker)))
}
event = rx.recv() => {
match event {
Ok(event) => {
let sse_event = Event::default()
.json_data(&event)
.unwrap_or_else(|_| Event::default().data("{}"));
Some((Ok(sse_event), (rx, ticker)))
if let Some(event) = replay.pop_front() {
let sse_event = Event::default()
.id(event.id.to_string())
.json_data(&event.payload)
.unwrap_or_else(|_| Event::default().data("{}"));
return Some((Ok(sse_event), (rx, ticker, replay, replay_cursor)));
}
loop {
tokio::select! {
_ = ticker.tick() => {
let sse_event = Event::default()
.json_data(&heartbeat)
.unwrap_or_else(|_| Event::default().data("{}"));
return Some((Ok(sse_event), (rx, ticker, replay, replay_cursor)));
}
event = rx.recv() => {
match event {
Ok(event) => {
if event.id <= replay_cursor {
continue;
}
let sse_event = Event::default()
.id(event.id.to_string())
.json_data(&event.payload)
.unwrap_or_else(|_| Event::default().data("{}"));
return Some((Ok(sse_event), (rx, ticker, replay, replay_cursor)));
}
Err(broadcast::error::RecvError::Lagged(skipped)) => {
warn!(skipped, "opencode event stream lagged");
return Some((Ok(Event::default().comment("lagged")), (rx, ticker, replay, replay_cursor)));
}
Err(broadcast::error::RecvError::Closed) => return None,
}
Err(broadcast::error::RecvError::Lagged(_)) => {
Some((Ok(Event::default().comment("lagged")), (rx, ticker)))
}
Err(broadcast::error::RecvError::Closed) => None,
}
}
}
@ -3063,6 +3136,7 @@ async fn oc_global_event(
headers: HeaderMap,
Query(query): Query<DirectoryQuery>,
) -> Sse<impl futures::Stream<Item = Result<Event, Infallible>>> {
let last_event_id = parse_last_event_id(&headers);
let receiver = state.opencode.subscribe();
let directory = state
.opencode
@ -3079,6 +3153,12 @@ async fn oc_global_event(
"branch": branch,
}
}));
let replay_events = state.opencode.buffered_events_after(last_event_id);
let replay_cursor = replay_events
.last()
.map(|event| event.id)
.or(last_event_id)
.unwrap_or(0);
let heartbeat_payload = json!({
"payload": {
@ -3087,31 +3167,52 @@ async fn oc_global_event(
}
});
let stream = stream::unfold(
(receiver, interval(std::time::Duration::from_secs(30))),
move |(mut rx, mut ticker)| {
(
receiver,
interval(std::time::Duration::from_secs(30)),
VecDeque::from(replay_events),
replay_cursor,
),
move |(mut rx, mut ticker, mut replay, replay_cursor)| {
let directory = directory.clone();
let heartbeat = heartbeat_payload.clone();
async move {
tokio::select! {
_ = ticker.tick() => {
let sse_event = Event::default()
.json_data(&heartbeat)
.unwrap_or_else(|_| Event::default().data("{}"));
Some((Ok(sse_event), (rx, ticker)))
}
event = rx.recv() => {
match event {
Ok(event) => {
let payload = json!({"directory": directory, "payload": event});
let sse_event = Event::default()
.json_data(&payload)
.unwrap_or_else(|_| Event::default().data("{}"));
Some((Ok(sse_event), (rx, ticker)))
if let Some(event) = replay.pop_front() {
let payload = json!({"directory": directory, "payload": event.payload});
let sse_event = Event::default()
.id(event.id.to_string())
.json_data(&payload)
.unwrap_or_else(|_| Event::default().data("{}"));
return Some((Ok(sse_event), (rx, ticker, replay, replay_cursor)));
}
loop {
tokio::select! {
_ = ticker.tick() => {
let sse_event = Event::default()
.json_data(&heartbeat)
.unwrap_or_else(|_| Event::default().data("{}"));
return Some((Ok(sse_event), (rx, ticker, replay, replay_cursor)));
}
event = rx.recv() => {
match event {
Ok(event) => {
if event.id <= replay_cursor {
continue;
}
let payload = json!({"directory": directory, "payload": event.payload});
let sse_event = Event::default()
.id(event.id.to_string())
.json_data(&payload)
.unwrap_or_else(|_| Event::default().data("{}"));
return Some((Ok(sse_event), (rx, ticker, replay, replay_cursor)));
}
Err(broadcast::error::RecvError::Lagged(skipped)) => {
warn!(skipped, "opencode global event stream lagged");
return Some((Ok(Event::default().comment("lagged")), (rx, ticker, replay, replay_cursor)));
}
Err(broadcast::error::RecvError::Closed) => return None,
}
Err(broadcast::error::RecvError::Lagged(_)) => {
Some((Ok(Event::default().comment("lagged")), (rx, ticker)))
}
Err(broadcast::error::RecvError::Closed) => None,
}
}
}