mirror of https://github.com/harivansh-afk/sandbox-agent.git (synced 2026-04-17 02:04:13 +00:00)

wip
commit 0fbea6ce61, parent 3263d4f5e1
166 changed files with 6675 additions and 7105 deletions
@ -15,8 +15,8 @@ The root cause of the duplicate HTTP request is unknown. It is not `appWorkspace
### Attempted Fix / Workaround
1. Made `completeAppGithubAuth` clear `oauthState`/`oauthStateExpiresAt` immediately after validation and before `exchangeCode`, so any duplicate request fails the state check instead of hitting GitHub with a consumed code.
2. Split `syncGithubSessionFromToken` into a fast path (`initGithubSession` — exchange code, get viewer, store token+identity) and a slow path (`syncGithubOrganizations` — list orgs, list installations, sync each organization).
3. `completeAppGithubAuth` now uses the fast path and enqueues the slow org sync to the organization workflow queue (`organization.command.syncGithubSession`, fire-and-forget). The HTTP callback returns a 302 redirect in ~2s instead of ~18s, eliminating the proxy timeout window.
4. The frontend already polls `getAppSnapshot` every 500ms when any org has `syncStatus === "syncing"`, so the deferred sync is transparent to the user.
5. `bootstrapAppGithubSession` (dev-only) still calls the full synchronous `syncGithubSessionFromToken` since proxy timeouts are not a concern in dev and it needs the session fully populated before returning.
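The single-use state check in item 1 can be sketched as follows (hypothetical names and state shape; the real handler lives in `completeAppGithubAuth` and persists this in actor storage):

```typescript
// Sketch: consume the OAuth state BEFORE exchanging the code, so a duplicate
// callback fails the state check instead of hitting GitHub with a consumed code.
interface AuthState {
  oauthState: string | null;
  oauthStateExpiresAt: number | null;
}

class OAuthStateError extends Error {}

function consumeOauthState(state: AuthState, incoming: string, now: number): void {
  if (
    state.oauthState === null ||
    state.oauthState !== incoming ||
    state.oauthStateExpiresAt === null ||
    now > state.oauthStateExpiresAt
  ) {
    throw new OAuthStateError("invalid or already-consumed oauth state");
  }
  // Clear immediately after validation; only then call exchangeCode().
  state.oauthState = null;
  state.oauthStateExpiresAt = null;
}
```

Because the state is cleared before the code exchange, the second of two racing callbacks always hits the `null` branch, regardless of which request reaches GitHub first.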
@ -38,14 +38,14 @@ Verifying the BaseUI frontend against the real `rivet-dev/sandbox-agent-testing`
Three separate issues stacked together during live verification:
1. A half-created task actor remained in repository indexes after earlier runtime failures. The actor state existed, but its durable task row did not, so repo overview polling spammed `Task not found` and kept trying to load an orphaned task.
2. Rebuilding the backend container outside `just dev` dropped injected GitHub auth, which made repo overview fall back to `Open PRs 0` until `GITHUB_TOKEN`/`GH_TOKEN` were passed back into `docker compose`.
3. In the create-task modal, the BaseUI-controlled form looked populated in the browser, but submit gating/click behavior was unreliable under browser automation, making it hard to distinguish frontend state bugs from backend failures.
### Attempted Fix / Workaround
1. Updated repository-actor stale task pruning to treat `Task not found:` the same as actor-not-found and rebuilt the backend image.
2. Recovered the orphaned task by forcing an initialize attempt, which surfaced a missing `body?.providerId` guard in the task init workflow and led to pruning the stale repository index row.
3. Recreated the backend with `GITHUB_TOKEN="$(gh auth token)" GH_TOKEN="$(gh auth token)" docker compose ... up -d --build backend` so PR sync could see live GitHub data again.
4. Used `agent-browser` plus screenshots to separate working paths (repo overview + PR visibility) from the remaining broken path (modal submit / task creation UI).
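The widened pruning check from item 1 reduces to a small error classifier (hypothetical helper name; the real logic lives in the repository actor's stale-task pruning):

```typescript
// Sketch: both "Actor not found" and "Task not found:" now mean the task_index
// row is stale and should be pruned, instead of being retried forever.
function isStaleTaskError(err: unknown): boolean {
  const message = err instanceof Error ? err.message : String(err);
  return (
    message.includes("Actor not found") ||
    message.includes("Task not found:")
  );
}
```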
@ -80,22 +80,22 @@ The Docker dev backend container was starting on Bun `1.2.23` and accepting TCP
### What I Was Working On
Implementing Daytona snapshot-based sandbox creation and running the required monorepo validation.
### Friction / Issue
The monorepo `node_modules` tree is partially root-owned in this environment. `pnpm install`/cleanup failed with `EACCES` and left missing local tool entrypoints (for example `turbo`/`typescript`), which blocked `pnpm -w typecheck/build/test` from running end-to-end.
### Attempted Fix / Workaround
1. Attempted a monorepo-wide reinstall (`pnpm install`, `CI=true pnpm install`) and a package-level reinstall.
2. Attempted cleanup/recreate of `node_modules`, but root-owned files could not be removed.
3. Added temporary local shims for missing tool entrypoints to continue targeted validation.
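The ownership problem behind the `EACCES` failures can be surfaced with a small walk (hypothetical helper, not repo code):

```typescript
// Sketch: list node_modules entries whose owner differs from the current user.
// These are exactly the paths pnpm's cleanup cannot remove without root.
import { readdirSync, lstatSync } from "node:fs";
import { join } from "node:path";

// Pure predicate so the ownership policy is testable without a filesystem.
function isForeignOwned(entryUid: number, currentUid: number): boolean {
  return entryUid !== currentUid;
}

function listForeignOwned(root: string, uid: number): string[] {
  const out: string[] = [];
  for (const entry of readdirSync(root, { withFileTypes: true })) {
    const path = join(root, entry.name);
    if (isForeignOwned(lstatSync(path).uid, uid)) out.push(path);
    if (entry.isDirectory()) out.push(...listForeignOwned(path, uid));
  }
  return out;
}
```

Usage: `listForeignOwned("node_modules", process.getuid?.() ?? -1)` reports which paths need a `chown` (or a container recreate) before `pnpm install` can succeed.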
### Outcome
- Daytona-specific changes and backend tests were validated.
- Full monorepo validation remains blocked until `node_modules` ownership is repaired (or the container is recreated).
## 2026-02-16 - uncommitted
@ -187,7 +187,7 @@ Vitest ESM module namespace exports are non-configurable, so `vi.spyOn(childProc
### Outcome
- Backend manager tests are stable under ESM.
- Full monorepo tests pass with lifecycle coverage for outdated-backend restart behavior.
## 2026-02-08 - uncommitted
@ -202,8 +202,8 @@ The environment did not provide `rg`, and docs/policy files still described Rust
### Attempted Fix / Workaround
1. Switched repository discovery to `find`/`grep`.
2. Rewrote repository guidance files (`CLAUDE.md`, `skills/SKILL.md`, docs, `SPEC.md`) to match the TypeScript architecture.
3. Added missing TUI test coverage so monorepo-wide test runs no longer fail on packages without tests.
### Outcome
@ -214,7 +214,7 @@ The environment did not provide `rg`, and docs/policy files still described Rust
### What I Was Working On
Running full test validation (`pnpm -w test`) for the migrated monorepo.
### Friction / Issue
@ -228,7 +228,7 @@ Backend integration tests depend on native `better-sqlite3` bindings, which were
### Outcome
- Full monorepo test suite passes consistently.
- Backend unit coverage always runs; DB integration tests run automatically on environments with native bindings.
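The "run automatically where native bindings exist" gating can be sketched as a load probe (hypothetical helper; the Vitest `describe.skipIf` wiring shown in the comment is an assumed usage, not quoted from the repo):

```typescript
// Sketch: DB integration tests only run where the native better-sqlite3
// bindings actually load; everywhere else they are skipped, not failed.
async function canLoadModule(name: string): Promise<boolean> {
  try {
    await import(name);
    return true;
  } catch {
    return false;
  }
}

// In a test file (vitest):
//   const hasNativeDb = await canLoadModule("better-sqlite3");
//   describe.skipIf(!hasNativeDb)("db integration", () => { /* ... */ });
```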
## 2026-02-09 - aab1012 (working tree)
@ -309,13 +309,13 @@ Running backend tests with the integration flag enabled triggered unrelated acto
### Attempted Fix / Workaround
1. Switched to package-targeted test runs for deterministic coverage (`@sandbox-agent/foundry-backend` + `@sandbox-agent/foundry-frontend`).
2. Relied on the required monorepo validation (`pnpm -w typecheck`, `pnpm -w build`, `pnpm -w test`) plus targeted stack test files.
3. Stopped the runaway integration run and recorded this friction for follow-up.
### Outcome
- New stack-focused tests pass in deterministic targeted runs.
- Full required monorepo checks pass.
- Integration-gated suite remains noisy and needs separate stabilization.
## 2026-03-05 - uncommitted
@ -326,7 +326,7 @@ Reviewing architecture for simplification opportunities.
### Friction / Issue
Considered merging `repositoryPrSync` (30s) and `repositoryBranchSync` (5s) into a single `repositorySync` actor that polls at the faster cadence and does PR fetches every Nth tick. This would reduce actor count by one per repo but violates the single-responsibility-per-actor pattern established in the codebase. Mixed cadences within one actor add conditional tick logic, make the polling intervals harder to reason about independently, and couple two unrelated data sources (git branches vs GitHub API) into one failure domain.
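The rejected mixed-cadence loop can be sketched to show the conditional tick logic it would force (illustrative only; this actor was never built and the names are made up):

```typescript
// One actor, two cadences: branches every tick (5s), PRs every 6th tick (~30s).
function makeMergedSyncTick(
  prEveryNthTick: number,
  syncBranches: () => void,
  syncPrs: () => void,
): () => void {
  let tick = 0;
  return () => {
    tick += 1;
    syncBranches(); // fast cadence, runs on every tick
    // This modulo check is the conditional logic the review objected to:
    // two unrelated data sources coupled into one loop and one failure domain.
    if (tick % prEveryNthTick === 0) syncPrs();
  };
}
```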
### Attempted Fix / Workaround
@ -334,7 +334,7 @@ None — rejected the idea during review.
### Outcome
- Keep `repositoryPrSync` and `repositoryBranchSync` as separate actors.
- Single-responsibility-per-sync-actor is the right pattern for this codebase.
## 2026-03-06 - 77341ff
@ -345,13 +345,13 @@ Bringing up the Docker-based local dev stack with `just dev` after the BaseUI fr
### Friction / Issue
Docker Desktop recovered, but the frontend container failed immediately with `Cannot find module @rollup/rollup-linux-arm64-gnu`. The dev compose setup bind-mounted the host checkout into `/app`, so the Linux container picked up macOS `node_modules` and missed Rollup's Linux optional package.
### Attempted Fix / Workaround
1. Confirmed Docker itself was healthy again by checking the Unix socket, `docker version`, and the backend health endpoint.
2. Reproduced the frontend crash inside `docker compose`.
3. Changed the frontend dev service to use named volumes for the monorepo `node_modules` and the pnpm store, and to run `pnpm install --frozen-lockfile` inside the container before starting Vite.
### Outcome
@ -12,7 +12,7 @@ Resolving GitHub OAuth callback failures caused by stale actor state after squas
2. **No programmatic way to list or destroy actors on Rivet Cloud without the service key.** The public runner token (`pk_*`) lacks permissions for actor management (list/destroy). The Cloud API token (`cloud_api_*`) in our `.env` was returning "token not found". The actual working token format is the service key (`sk_*`) from the namespace connection URL. This was not documented — the destroy docs reference "admin tokens" which are described as "currently not supported on Rivet Cloud" ([#3530](https://github.com/rivet-dev/rivet/issues/3530)), but the `sk_*` token works. The disconnect between the docs and reality cost significant debugging time.
3. **Actor errors during `getOrCreate` are opaque.** When the `organization.completeAppGithubAuth` action triggered `getOrCreate` for the per-org organization actors, the migration failure inside the newly-woken actor was surfaced as `"Internal error"` with no indication that it was a migration/schema issue. The actual error (`table already exists`) was only visible in actor-level logs, not in the action response or the calling backend's logs.
### Attempted Fix / Workaround
@ -22,7 +22,7 @@ Resolving GitHub OAuth callback failures caused by stale actor state after squas
### Outcome
- All 4 stale organization actors destroyed (3 per-org actors + 1 old v2-prefixed app actor).
- Reverted `IF NOT EXISTS` migration changes so Drizzle migrations remain standard.
- After redeploy, new actors will be created fresh with the correct squashed migration journal.
- **RivetKit improvement opportunities:**
@ -112,17 +112,17 @@ Diagnosing stuck tasks (`init_create_sandbox`) after switching to a linked Rivet
### Friction / Issue
1. File-system driver actor-state writes still attempted to serialize legacy `kvStorage`, which can exceed Bare's buffer limit and trigger `Failed to save actor state: BareError: (byte:0) too large buffer`.
2. Repository snapshots swallowed missing task actors and only logged warnings, so stale `task_index` rows persisted and appeared as stuck/ghost tasks in the UI.
### Attempted Fix / Workaround
1. In RivetKit file-system driver writes, force persisted `kvStorage` to `[]` (runtime KV is SQLite-only) so oversized legacy payloads are never re-serialized.
2. In backend repository actor flows (`hydrate`, `snapshot`, `repo overview`, branch registration, PR-close archive), detect `Actor not found` and prune stale `task_index` rows immediately.
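The write-path guard from item 1 can be sketched like this (hypothetical shape; the real change is inside the RivetKit file-system driver):

```typescript
// Sketch: runtime KV is SQLite-only now, so the legacy kvStorage array is
// forced empty before serialization. Oversized legacy blobs can therefore
// never be re-serialized past Bare's buffer limit again.
interface PersistedActorState {
  kvStorage: unknown[];
  [key: string]: unknown;
}

function sanitizeForPersist(state: PersistedActorState): PersistedActorState {
  return { ...state, kvStorage: [] };
}
```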
### Outcome
- Prevents repeated serialization crashes caused by legacy oversized state blobs.
- Missing task actors are now self-healed from repository indexes instead of repeatedly surfacing as silent warnings.
## 2026-02-12 - uncommitted
@ -193,7 +193,7 @@ Adopt these concrete repo conventions:
- Schema rule (critical):
- SQLite is **per actor instance**, not a shared DB across all instances.
- Do not “namespace” rows with `organizationId`/`repoId`/`taskId` columns when those identifiers already live in the actor key/state.
- Prefer single-row tables for single-instance storage (e.g. `id=1`) when appropriate.
- Migration generation flow (Bun + DrizzleKit):
@ -247,7 +247,7 @@ Verifying Daytona-backed task/session flows for the new frontend and sandbox-ins
### Friction / Issue
Task workflow steps intermittently entered failed state with `StepExhaustedError` and `unknown error` during initialization replay (`init-start-sandbox-instance`, then `init-write-db`), which caused `task.get` to time out and cascaded into `repository snapshot timed out` / `organization list_tasks timed out`.
### Attempted Fix / Workaround
@ -305,7 +305,7 @@ if (msg.type === "TickProjectRefresh") {
// Coalesce duplicate ticks for a short window.
while (Date.now() < deadline) {
const next = await c.queue.next("repository", { timeout: deadline - Date.now() });
if (!next) break; // timeout
if (next.type === "TickProjectRefresh") {
@ -348,7 +348,7 @@ Two mistakes in the prior proposal:
2. **Coalesce by message names, not `msg.type`.**
- Keep one message name per command/tick channel.
- When a tick window opens, drain and coalesce multiple tick names (e.g. `tick.repository.refresh`, `tick.pr.refresh`, `tick.sandbox.health`) into one execution per name.
3. **Tick coalesce pattern with timeout (single loop):**
@ -375,7 +375,7 @@ while (true) {
// Timeout reached => one or more ticks are due.
const due = new Set<string>();
const at = Date.now();
if (at >= nextProjectRefreshAt) due.add("tick.repository.refresh");
if (at >= nextPrRefreshAt) due.add("tick.pr.refresh");
if (at >= nextSandboxHealthAt) due.add("tick.sandbox.health");
@ -388,7 +388,7 @@ while (true) {
}
// Execute each due tick once, in deterministic order.
if (due.has("tick.repository.refresh")) {
await refreshProjectSnapshot();
nextProjectRefreshAt = Date.now() + 5_000;
}
@ -424,7 +424,7 @@ Even with queue-timeout ticks, packing multiple independent timer cadences into
### Final Pattern
1. **Parent actors are command-only loops with no timeout.**
- `OrganizationActor`, `RepositoryActor`, `TaskActor`, and `HistoryActor` wait on queue messages only.
2. **Periodic work moves to dedicated child sync actors.**
- Each child actor has exactly one timeout cadence (e.g. PR sync, branch sync, task status sync).
@ -439,7 +439,7 @@ Even with queue-timeout ticks, packing multiple independent timer cadences into
### Example Structure
- `RepositoryActor` (no timeout): handles commands + applies `repository.pr_sync.result` / `repository.branch_sync.result` writes.
- `ProjectPrSyncActor` (timeout 30s): polls PR data, sends result message.
- `ProjectBranchSyncActor` (timeout 5s): polls branch data, sends result message.
- `TaskActor` (no timeout): handles lifecycle + applies `task.status_sync.result` writes.
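The child sync actor's contract reduces to "one poll, one result message" per tick, which can be sketched independently of any actor framework (hypothetical shape, not RivetKit API):

```typescript
// Sketch: one tick of a child sync actor. It polls its single data source and
// reports the result as a message; the parent applies the write in its
// command-only loop. One cadence per child keeps failure domains separate.
type ResultMessage = { type: string; payload: unknown };

async function runSyncTick(
  poll: () => Promise<unknown>,
  send: (msg: ResultMessage) => void,
  resultType: string,
): Promise<void> {
  send({ type: resultType, payload: await poll() });
}
```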
@ -502,7 +502,7 @@ Removing custom backend REST endpoints and migrating CLI/TUI calls to direct `ri
### Friction / Issue
We had implemented a `/v1/*` HTTP shim (`/v1/tasks`, `/v1/organizations/use`, etc.) between clients and actors, which duplicated actor APIs and introduced an unnecessary transport layer.
### Attempted Fix / Workaround
@ -575,21 +575,21 @@ Removing `*Actor` suffix from all actor export names and registry keys.
### Friction / Issue
RivetKit's `setup({ use: { ... } })` uses property names as actor identifiers in `client.<name>` calls. All 8 actors were exported with an `Actor` suffix (`organizationActor`, `repositoryActor`, `taskActor`, etc.), which meant client code used verbose `client.organizationActor.getOrCreate(...)` instead of `client.organization.getOrCreate(...)`.
The `Actor` suffix is redundant — everything in the registry is an actor by definition. It also leaked into type names (`WorkspaceActorHandle`, `ProjectActorInput`, `HistoryActorInput`) and local function names (`workspaceActorKey`, `taskActorKey`).
### Attempted Fix / Workaround
1. Renamed all 8 actor exports: `organizationActor` → `organization`, `repositoryActor` → `repository`, `taskActor` → `task`, `sandboxInstanceActor` → `sandboxInstance`, `historyActor` → `history`, `repositoryPrSyncActor` → `repositoryPrSync`, `repositoryBranchSyncActor` → `repositoryBranchSync`, `taskStatusSyncActor` → `taskStatusSync`.
2. Updated registry keys in `actors/index.ts`.
3. Renamed all `client.<name>Actor` references across 14 files (actor definitions, backend entry, CLI client, tests).
4. Renamed associated types (`ProjectActorInput` → `RepositoryInput`, `HistoryActorInput` → `HistoryInput`, `WorkspaceActorHandle` → `OrganizationHandle`, `TaskActorHandle` → `TaskHandle`).
### Outcome
- Actor names are now concise and match their semantic role.
- Client code reads naturally: `client.organization.getOrCreate(...)`, `client.task.get(...)`.
- No runtime behavior change — registry property names drive actor routing.
## 2026-02-09 - uncommitted
@ -609,8 +609,8 @@ Concrete examples from our codebase:
| Actor | Pattern | Why |
|-------|---------|-----|
| `organization` | Plain run | Every handler is a DB query or single actor delegation |
| `repository` | Plain run | Handlers are DB upserts or delegate to task actor |
| `task` | **Needs workflow** | `initialize` is a 7-step pipeline (createSandbox → ensureAgent → createSession → DB writes → start child actors); post-idle is a 5-step pipeline (commit → push → PR → cache → notify) |
| `history` | Plain run | Single DB insert per message |
| `sandboxInstance` | Plain run | Single-table CRUD per message |
@ -647,7 +647,7 @@ This matters when reasoning about workflow `listen()` behavior: you might assume
RivetKit docs should clarify:
1. Queue names are **per-actor-instance** — two different actor instances can use the same queue name without collision.
2. The dotted naming convention (e.g. `repository.command.ensure`) is a user convention for readability, not a routing hierarchy.
3. `c.queue.next(["a", "b"])` listens on queues named `"a"` and `"b"` *within this actor*, not across actors.
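Point 1 can be modeled with a tiny in-memory sketch (not RivetKit internals): each actor instance owns its own flat set of queues, so the same name on two instances never collides.

```typescript
// Sketch: queues are per-actor-instance. Two instances can each have a queue
// named "ensure" without ever seeing each other's messages.
class ActorInstanceQueues {
  private queues = new Map<string, unknown[]>();

  push(name: string, msg: unknown): void {
    const q = this.queues.get(name) ?? [];
    q.push(msg);
    this.queues.set(name, q);
  }

  next(name: string): unknown | undefined {
    return this.queues.get(name)?.shift();
  }
}
```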
### Outcome
@ -662,7 +662,7 @@ Migrating task actor to durable workflows. AI-generated queue names used dotted
### Friction / Issue
When generating actor queue names, the AI (and our own codebase) defaulted to dotted names like `task.command.initialize`, `repository.pr_sync.result`, `task.status_sync.control.start`. These work fine in plain `run` loops, but create friction when interacting with the workflow system because `workflowQueueName()` prefixes them with `__workflow:`, producing names like `__workflow:task.command.initialize`.
Queue names should always be **camelCase** (e.g. `initializeTask`, `statusSyncResult`, `attachTask`). Dotted names are misleading — they imply hierarchy or routing semantics that don't exist (queues are flat, per-actor-instance strings). They also look like object property paths, which causes confusion when used as dynamic property keys on queue handles (`actor.queue["task.command.initialize"]`).
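The prefixing behavior described above can be sketched as follows (the `__workflow:` prefix is stated in this log; the function body is a guess at the shape, not RivetKit source):

```typescript
// Sketch of workflowQueueName(): the workflow system prefixes user queue names.
const WORKFLOW_QUEUE_PREFIX = "__workflow:";

function workflowQueueName(name: string): string {
  return `${WORKFLOW_QUEUE_PREFIX}${name}`;
}

// camelCase names stay readable after prefixing and work as plain property keys:
//   actor.queue.initializeTask
// versus dotted names, which force bracket access and imply fake hierarchy:
//   actor.queue["task.command.initialize"]
```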
@ -754,4 +754,4 @@ Using `better-sqlite3` and `node:sqlite` in backend DB bootstrap caused Bun runt
- Backend starts successfully under Bun.
- Shared Drizzle/SQLite actor DB path still works.
- Monorepo build + tests pass.
@ -8,7 +8,7 @@ Implementing provider adapters (`worktree`, `daytona`) under the backend package
### Friction / Issue
Provider interface intentionally keeps `DestroySandboxRequest` minimal (`organizationId`, `sandboxId`), but local git worktree cleanup may need repo context.
### Attempted Fix / Workaround
@ -54,8 +54,8 @@ The previous end-to-end flow implicitly depended on local filesystem paths (`rep
### Attempted Fix / Workaround
1. Introduced explicit imported repository records sourced from GitHub sync instead of local filesystem paths.
2. Made `RepositoryActor` assert a backend-owned local clone exists on wake and fetch remote branch state from that clone.
3. Updated PR creation to avoid requiring a checked-out branch by using `gh pr create --head <branch>`.
4. Updated `DaytonaProvider.createSandbox` to clone the repo and checkout the branch into a deterministic workdir and return it as `cwd` for sandbox-agent sessions.
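A deterministic workdir for item 4 can be derived like this (hypothetical scheme; the actual path layout used by `DaytonaProvider.createSandbox` is not shown in this log):

```typescript
// Sketch: hash repo+branch so the workdir is stable across retries and safe
// for branch names containing slashes.
import { createHash } from "node:crypto";

function sandboxWorkdir(repoUrl: string, branch: string): string {
  const digest = createHash("sha256")
    .update(`${repoUrl}#${branch}`)
    .digest("hex")
    .slice(0, 12);
  const repoName = repoUrl.replace(/\.git$/, "").split("/").pop() ?? "repo";
  return `/sandboxes/${repoName}-${digest}`;
}
```

Because the same repo/branch pair always maps to the same path, a retried `createSandbox` lands in (or reuses) the same clone instead of accumulating orphans.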
@ -4,7 +4,7 @@
Replace the current polling + empty-notification + full-refetch architecture with a push-based realtime system. The client subscribes to topics, receives the initial state, and then receives full replacement payloads for changed entities over WebSocket. No polling. No re-fetching.
This spec covers three layers: backend (materialized state + broadcast), client library (subscription manager), and frontend (hook consumption). Comment architecture-related code throughout so new contributors can understand the data flow from comments alone.
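The subscription-manager contract can be sketched as a small cache (hypothetical shape matching the spec's rules): subscribe delivers an initial snapshot, then each event replaces the changed entity wholesale.

```typescript
// Sketch: client-side topic cache. Events carry full replacement payloads,
// so applying one is a plain overwrite. No patches, no refetching.
interface Entity {
  id: string;
}

class TopicCache<T extends Entity> {
  private entities = new Map<string, T>();

  loadSnapshot(initial: T[]): void {
    this.entities = new Map(initial.map((e) => [e.id, e]));
  }

  // Full replacement: the event carries the entity's entire new state.
  applyReplacement(entity: T): void {
    this.entities.set(entity.id, entity);
  }

  remove(id: string): void {
    this.entities.delete(id);
  }

  list(): T[] {
    return [...this.entities.values()];
  }
}
```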
---
@ -17,7 +17,7 @@ This spec covers three layers: backend (materialized state + broadcast), client
Currently `WorkbenchTask` is a single flat type carrying everything (sidebar fields + transcripts + diffs + file tree). Split it:
```typescript
/** Sidebar-level task data. Materialized in the organization actor's SQLite. */
export interface WorkbenchTaskSummary {
id: string;
repoId: string;
@ -44,7 +44,7 @@ export interface WorkbenchSessionSummary {
created: boolean;
}
/** Repo-level summary for organization sidebar. */
export interface WorkbenchRepoSummary {
id: string;
label: string;
@ -93,9 +93,9 @@ export interface WorkbenchSessionDetail {
transcript: WorkbenchTranscriptEvent[];
}
/** Organization-level snapshot — initial fetch for the organization topic. */
export interface OrganizationSummarySnapshot {
organizationId: string;
repos: WorkbenchRepoSummary[];
taskSummaries: WorkbenchTaskSummary[];
}
@ -110,8 +110,8 @@ Remove the old `TaskWorkbenchSnapshot` type and `WorkbenchTask` type once migrat
Each event carries the full new state of the changed entity — not a patch, not an empty notification.
```typescript
/** Organization-level events broadcast by the organization actor. */
export type OrganizationEvent =
| { type: "taskSummaryUpdated"; taskSummary: WorkbenchTaskSummary }
| { type: "taskRemoved"; taskId: string }
| { type: "repoAdded"; repo: WorkbenchRepoSummary }
@ -126,7 +126,7 @@ export type TaskEvent =
export type SessionEvent =
| { type: "sessionUpdated"; session: WorkbenchSessionDetail };
/** App-level events broadcast by the app organization actor. */
|
||||
export type AppEvent =
|
||||
| { type: "appUpdated"; snapshot: FoundryAppSnapshot };
|
||||
|
||||
|
|
@@ -139,13 +139,13 @@ export type SandboxProcessesEvent =
 
 ## 2. Backend: Materialized State + Broadcasts
 
-### 2.1 Workspace actor — materialized sidebar state
+### 2.1 Organization actor — materialized sidebar state
 
 **Files:**
-- `packages/backend/src/actors/workspace/db/schema.ts` — add tables
-- `packages/backend/src/actors/workspace/actions.ts` — replace `buildWorkbenchSnapshot`, add delta handlers
+- `packages/backend/src/actors/organization/db/schema.ts` — add tables
+- `packages/backend/src/actors/organization/actions.ts` — replace `buildWorkbenchSnapshot`, add delta handlers
 
-Add to workspace actor SQLite schema:
+Add to organization actor SQLite schema:
 
 ```typescript
 export const taskSummaries = sqliteTable("task_summaries", {
@@ -161,7 +161,7 @@ export const taskSummaries = sqliteTable("task_summaries", {
 });
 ```
 
-New workspace actions:
+New organization actions:
 
 ```typescript
 /**
@@ -176,23 +176,23 @@ async applyTaskSummaryUpdate(c, input: { taskSummary: WorkbenchTaskSummary }) {
   await c.db.insert(taskSummaries).values(toRow(input.taskSummary))
     .onConflictDoUpdate({ target: taskSummaries.taskId, set: toRow(input.taskSummary) }).run();
   // Broadcast to connected clients
-  c.broadcast("workspaceUpdated", { type: "taskSummaryUpdated", taskSummary: input.taskSummary });
+  c.broadcast("organizationUpdated", { type: "taskSummaryUpdated", taskSummary: input.taskSummary });
 }
 
 async removeTaskSummary(c, input: { taskId: string }) {
   await c.db.delete(taskSummaries).where(eq(taskSummaries.taskId, input.taskId)).run();
-  c.broadcast("workspaceUpdated", { type: "taskRemoved", taskId: input.taskId });
+  c.broadcast("organizationUpdated", { type: "taskRemoved", taskId: input.taskId });
 }
 
 /**
- * Initial fetch for the workspace topic.
+ * Initial fetch for the organization topic.
  * Reads entirely from local SQLite — no fan-out to child actors.
  */
-async getWorkspaceSummary(c, input: { workspaceId: string }): Promise<WorkspaceSummarySnapshot> {
+async getWorkspaceSummary(c, input: { organizationId: string }): Promise<OrganizationSummarySnapshot> {
   const repoRows = await c.db.select().from(repos).orderBy(desc(repos.updatedAt)).all();
   const taskRows = await c.db.select().from(taskSummaries).orderBy(desc(taskSummaries.updatedAtMs)).all();
   return {
-    workspaceId: c.state.workspaceId,
+    organizationId: c.state.organizationId,
     repos: repoRows.map(toRepoSummary),
     taskSummaries: taskRows.map(toTaskSummary),
   };
@@ -201,7 +201,7 @@ async getWorkspaceSummary(c, input: { workspaceId: string }): Promise<WorkspaceS
 
 Replace `buildWorkbenchSnapshot` (the fan-out) — keep it only as a `reconcileWorkbenchState` background action for recovery/rebuild.
 
-### 2.2 Task actor — push summaries to workspace + broadcast detail
+### 2.2 Task actor — push summaries to organization + broadcast detail
 
 **Files:**
 - `packages/backend/src/actors/task/workbench.ts` — replace `notifyWorkbenchUpdated` calls
@@ -209,7 +209,7 @@ Replace `buildWorkbenchSnapshot` (the fan-out) — keep it only as a `reconcileW
 Every place that currently calls `notifyWorkbenchUpdated(c)` (there are ~20 call sites) must instead:
 
 1. Build the current `WorkbenchTaskSummary` from local state.
-2. Push it to the workspace actor: `workspace.applyTaskSummaryUpdate({ taskSummary })`.
+2. Push it to the organization actor: `organization.applyTaskSummaryUpdate({ taskSummary })`.
 3. Build the current `WorkbenchTaskDetail` from local state.
 4. Broadcast to directly-connected clients: `c.broadcast("taskUpdated", { type: "taskDetailUpdated", detail })`.
 5. If session state changed, also broadcast: `c.broadcast("sessionUpdated", { type: "sessionUpdated", session: buildSessionDetail(c, sessionId) })`.
@@ -219,7 +219,7 @@ Add helper functions:
 ```typescript
 /**
  * Builds a WorkbenchTaskSummary from local task actor state.
- * This is what gets pushed to the workspace actor for sidebar materialization.
+ * This is what gets pushed to the organization actor for sidebar materialization.
  */
 function buildTaskSummary(c: any): WorkbenchTaskSummary { ... }
 
@@ -237,12 +237,12 @@ function buildSessionDetail(c: any, sessionId: string): WorkbenchSessionDetail {
 
 /**
  * Replaces the old notifyWorkbenchUpdated pattern.
- * Pushes summary to workspace actor + broadcasts detail to direct subscribers.
+ * Pushes summary to organization actor + broadcasts detail to direct subscribers.
  */
 async function broadcastTaskUpdate(c: any, options?: { sessionId?: string }) {
-  // Push summary to parent workspace actor
-  const workspace = await getOrCreateWorkspace(c, c.state.workspaceId);
-  await workspace.applyTaskSummaryUpdate({ taskSummary: buildTaskSummary(c) });
+  // Push summary to parent organization actor
+  const organization = await getOrCreateOrganization(c, c.state.organizationId);
+  await organization.applyTaskSummaryUpdate({ taskSummary: buildTaskSummary(c) });
 
   // Broadcast detail to clients connected to this task
   c.broadcast("taskUpdated", { type: "taskDetailUpdated", detail: buildTaskDetail(c) });
@@ -273,9 +273,9 @@ async getTaskDetail(c): Promise<WorkbenchTaskDetail> { ... }
 async getSessionDetail(c, input: { sessionId: string }): Promise<WorkbenchSessionDetail> { ... }
 ```
 
-### 2.4 App workspace actor
+### 2.4 App organization actor
 
-**File:** `packages/backend/src/actors/workspace/app-shell.ts`
+**File:** `packages/backend/src/actors/organization/app-shell.ts`
 
 Change `c.broadcast("appUpdated", { at: Date.now(), sessionId })` to:
 ```typescript
@@ -304,12 +304,12 @@ function broadcastProcessesUpdated(c: any): void {
 
 ```typescript
 /**
- * Topic definitions for the interest manager.
+ * Topic definitions for the subscription manager.
  *
  * Each topic defines how to connect to an actor, fetch initial state,
  * which event to listen for, and how to apply incoming events to cached state.
  *
- * The interest manager uses these definitions to manage WebSocket connections,
+ * The subscription manager uses these definitions to manage WebSocket connections,
  * cached state, and subscriptions for all realtime data flows.
  */
 
@@ -331,10 +331,10 @@ export interface TopicDefinition<TData, TParams, TEvent> {
 }
 
 export interface AppTopicParams {}
-export interface WorkspaceTopicParams { workspaceId: string }
-export interface TaskTopicParams { workspaceId: string; repoId: string; taskId: string }
-export interface SessionTopicParams { workspaceId: string; repoId: string; taskId: string; sessionId: string }
-export interface SandboxProcessesTopicParams { workspaceId: string; providerId: string; sandboxId: string }
+export interface OrganizationTopicParams { organizationId: string }
+export interface TaskTopicParams { organizationId: string; repoId: string; taskId: string }
+export interface SessionTopicParams { organizationId: string; repoId: string; taskId: string; sessionId: string }
+export interface SandboxProcessesTopicParams { organizationId: string; providerId: string; sandboxId: string }
 
 export const topicDefinitions = {
   app: {
@@ -345,12 +345,12 @@ export const topicDefinitions = {
     applyEvent: (_current, event: AppEvent) => event.snapshot,
   } satisfies TopicDefinition<FoundryAppSnapshot, AppTopicParams, AppEvent>,
 
-  workspace: {
-    key: (p) => `workspace:${p.workspaceId}`,
-    event: "workspaceUpdated",
-    connect: (b, p) => b.connectWorkspace(p.workspaceId),
-    fetchInitial: (b, p) => b.getWorkspaceSummary(p.workspaceId),
-    applyEvent: (current, event: WorkspaceEvent) => {
+  organization: {
+    key: (p) => `organization:${p.organizationId}`,
+    event: "organizationUpdated",
+    connect: (b, p) => b.connectWorkspace(p.organizationId),
+    fetchInitial: (b, p) => b.getWorkspaceSummary(p.organizationId),
+    applyEvent: (current, event: OrganizationEvent) => {
       switch (event.type) {
         case "taskSummaryUpdated":
           return {
@@ -375,22 +375,22 @@ export const topicDefinitions = {
           };
       }
     },
-  } satisfies TopicDefinition<WorkspaceSummarySnapshot, WorkspaceTopicParams, WorkspaceEvent>,
+  } satisfies TopicDefinition<OrganizationSummarySnapshot, OrganizationTopicParams, OrganizationEvent>,
 
   task: {
-    key: (p) => `task:${p.workspaceId}:${p.taskId}`,
+    key: (p) => `task:${p.organizationId}:${p.taskId}`,
     event: "taskUpdated",
-    connect: (b, p) => b.connectTask(p.workspaceId, p.repoId, p.taskId),
-    fetchInitial: (b, p) => b.getTaskDetail(p.workspaceId, p.repoId, p.taskId),
+    connect: (b, p) => b.connectTask(p.organizationId, p.repoId, p.taskId),
+    fetchInitial: (b, p) => b.getTaskDetail(p.organizationId, p.repoId, p.taskId),
     applyEvent: (_current, event: TaskEvent) => event.detail,
   } satisfies TopicDefinition<WorkbenchTaskDetail, TaskTopicParams, TaskEvent>,
 
   session: {
-    key: (p) => `session:${p.workspaceId}:${p.taskId}:${p.sessionId}`,
+    key: (p) => `session:${p.organizationId}:${p.taskId}:${p.sessionId}`,
     event: "sessionUpdated",
     // Reuses the task actor connection — same actor, different event.
-    connect: (b, p) => b.connectTask(p.workspaceId, p.repoId, p.taskId),
-    fetchInitial: (b, p) => b.getSessionDetail(p.workspaceId, p.repoId, p.taskId, p.sessionId),
+    connect: (b, p) => b.connectTask(p.organizationId, p.repoId, p.taskId),
+    fetchInitial: (b, p) => b.getSessionDetail(p.organizationId, p.repoId, p.taskId, p.sessionId),
     applyEvent: (current, event: SessionEvent) => {
       // Filter: only apply if this event is for our session
       if (event.session.sessionId !== current.sessionId) return current;
@@ -399,10 +399,10 @@ export const topicDefinitions = {
   } satisfies TopicDefinition<WorkbenchSessionDetail, SessionTopicParams, SessionEvent>,
 
   sandboxProcesses: {
-    key: (p) => `sandbox:${p.workspaceId}:${p.sandboxId}`,
+    key: (p) => `sandbox:${p.organizationId}:${p.sandboxId}`,
     event: "processesUpdated",
-    connect: (b, p) => b.connectSandbox(p.workspaceId, p.providerId, p.sandboxId),
-    fetchInitial: (b, p) => b.listSandboxProcesses(p.workspaceId, p.providerId, p.sandboxId),
+    connect: (b, p) => b.connectSandbox(p.organizationId, p.providerId, p.sandboxId),
+    fetchInitial: (b, p) => b.listSandboxProcesses(p.organizationId, p.providerId, p.sandboxId),
     applyEvent: (_current, event: SandboxProcessesEvent) => event.processes,
   } satisfies TopicDefinition<SandboxProcessRecord[], SandboxProcessesTopicParams, SandboxProcessesEvent>,
 } as const;
@@ -413,16 +413,16 @@ export type TopicParams<K extends TopicKey> = Parameters<(typeof topicDefinition
 export type TopicData<K extends TopicKey> = Awaited<ReturnType<(typeof topicDefinitions)[K]["fetchInitial"]>>;
 ```
 
-### 3.2 Interest manager interface
+### 3.2 Subscription manager interface
 
 **File:** `packages/client/src/interest/manager.ts` (new)
 
 ```typescript
 /**
- * The InterestManager owns all realtime actor connections and cached state.
+ * The SubscriptionManager owns all realtime actor connections and cached state.
  *
  * Architecture:
- * - Each topic (app, workspace, task, session, sandboxProcesses) maps to an actor + event.
+ * - Each topic (app, organization, task, session, sandboxProcesses) maps to an actor + event.
  * - On first subscription, the manager opens a WebSocket connection, fetches initial state,
  *   and listens for events. Events carry full replacement payloads for the changed entity.
  * - Multiple subscribers to the same topic share one connection and one cached state.
@@ -430,7 +430,7 @@ export type TopicData<K extends TopicKey> = Awaited<ReturnType<(typeof topicDefi
  *   to avoid thrashing during screen navigation or React double-renders.
  * - The interface is identical for mock and remote implementations.
  */
-export interface InterestManager {
+export interface SubscriptionManager {
   /**
    * Subscribe to a topic. Returns an unsubscribe function.
    * On first subscriber: opens connection, fetches initial state, starts listening.
@@ -472,10 +472,10 @@ export interface TopicState<K extends TopicKey> {
 const GRACE_PERIOD_MS = 30_000;
 
 /**
- * Remote implementation of InterestManager.
+ * Remote implementation of SubscriptionManager.
  * Manages WebSocket connections to RivetKit actors via BackendClient.
  */
-export class RemoteInterestManager implements InterestManager {
+export class RemoteSubscriptionManager implements SubscriptionManager {
   private entries = new Map<string, TopicEntry<any, any, any>>();
 
   constructor(private backend: BackendClient) {}
@@ -634,7 +634,7 @@ class TopicEntry<TData, TParams, TEvent> {
 
 **File:** `packages/client/src/interest/mock-manager.ts` (new)
 
-Same `InterestManager` interface. Uses in-memory state. Topic definitions provide mock data. Mutations call `applyEvent` directly on the entry to simulate broadcasts. No WebSocket connections.
+Same `SubscriptionManager` interface. Uses in-memory state. Topic definitions provide mock data. Mutations call `applyEvent` directly on the entry to simulate broadcasts. No WebSocket connections.
 
 ### 3.5 React hook
 
@@ -651,17 +651,17 @@ import { useSyncExternalStore, useMemo } from "react";
  * - Multiple components subscribing to the same topic share one connection.
  *
  * @example
- * // Subscribe to workspace sidebar data
- * const workspace = useInterest("workspace", { workspaceId });
+ * // Subscribe to organization sidebar data
+ * const organization = useSubscription("organization", { organizationId });
  *
  * // Subscribe to task detail (only when viewing a task)
- * const task = useInterest("task", selectedTaskId ? { workspaceId, repoId, taskId } : null);
+ * const task = useSubscription("task", selectedTaskId ? { organizationId, repoId, taskId } : null);
 *
  * // Subscribe to active session content
- * const session = useInterest("session", activeSessionId ? { workspaceId, repoId, taskId, sessionId } : null);
+ * const session = useSubscription("session", activeSessionId ? { organizationId, repoId, taskId, sessionId } : null);
  */
-export function useInterest<K extends TopicKey>(
-  manager: InterestManager,
+export function useSubscription<K extends TopicKey>(
+  manager: SubscriptionManager,
   topicKey: K,
   params: TopicParams<K> | null,
 ): TopicState<K> {
@@ -698,18 +698,18 @@ Add to the `BackendClient` interface:
 
 ```typescript
 // New connection methods (return WebSocket-based ActorConn)
-connectWorkspace(workspaceId: string): Promise<ActorConn>;
-connectTask(workspaceId: string, repoId: string, taskId: string): Promise<ActorConn>;
-connectSandbox(workspaceId: string, providerId: string, sandboxId: string): Promise<ActorConn>;
+connectWorkspace(organizationId: string): Promise<ActorConn>;
+connectTask(organizationId: string, repoId: string, taskId: string): Promise<ActorConn>;
+connectSandbox(organizationId: string, providerId: string, sandboxId: string): Promise<ActorConn>;
 
 // New fetch methods (read from materialized state)
-getWorkspaceSummary(workspaceId: string): Promise<WorkspaceSummarySnapshot>;
-getTaskDetail(workspaceId: string, repoId: string, taskId: string): Promise<WorkbenchTaskDetail>;
-getSessionDetail(workspaceId: string, repoId: string, taskId: string, sessionId: string): Promise<WorkbenchSessionDetail>;
+getWorkspaceSummary(organizationId: string): Promise<OrganizationSummarySnapshot>;
+getTaskDetail(organizationId: string, repoId: string, taskId: string): Promise<WorkbenchTaskDetail>;
+getSessionDetail(organizationId: string, repoId: string, taskId: string, sessionId: string): Promise<WorkbenchSessionDetail>;
 ```
 
 Remove:
-- `subscribeWorkbench`, `subscribeApp`, `subscribeSandboxProcesses` (replaced by interest manager)
+- `subscribeWorkbench`, `subscribeApp`, `subscribeSandboxProcesses` (replaced by subscription manager)
 - `getWorkbench` (replaced by `getWorkspaceSummary` + `getTaskDetail`)
 
 ---
@@ -721,16 +721,16 @@ Remove:
 **File:** `packages/frontend/src/lib/interest.ts` (new)
 
 ```typescript
-import { RemoteInterestManager } from "@sandbox-agent/foundry-client";
+import { RemoteSubscriptionManager } from "@sandbox-agent/foundry-client";
 import { backendClient } from "./backend";
 
-export const interestManager = new RemoteInterestManager(backendClient);
+export const subscriptionManager = new RemoteSubscriptionManager(backendClient);
 ```
 
 Or for mock mode:
 ```typescript
-import { MockInterestManager } from "@sandbox-agent/foundry-client";
-export const interestManager = new MockInterestManager();
+import { MockSubscriptionManager } from "@sandbox-agent/foundry-client";
+export const subscriptionManager = new MockSubscriptionManager();
 ```
 
 ### 4.2 Replace MockLayout workbench subscription
@@ -739,7 +739,7 @@ export const interestManager = new MockInterestManager();
 
 Before:
 ```typescript
-const taskWorkbenchClient = useMemo(() => getTaskWorkbenchClient(workspaceId), [workspaceId]);
+const taskWorkbenchClient = useMemo(() => getTaskWorkbenchClient(organizationId), [organizationId]);
 const viewModel = useSyncExternalStore(
   taskWorkbenchClient.subscribe.bind(taskWorkbenchClient),
   taskWorkbenchClient.getSnapshot.bind(taskWorkbenchClient),
@@ -749,9 +749,9 @@ const tasks = viewModel.tasks ?? [];
 
 After:
 ```typescript
-const workspace = useInterest(interestManager, "workspace", { workspaceId });
-const taskSummaries = workspace.data?.taskSummaries ?? [];
-const repos = workspace.data?.repos ?? [];
+const organization = useSubscription(subscriptionManager, "organization", { organizationId });
+const taskSummaries = organization.data?.taskSummaries ?? [];
+const repos = organization.data?.repos ?? [];
 ```
 
 ### 4.3 Replace MockLayout task detail
@@ -759,8 +759,8 @@ const repos = workspace.data?.repos ?? [];
 When a task is selected, subscribe to its detail:
 
 ```typescript
-const taskDetail = useInterest(interestManager, "task",
-  selectedTaskId ? { workspaceId, repoId: activeRepoId, taskId: selectedTaskId } : null
+const taskDetail = useSubscription(subscriptionManager, "task",
+  selectedTaskId ? { organizationId, repoId: activeRepoId, taskId: selectedTaskId } : null
 );
 ```
 
@@ -769,25 +769,25 @@ const taskDetail = useInterest(interestManager, "task",
 When a session tab is active:
 
 ```typescript
-const sessionDetail = useInterest(interestManager, "session",
-  activeSessionId ? { workspaceId, repoId, taskId, sessionId: activeSessionId } : null
+const sessionDetail = useSubscription(subscriptionManager, "session",
+  activeSessionId ? { organizationId, repoId, taskId, sessionId: activeSessionId } : null
 );
 ```
 
-### 4.5 Replace workspace-dashboard.tsx polling
+### 4.5 Replace organization-dashboard.tsx polling
 
 Remove ALL `useQuery` with `refetchInterval` in this file:
-- `tasksQuery` (2.5s polling) → `useInterest("workspace", ...)`
-- `taskDetailQuery` (2.5s polling) → `useInterest("task", ...)`
-- `reposQuery` (10s polling) → `useInterest("workspace", ...)`
-- `repoOverviewQuery` (5s polling) → `useInterest("workspace", ...)`
-- `sessionsQuery` (3s polling) → `useInterest("task", ...)` (sessionsSummary field)
-- `eventsQuery` (2.5s polling) → `useInterest("session", ...)`
+- `tasksQuery` (2.5s polling) → `useSubscription("organization", ...)`
+- `taskDetailQuery` (2.5s polling) → `useSubscription("task", ...)`
+- `reposQuery` (10s polling) → `useSubscription("organization", ...)`
+- `repoOverviewQuery` (5s polling) → `useSubscription("organization", ...)`
+- `sessionsQuery` (3s polling) → `useSubscription("task", ...)` (sessionsSummary field)
+- `eventsQuery` (2.5s polling) → `useSubscription("session", ...)`
 
 ### 4.6 Replace terminal-pane.tsx polling
 
-- `taskQuery` (2s polling) → `useInterest("task", ...)`
-- `processesQuery` (3s polling) → `useInterest("sandboxProcesses", ...)`
+- `taskQuery` (2s polling) → `useSubscription("task", ...)`
+- `processesQuery` (3s polling) → `useSubscription("sandboxProcesses", ...)`
 - Remove `subscribeSandboxProcesses` useEffect
 
 ### 4.7 Replace app client subscription
@@ -804,14 +804,14 @@ export function useMockAppSnapshot(): FoundryAppSnapshot {
 After:
 ```typescript
 export function useAppSnapshot(): FoundryAppSnapshot {
-  const app = useInterest(interestManager, "app", {});
+  const app = useSubscription(subscriptionManager, "app", {});
   return app.data ?? DEFAULT_APP_SNAPSHOT;
 }
 ```
 
 ### 4.8 Mutations
 
-Mutations (`createTask`, `renameTask`, `sendMessage`, etc.) no longer need manual `refetch()` or `refresh()` calls after completion. The backend mutation triggers a broadcast, which the interest manager receives and applies automatically.
+Mutations (`createTask`, `renameTask`, `sendMessage`, etc.) no longer need manual `refetch()` or `refresh()` calls after completion. The backend mutation triggers a broadcast, which the subscription manager receives and applies automatically.
 
 Before:
 ```typescript
@@ -841,24 +841,24 @@ const createSession = useMutation({
 
 | File/Code | Reason |
 |---|---|
-| `packages/client/src/remote/workbench-client.ts` | Replaced by interest manager `workspace` + `task` topics |
-| `packages/client/src/remote/app-client.ts` | Replaced by interest manager `app` topic |
+| `packages/client/src/remote/workbench-client.ts` | Replaced by subscription manager `organization` + `task` topics |
+| `packages/client/src/remote/app-client.ts` | Replaced by subscription manager `app` topic |
 | `packages/client/src/workbench-client.ts` | Factory for above — no longer needed |
 | `packages/client/src/app-client.ts` | Factory for above — no longer needed |
-| `packages/frontend/src/lib/workbench.ts` | Workbench client singleton — replaced by interest manager |
-| `subscribeWorkbench` in `backend-client.ts` | Replaced by `connectWorkspace` + interest manager |
-| `subscribeSandboxProcesses` in `backend-client.ts` | Replaced by `connectSandbox` + interest manager |
-| `subscribeApp` in `backend-client.ts` | Replaced by `connectWorkspace("app")` + interest manager |
-| `buildWorkbenchSnapshot` in `workspace/actions.ts` | Replaced by `getWorkspaceSummary` (local reads). Keep as `reconcileWorkbenchState` for recovery only. |
-| `notifyWorkbenchUpdated` in `workspace/actions.ts` | Replaced by `applyTaskSummaryUpdate` + `c.broadcast` with payload |
+| `packages/frontend/src/lib/workbench.ts` | Workbench client singleton — replaced by subscription manager |
+| `subscribeWorkbench` in `backend-client.ts` | Replaced by `connectWorkspace` + subscription manager |
+| `subscribeSandboxProcesses` in `backend-client.ts` | Replaced by `connectSandbox` + subscription manager |
+| `subscribeApp` in `backend-client.ts` | Replaced by `connectWorkspace("app")` + subscription manager |
+| `buildWorkbenchSnapshot` in `organization/actions.ts` | Replaced by `getWorkspaceSummary` (local reads). Keep as `reconcileWorkbenchState` for recovery only. |
+| `notifyWorkbenchUpdated` in `organization/actions.ts` | Replaced by `applyTaskSummaryUpdate` + `c.broadcast` with payload |
 | `notifyWorkbenchUpdated` in `task/workbench.ts` | Replaced by `broadcastTaskUpdate` helper |
-| `TaskWorkbenchSnapshot` in `shared/workbench.ts` | Replaced by `WorkspaceSummarySnapshot` + `WorkbenchTaskDetail` |
+| `TaskWorkbenchSnapshot` in `shared/workbench.ts` | Replaced by `OrganizationSummarySnapshot` + `WorkbenchTaskDetail` |
 | `WorkbenchTask` in `shared/workbench.ts` | Split into `WorkbenchTaskSummary` + `WorkbenchTaskDetail` |
-| `getWorkbench` action on workspace actor | Replaced by `getWorkspaceSummary` |
-| `TaskWorkbenchClient` interface | Replaced by `InterestManager` + `useInterest` hook |
-| All `useQuery` with `refetchInterval` in `workspace-dashboard.tsx` | Replaced by `useInterest` |
-| All `useQuery` with `refetchInterval` in `terminal-pane.tsx` | Replaced by `useInterest` |
-| Mock workbench client (`packages/client/src/mock/workbench-client.ts`) | Replaced by `MockInterestManager` |
+| `getWorkbench` action on organization actor | Replaced by `getWorkspaceSummary` |
+| `TaskWorkbenchClient` interface | Replaced by `SubscriptionManager` + `useSubscription` hook |
+| All `useQuery` with `refetchInterval` in `organization-dashboard.tsx` | Replaced by `useSubscription` |
+| All `useQuery` with `refetchInterval` in `terminal-pane.tsx` | Replaced by `useSubscription` |
+| Mock workbench client (`packages/client/src/mock/workbench-client.ts`) | Replaced by `MockSubscriptionManager` |
 
 ---
 
@@ -867,27 +867,27 @@ const createSession = useMutation({
 Implement in this order to keep the system working at each step:
 
 ### Phase 1: Types and backend materialization
-1. Add new types to `packages/shared` (`WorkbenchTaskSummary`, `WorkbenchTaskDetail`, `WorkbenchSessionSummary`, `WorkbenchSessionDetail`, `WorkspaceSummarySnapshot`, event types).
-2. Add `taskSummaries` table to workspace actor schema.
-3. Add `applyTaskSummaryUpdate`, `removeTaskSummary`, `getWorkspaceSummary` actions to workspace actor.
+1. Add new types to `packages/shared` (`WorkbenchTaskSummary`, `WorkbenchTaskDetail`, `WorkbenchSessionSummary`, `WorkbenchSessionDetail`, `OrganizationSummarySnapshot`, event types).
+2. Add `taskSummaries` table to organization actor schema.
+3. Add `applyTaskSummaryUpdate`, `removeTaskSummary`, `getWorkspaceSummary` actions to organization actor.
 4. Add `getTaskDetail`, `getSessionDetail` actions to task actor.
 5. Replace all `notifyWorkbenchUpdated` call sites with `broadcastTaskUpdate` that pushes summary + broadcasts detail with payload.
 6. Change app actor broadcast to include snapshot payload.
 7. Change sandbox actor broadcast to include process list payload.
 8. Add one-time reconciliation action to populate `taskSummaries` table from existing task actors (run on startup or on-demand).
 
-### Phase 2: Client interest manager
-9. Add `InterestManager` interface, `RemoteInterestManager`, `MockInterestManager` to `packages/client`.
+### Phase 2: Client subscription manager
+9. Add `SubscriptionManager` interface, `RemoteSubscriptionManager`, `MockSubscriptionManager` to `packages/client`.
 10. Add topic definitions registry.
-11. Add `useInterest` hook.
+11. Add `useSubscription` hook.
 12. Add `connectWorkspace`, `connectTask`, `connectSandbox`, `getWorkspaceSummary`, `getTaskDetail`, `getSessionDetail` to `BackendClient`.
 
 ### Phase 3: Frontend migration
-13. Replace `useMockAppSnapshot` with `useInterest("app", ...)`.
-14. Replace `MockLayout` workbench subscription with `useInterest("workspace", ...)`.
-15. Replace task detail view with `useInterest("task", ...)` + `useInterest("session", ...)`.
-16. Replace `workspace-dashboard.tsx` polling queries with `useInterest`.
-17. Replace `terminal-pane.tsx` polling queries with `useInterest`.
+13. Replace `useMockAppSnapshot` with `useSubscription("app", ...)`.
+14. Replace `MockLayout` workbench subscription with `useSubscription("organization", ...)`.
+15. Replace task detail view with `useSubscription("task", ...)` + `useSubscription("session", ...)`.
+16. Replace `organization-dashboard.tsx` polling queries with `useSubscription`.
+17. Replace `terminal-pane.tsx` polling queries with `useSubscription`.
 18. Remove manual `refetch()` calls from mutations.
 
 ### Phase 4: Cleanup
@@ -902,10 +902,10 @@ Implement in this order to keep the system working at each step:
 Add doc comments at these locations:
 
 - **Topic definitions** — explain the materialized state pattern, why events carry full entity state instead of patches, and the relationship between topics.
-- **`broadcastTaskUpdate` helper** — explain the dual-broadcast pattern (push summary to workspace + broadcast detail to direct subscribers).
-- **`InterestManager` interface** — explain the grace period, deduplication, and why mock/remote share the same interface.
-- **`useInterest` hook** — explain `useSyncExternalStore` integration, null params for conditional interest, and how params key stabilization works.
-- **Workspace actor `taskSummaries` table** — explain this is a materialized read projection maintained by task actor pushes, not a source of truth.
+- **`broadcastTaskUpdate` helper** — explain the dual-broadcast pattern (push summary to organization + broadcast detail to direct subscribers).
+- **`SubscriptionManager` interface** — explain the grace period, deduplication, and why mock/remote share the same interface.
+- **`useSubscription` hook** — explain `useSyncExternalStore` integration, null params for conditional interest, and how params key stabilization works.
+- **Organization actor `taskSummaries` table** — explain this is a materialized read projection maintained by task actor pushes, not a source of truth.
 - **`applyTaskSummaryUpdate` action** — explain this is the write path for the materialized projection, called by task actors, not by clients.
 - **`getWorkspaceSummary` action** — explain this reads from local SQLite only, no fan-out, and why that's the correct pattern.
 
|
|||
|
||||
## 8. Testing
|
||||
|
||||
- Interest manager unit tests: subscribe/unsubscribe lifecycle, grace period, deduplication, event application.
|
||||
- Mock implementation tests: verify same behavior as remote through shared test suite against the `InterestManager` interface.
|
||||
- Subscription manager unit tests: subscribe/unsubscribe lifecycle, grace period, deduplication, event application.
|
||||
- Mock implementation tests: verify same behavior as remote through shared test suite against the `SubscriptionManager` interface.
|
||||
- Backend integration: verify `applyTaskSummaryUpdate` correctly materializes and broadcasts.
|
||||
- E2E: verify that a task mutation (e.g. rename) updates the sidebar in realtime without polling.
|
||||
|
|
|
|||
|
|
@@ -28,7 +28,7 @@ The goal is not just to make individual endpoints faster. The goal is to move Fo
 
 ### Workbench
 
-- `getWorkbench` still represents a monolithic workspace read that aggregates repo, project, and task state.
+- `getWorkbench` still represents a monolithic organization read that aggregates repo, repository, and task state.
 - The remote workbench store still responds to every event by pulling a full fresh snapshot.
 - Some task/workbench detail is still too expensive to compute inline and too broad to refresh after every mutation.
@@ -57,7 +57,7 @@ Requests should not block on provider calls, repo sync, sandbox provisioning, tr
 
 ### View-model rule
 
 - App shell view connects to app/session state and only the org actors visible on screen.
-- Workspace/task-list view connects to a workspace-owned summary projection.
+- Organization/task-list view connects to an organization-owned summary projection.
 - Task detail view connects directly to the selected task actor.
 - Sandbox/session detail connects only when the user opens that detail.
@@ -99,7 +99,7 @@ The app shell should stop using `/app/snapshot` as the steady-state read model.
 
 #### Changes
 
-1. Introduce a small app-shell projection owned by the app workspace actor:
+1. Introduce a small app-shell projection owned by the app organization actor:
    - auth status
    - current user summary
    - active org id
@@ -121,7 +121,7 @@ The app shell should stop using `/app/snapshot` as the steady-state read model.
 
 #### Likely files
 
-- `foundry/packages/backend/src/actors/workspace/app-shell.ts`
+- `foundry/packages/backend/src/actors/organization/app-shell.ts`
 - `foundry/packages/client/src/backend-client.ts`
 - `foundry/packages/client/src/remote/app-client.ts`
 - `foundry/packages/shared/src/app-shell.ts`
@@ -133,42 +133,42 @@ The app shell should stop using `/app/snapshot` as the steady-state read model.
 - Selecting an org returns quickly and the UI updates from actor events.
 - App shell refresh cost is bounded by visible state, not every eligible organization on every poll.
 
-### 3. Workspace summary becomes a projection, not a full snapshot
+### 3. Organization summary becomes a projection, not a full snapshot
 
-The task list should read a workspace-owned summary projection instead of calling into every task actor on each refresh.
+The task list should read an organization-owned summary projection instead of calling into every task actor on each refresh.
 
 #### Changes
 
-1. Define a durable workspace summary model with only list-screen fields:
+1. Define a durable organization summary model with only list-screen fields:
    - repo summary
-   - project summary
+   - repository summary
    - task summary
    - selected/open task ids
   - unread/session status summary
   - coarse git/PR state summary
-2. Update workspace actor workflows so task/project changes incrementally update this projection.
+2. Update organization actor workflows so task/repository changes incrementally update this projection.
 3. Change `getWorkbench` to return the projection only.
 4. Change `workbenchUpdated` from "invalidate and refetch everything" to "here is the updated projection version or changed entity ids".
 5. Remove task-actor fan-out from the default list read path.
 
 #### Likely files
 
-- `foundry/packages/backend/src/actors/workspace/actions.ts`
-- `foundry/packages/backend/src/actors/project/actions.ts`
+- `foundry/packages/backend/src/actors/organization/actions.ts`
+- `foundry/packages/backend/src/actors/repository/actions.ts`
 - `foundry/packages/backend/src/actors/task/index.ts`
 - `foundry/packages/backend/src/actors/task/workbench.ts`
-- task/workspace DB schema and migrations
+- task/organization DB schema and migrations
 - `foundry/packages/client/src/remote/workbench-client.ts`
 
 #### Acceptance criteria
 
 - Workbench list refresh does not call every task actor.
 - A websocket event does not force a full cross-actor rebuild.
-- Initial task-list load time scales roughly with workspace summary size, not repo count times task count times detail reads.
+- Initial task-list load time scales roughly with organization summary size, not repo count times task count times detail reads.
 
 ### 4. Task detail moves to direct actor reads and events
 
-Heavy task detail should move out of the workspace summary and into the selected task actor.
+Heavy task detail should move out of the organization summary and into the selected task actor.
 
 #### Changes
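The materialized-projection write path the hunks above describe can be sketched as follows. This is an editorial sketch, not code from the patch: the `TaskSummary` shape, the in-memory `Map` standing in for the organization actor's `taskSummaries` SQLite table, and the `broadcast` callback are all hypothetical stand-ins.

```typescript
// Hypothetical list-screen summary shape (only cheap fields, no heavy detail).
interface TaskSummary {
  taskId: string;
  title: string;
  status: string;
  updatedAt: number;
}

// Stand-in for the organization actor's `taskSummaries` table.
const taskSummaries = new Map<string, TaskSummary>();

// Called by task actors (never by clients): upsert one row, then emit a
// narrow "changed ids" event instead of invalidating the whole snapshot.
function applyTaskSummaryUpdate(
  summary: TaskSummary,
  broadcast: (changedIds: string[]) => void,
): void {
  const existing = taskSummaries.get(summary.taskId);
  // Ignore stale pushes so out-of-order updates cannot regress the projection.
  if (existing && existing.updatedAt > summary.updatedAt) return;
  taskSummaries.set(summary.taskId, summary);
  broadcast([summary.taskId]);
}
```

The key property is that a refresh reads only this local table, so list-load cost scales with summary size rather than with task-actor fan-out.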
@@ -258,7 +258,7 @@ Do not delete bootstrap endpoints first. Shrink them after the subscription mode
 4. `06-daytona-provisioning-staged-background-flow.md`
 5. App shell realtime subscription model
 6. `02-repo-overview-from-cached-projection.md`
-7. Workspace summary projection
+7. Organization summary projection
 8. `04-workbench-session-creation-without-inline-provisioning.md`
 9. `05-workbench-snapshot-from-derived-state.md`
 10. Task-detail direct actor reads/subscriptions
@@ -270,7 +270,7 @@ Do not delete bootstrap endpoints first. Shrink them after the subscription mode
 - Runtime hardening removes the most dangerous correctness bug before more UI load shifts onto actor connections.
 - The first async workflow items reduce the biggest user-visible stalls quickly.
 - App shell realtime is smaller and lower-risk than the workbench migration, and it removes the current polling loop.
-- Workspace summary and task-detail split should happen after the async workflow moves so the projection model does not encode old synchronous assumptions.
+- Organization summary and task-detail split should happen after the async workflow moves so the projection model does not encode old synchronous assumptions.
 - Auth simplification is valuable but not required to remove the current refresh/polling/runtime problems.
 
 ## Observability Requirements
@@ -291,7 +291,7 @@ Each log line should include a request id or actor/event correlation id where po
 
 1. Ship runtime hardening and observability first.
 2. Ship app-shell realtime behind a client flag while keeping snapshot bootstrap.
-3. Ship workspace summary projection behind a separate flag.
+3. Ship organization summary projection behind a separate flag.
 4. Migrate one heavy detail pane at a time off the monolithic workbench payload.
 5. Remove polling once the matching event path is proven stable.
 6. Only then remove or demote the old snapshot-heavy steady-state flows.
@@ -10,8 +10,8 @@ That makes a user-facing action depend on queue-backed and provider-backed work
 
 ## Current Code Context
 
-- Workspace entry point: `foundry/packages/backend/src/actors/workspace/actions.ts`
-- Project task creation path: `foundry/packages/backend/src/actors/project/actions.ts`
+- Organization entry point: `foundry/packages/backend/src/actors/organization/actions.ts`
+- Repository task creation path: `foundry/packages/backend/src/actors/repository/actions.ts`
 - Task action surface: `foundry/packages/backend/src/actors/task/index.ts`
 - Task workflow: `foundry/packages/backend/src/actors/task/workflow/index.ts`
 - Task init/provision steps: `foundry/packages/backend/src/actors/task/workflow/init.ts`
@@ -33,8 +33,8 @@ That makes a user-facing action depend on queue-backed and provider-backed work
    - persisting any immediately-known metadata
    - returning the current task record
 3. After initialize completes, enqueue `task.command.provision` with `wait: false`.
-4. Change `workspace.createTask` to:
-   - create or resolve the project
+4. Change `organization.createTask` to:
+   - create or resolve the repository
    - create the task actor
    - call `task.initialize(...)`
    - stop awaiting `task.provision(...)`
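The accept-fast/provision-later split in the hunk above can be sketched like this. Editorial sketch only: `TaskRecord`, the injected `initialize`/`enqueue` callbacks, and the command-options shape are hypothetical; only the command name `task.command.provision` and `wait: false` come from the patch.

```typescript
interface TaskRecord {
  id: string;
  status: "initialized" | "provisioning" | "ready";
}

async function createTask(
  initialize: () => Promise<TaskRecord>,
  enqueue: (command: string, opts: { wait: boolean }) => void,
): Promise<TaskRecord> {
  // Fast path: persist immediately-known metadata and return the record.
  const task = await initialize();
  // Slow path: provisioning is fire-and-forget, so the user-facing request
  // never blocks on queue-backed or provider-backed work.
  enqueue("task.command.provision", { wait: false });
  return task;
}
```

The client then observes provisioning progress via task state/events rather than the create response.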
@@ -51,12 +51,12 @@ That makes a user-facing action depend on queue-backed and provider-backed work
 
 ## Files Likely To Change
 
-- `foundry/packages/backend/src/actors/workspace/actions.ts`
-- `foundry/packages/backend/src/actors/project/actions.ts`
+- `foundry/packages/backend/src/actors/organization/actions.ts`
+- `foundry/packages/backend/src/actors/repository/actions.ts`
 - `foundry/packages/backend/src/actors/task/index.ts`
 - `foundry/packages/backend/src/actors/task/workflow/index.ts`
 - `foundry/packages/backend/src/actors/task/workflow/init.ts`
-- `foundry/packages/frontend/src/components/workspace-dashboard.tsx`
+- `foundry/packages/frontend/src/components/organization-dashboard.tsx`
 - `foundry/packages/client/src/remote/workbench-client.ts`
 
 ## Client Impact
@@ -15,11 +15,11 @@ The frontend polls repo overview repeatedly, so this design multiplies slow work
 
 ## Current Code Context
 
-- Workspace overview entry point: `foundry/packages/backend/src/actors/workspace/actions.ts`
-- Project overview implementation: `foundry/packages/backend/src/actors/project/actions.ts`
-- Branch sync poller: `foundry/packages/backend/src/actors/project-branch-sync/index.ts`
-- PR sync poller: `foundry/packages/backend/src/actors/project-pr-sync/index.ts`
-- Repo overview client polling: `foundry/packages/frontend/src/components/workspace-dashboard.tsx`
+- Organization overview entry point: `foundry/packages/backend/src/actors/organization/actions.ts`
+- Repository overview implementation: `foundry/packages/backend/src/actors/repository/actions.ts`
+- Branch sync poller: `foundry/packages/backend/src/actors/repository-branch-sync/index.ts`
+- PR sync poller: `foundry/packages/backend/src/actors/repository-pr-sync/index.ts`
+- Repo overview client polling: `foundry/packages/frontend/src/components/organization-dashboard.tsx`
 
 ## Target Contract
@@ -30,27 +30,27 @@ The frontend polls repo overview repeatedly, so this design multiplies slow work
 ## Proposed Fix
 
 1. Remove inline `forceProjectSync()` from `getRepoOverview`.
-2. Add freshness fields to the project projection, for example:
+2. Add freshness fields to the repository projection, for example:
    - `branchSyncAt`
    - `prSyncAt`
    - `branchSyncStatus`
    - `prSyncStatus`
 3. Let the existing polling actors own cache refresh.
-4. If the client needs a manual refresh, add a non-blocking command such as `project.requestOverviewRefresh` that:
+4. If the client needs a manual refresh, add a non-blocking command such as `repository.requestOverviewRefresh` that:
    - enqueues refresh work
    - updates sync status to `queued` or `running`
    - returns immediately
-5. Keep `getRepoOverview` as a pure read over project SQLite state.
+5. Keep `getRepoOverview` as a pure read over repository SQLite state.
 
 ## Files Likely To Change
 
-- `foundry/packages/backend/src/actors/workspace/actions.ts`
-- `foundry/packages/backend/src/actors/project/actions.ts`
-- `foundry/packages/backend/src/actors/project/db/schema.ts`
-- `foundry/packages/backend/src/actors/project/db/migrations.ts`
-- `foundry/packages/backend/src/actors/project-branch-sync/index.ts`
-- `foundry/packages/backend/src/actors/project-pr-sync/index.ts`
-- `foundry/packages/frontend/src/components/workspace-dashboard.tsx`
+- `foundry/packages/backend/src/actors/organization/actions.ts`
+- `foundry/packages/backend/src/actors/repository/actions.ts`
+- `foundry/packages/backend/src/actors/repository/db/schema.ts`
+- `foundry/packages/backend/src/actors/repository/db/migrations.ts`
+- `foundry/packages/backend/src/actors/repository-branch-sync/index.ts`
+- `foundry/packages/backend/src/actors/repository-pr-sync/index.ts`
+- `foundry/packages/frontend/src/components/organization-dashboard.tsx`
 
 ## Client Impact
@@ -10,20 +10,20 @@ These flows depend on repo/network state and can take minutes. They should not h
 
 ## Current Code Context
 
-- Workspace repo action entry point: `foundry/packages/backend/src/actors/workspace/actions.ts`
-- Project repo action implementation: `foundry/packages/backend/src/actors/project/actions.ts`
-- Branch/task index state lives in the project actor SQLite DB.
+- Organization repo action entry point: `foundry/packages/backend/src/actors/organization/actions.ts`
+- Repository repo action implementation: `foundry/packages/backend/src/actors/repository/actions.ts`
+- Branch/task index state lives in the repository actor SQLite DB.
 - Current forced sync uses the PR and branch polling actors before and after the action.
 
 ## Target Contract
 
 - Repo-affecting actions are accepted quickly and run in the background.
-- The project actor owns a durable action record with progress and final result.
-- Clients observe status via project/task state instead of waiting for a single response.
+- The repository actor owns a durable action record with progress and final result.
+- Clients observe status via repository/task state instead of waiting for a single response.
 
 ## Proposed Fix
 
-1. Introduce a project-level workflow/job model for repo actions, for example:
+1. Introduce a repository-level workflow/job model for repo actions, for example:
    - `sync_repo`
    - `restack_repo`
    - `restack_subtree`
@@ -49,11 +49,11 @@ These flows depend on repo/network state and can take minutes. They should not h
 
 ## Files Likely To Change
 
-- `foundry/packages/backend/src/actors/workspace/actions.ts`
-- `foundry/packages/backend/src/actors/project/actions.ts`
-- `foundry/packages/backend/src/actors/project/db/schema.ts`
-- `foundry/packages/backend/src/actors/project/db/migrations.ts`
-- `foundry/packages/frontend/src/components/workspace-dashboard.tsx`
+- `foundry/packages/backend/src/actors/organization/actions.ts`
+- `foundry/packages/backend/src/actors/repository/actions.ts`
+- `foundry/packages/backend/src/actors/repository/db/schema.ts`
+- `foundry/packages/backend/src/actors/repository/db/migrations.ts`
+- `foundry/packages/frontend/src/components/organization-dashboard.tsx`
 - Any shared types in `foundry/packages/shared/src`
 
 ## Client Impact
@@ -70,5 +70,5 @@ These flows depend on repo/network state and can take minutes. They should not h
 ## Implementation Notes
 
 - Keep validation cheap in the request path; expensive repo inspection belongs in the workflow.
-- If job rows are added, decide whether they are project-owned only or also mirrored into history events for UI consumption.
+- If job rows are added, decide whether they are repository-owned only or also mirrored into history events for UI consumption.
 - Fresh-agent check: branch-backed task creation and explicit repo stack actions should use the same background job/status vocabulary where possible.
@@ -8,7 +8,7 @@ Creating a workbench tab currently provisions the whole task if no active sandbo
 
 ## Current Code Context
 
-- Workspace workbench action entry point: `foundry/packages/backend/src/actors/workspace/actions.ts`
+- Organization workbench action entry point: `foundry/packages/backend/src/actors/organization/actions.ts`
 - Task workbench behavior: `foundry/packages/backend/src/actors/task/workbench.ts`
 - Task provision action: `foundry/packages/backend/src/actors/task/index.ts`
 - Sandbox session creation path: `foundry/packages/backend/src/actors/sandbox-instance/index.ts`
@@ -36,7 +36,7 @@ Creating a workbench tab currently provisions the whole task if no active sandbo
 
 ## Files Likely To Change
 
-- `foundry/packages/backend/src/actors/workspace/actions.ts`
+- `foundry/packages/backend/src/actors/organization/actions.ts`
 - `foundry/packages/backend/src/actors/task/workbench.ts`
 - `foundry/packages/backend/src/actors/task/index.ts`
 - `foundry/packages/backend/src/actors/task/db/schema.ts`
@@ -17,7 +17,7 @@ The remote workbench client refreshes after each action and on update events, so
 
 ## Current Code Context
 
-- Workspace workbench snapshot builder: `foundry/packages/backend/src/actors/workspace/actions.ts`
+- Organization workbench snapshot builder: `foundry/packages/backend/src/actors/organization/actions.ts`
 - Task workbench snapshot builder: `foundry/packages/backend/src/actors/task/workbench.ts`
 - Sandbox session event persistence: `foundry/packages/backend/src/actors/sandbox-instance/persist.ts`
 - Remote workbench client refresh loop: `foundry/packages/client/src/remote/workbench-client.ts`
@@ -43,7 +43,7 @@ The remote workbench client refreshes after each action and on update events, so
 
 ## Files Likely To Change
 
-- `foundry/packages/backend/src/actors/workspace/actions.ts`
+- `foundry/packages/backend/src/actors/organization/actions.ts`
 - `foundry/packages/backend/src/actors/task/workbench.ts`
 - `foundry/packages/backend/src/actors/task/db/schema.ts`
 - `foundry/packages/backend/src/actors/task/db/migrations.ts`
@@ -17,8 +17,8 @@ Authentication and user identity are conflated into a single `appSessions` table
 ## Current Code Context
 
 - Custom OAuth flow: `foundry/packages/backend/src/services/app-github.ts` (`buildAuthorizeUrl`, `exchangeCode`, `getViewer`)
-- Session + identity management: `foundry/packages/backend/src/actors/workspace/app-shell.ts` (`ensureAppSession`, `updateAppSession`, `initGithubSession`, `syncGithubOrganizations`)
-- Session schema: `foundry/packages/backend/src/actors/workspace/db/schema.ts` (`appSessions` table)
+- Session + identity management: `foundry/packages/backend/src/actors/organization/app-shell.ts` (`ensureAppSession`, `updateAppSession`, `initGithubSession`, `syncGithubOrganizations`)
+- Session schema: `foundry/packages/backend/src/actors/organization/db/schema.ts` (`appSessions` table)
 - Shared types: `foundry/packages/shared/src/app-shell.ts` (`FoundryUser`, `FoundryAppSnapshot`)
 - HTTP routes: `foundry/packages/backend/src/index.ts` (`resolveSessionId`, `/v1/auth/github/*`, all `/v1/app/*` routes)
 - Frontend session persistence: `foundry/packages/client/src/backend-client.ts` (`persistAppSessionId`, `x-foundry-session` header, `foundrySession` URL param extraction)
@@ -41,7 +41,7 @@ Authentication and user identity are conflated into a single `appSessions` table
 - BetterAuth uses a custom adapter that routes all DB operations through RivetKit actors.
 - Each user has their own actor. BetterAuth's `user`, `session`, and `account` tables live in the per-user actor's SQLite via `c.db`.
 - The adapter resolves which actor to target based on the primary key BetterAuth passes for each operation (user ID, session ID, account ID).
-- A lightweight **session index** on the app-shell workspace actor maps session tokens → user actor identity, so inbound requests can be routed to the correct user actor without knowing the user ID upfront.
+- A lightweight **session index** on the app-shell organization actor maps session tokens → user actor identity, so inbound requests can be routed to the correct user actor without knowing the user ID upfront.
 
 ### Canonical user record
@@ -70,9 +70,9 @@ BetterAuth expects a single database. Foundry uses per-actor SQLite — each act
 
 When an HTTP request arrives, the backend has a session token but doesn't know the user ID yet. BetterAuth calls adapter methods like `findSession(sessionId)` to resolve this. But which actor holds that session row?
 
-**Solution: session index on the app-shell workspace actor.**
+**Solution: session index on the app-shell organization actor.**
 
-The app-shell workspace actor (which already handles auth routing) maintains a lightweight index table:
+The app-shell organization actor (which already handles auth routing) maintains a lightweight index table:
 
 ```
 sessionIndex
@@ -83,7 +83,7 @@ sessionIndex
 
 The adapter flow for session lookup:
 1. BetterAuth calls `findSession(sessionId)`.
-2. Adapter queries `sessionIndex` on the workspace actor to resolve `userActorKey`.
+2. Adapter queries `sessionIndex` on the organization actor to resolve `userActorKey`.
 3. Adapter gets the user actor handle and queries BetterAuth's `session` table in that actor's `c.db`.
 
 The adapter flow for user creation (OAuth callback):
@@ -91,12 +91,12 @@ The adapter flow for user creation (OAuth callback):
 2. Adapter resolves the GitHub numeric ID from the user data.
 3. Adapter creates/gets the user actor keyed by GitHub ID.
 4. Adapter inserts into BetterAuth's `user` table in that actor's `c.db`.
-5. When `createSession` follows, adapter writes to the user actor's `session` table AND inserts into the workspace actor's `sessionIndex`.
+5. When `createSession` follows, adapter writes to the user actor's `session` table AND inserts into the organization actor's `sessionIndex`.
 
 ### User actor shape
 
 ```text
-UserActor (key: ["ws", workspaceId, "user", githubNumericId])
+UserActor (key: ["ws", organizationId, "user", githubNumericId])
 ├── BetterAuth tables: user, session, account (managed by BetterAuth schema)
 ├── userProfiles (app-specific: eligibleOrganizationIds, starterRepoStatus, roleLabel)
 └── sessionState (app-specific: activeOrganizationId per session)
@@ -127,15 +127,15 @@ The adapter must inspect `model` and `where` to determine the target actor:
 | Model | Routing strategy |
 |-------|-----------------|
 | `user` (by id) | User actor key derived directly from user ID |
-| `user` (by email) | `emailIndex` on workspace actor → user actor key |
-| `session` (by token) | `sessionIndex` on workspace actor → user actor key |
-| `session` (by id) | `sessionIndex` on workspace actor → user actor key |
+| `user` (by email) | `emailIndex` on organization actor → user actor key |
+| `session` (by token) | `sessionIndex` on organization actor → user actor key |
+| `session` (by id) | `sessionIndex` on organization actor → user actor key |
 | `session` (by userId) | User actor key derived directly from userId |
 | `account` | Always has `userId` in where or data → user actor key |
-| `verification` | Workspace actor (not user-scoped — used for email verification, password reset) |
+| `verification` | Organization actor (not user-scoped — used for email verification, password reset) |
 
-On `create` for `session` model: write to user actor's `session` table AND insert into workspace actor's `sessionIndex`.
-On `delete` for `session` model: delete from user actor's `session` table AND remove from workspace actor's `sessionIndex`.
+On `create` for `session` model: write to user actor's `session` table AND insert into organization actor's `sessionIndex`.
+On `delete` for `session` model: delete from user actor's `session` table AND remove from organization actor's `sessionIndex`.
 
 #### Adapter construction
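The routing table in the hunk above can be sketched as a single dispatch function. Editorial sketch, not from the patch: the `Map`-backed indexes, the `ActorKey` shape, and `userActorKeyFromUserId` are hypothetical stand-ins for the actor-backed tables; only the model names and routing rules come from the table.

```typescript
type ActorKey = string[];

// In-memory stand-ins for the two index tables on the app-shell actor.
const sessionIndex = new Map<string, ActorKey>(); // token/id → user actor key
const emailIndex = new Map<string, ActorKey>();   // email → user actor key

const orgActorKey: ActorKey = ["ws", "org-1"];

// Hypothetical: user IDs are GitHub numeric IDs, so the user actor key can
// be derived directly from the ID without any index lookup.
function userActorKeyFromUserId(userId: string): ActorKey {
  return ["ws", "org-1", "user", userId];
}

// Route a BetterAuth operation to the actor that owns the row.
// Returns null when an index has no entry ("not found" to BetterAuth).
function resolveTargetActor(
  model: "user" | "session" | "account" | "verification",
  where: Record<string, string | undefined>,
): ActorKey | null {
  switch (model) {
    case "user":
      if (where.id) return userActorKeyFromUserId(where.id);
      if (where.email) return emailIndex.get(where.email) ?? null;
      return null;
    case "session": {
      if (where.userId) return userActorKeyFromUserId(where.userId);
      const key = where.token ?? where.id;
      return key ? sessionIndex.get(key) ?? null : null;
    }
    case "account":
      return where.userId ? userActorKeyFromUserId(where.userId) : null;
    case "verification":
      return orgActorKey; // not user-scoped
  }
}
```

A real adapter would then resolve the returned key to an actor handle and run the query against that actor's `c.db`.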
@@ -188,14 +188,14 @@ session: {
 
 #### BetterAuth core tables
 
-Four tables, all in the per-user actor's SQLite (except `verification` which goes on workspace actor):
+Four tables, all in the per-user actor's SQLite (except `verification` which goes on organization actor):
 
 **`user`**: `id`, `name`, `email`, `emailVerified`, `image`, `createdAt`, `updatedAt`
 **`session`**: `id`, `token`, `userId`, `expiresAt`, `ipAddress?`, `userAgent?`, `createdAt`, `updatedAt`
 **`account`**: `id`, `userId`, `accountId` (GitHub numeric ID), `providerId` ("github"), `accessToken?`, `refreshToken?`, `scope?`, `createdAt`, `updatedAt`
 **`verification`**: `id`, `identifier`, `value`, `expiresAt`, `createdAt`, `updatedAt`
 
-For `findUserByEmail`, a secondary index (email → user actor key) is needed on the workspace actor alongside `sessionIndex`.
+For `findUserByEmail`, a secondary index (email → user actor key) is needed on the organization actor alongside `sessionIndex`.
 
 ## Implementation Plan
@@ -210,12 +210,12 @@ Research confirms:
 
 1. **Prototype the adapter + user actor end-to-end** — wire up `createAdapterFactory` with a minimal actor-routed implementation. Confirm that BetterAuth's GitHub OAuth flow completes successfully with user/session/account records landing in the correct per-user actor's SQLite.
 2. **Verify `findOne` for session model** — confirm the `where` clause BetterAuth passes for session lookup includes the `token` field (not just `id`), so the adapter can route via `sessionIndex` keyed by token.
-3. **Measure cookie-cached vs uncached request latency** — confirm that with cookie caching enabled, the adapter is not called on every request, and that the uncached fallback (workspace actor index → user actor → session table) is acceptable.
+3. **Measure cookie-cached vs uncached request latency** — confirm that with cookie caching enabled, the adapter is not called on every request, and that the uncached fallback (organization actor index → user actor → session table) is acceptable.
 
 ### Phase 1: User actor + adapter infrastructure (no behavior change)
 
 1. **Install `better-auth` package** in `packages/backend`.
-2. **Define `UserActor`** with actor key `["ws", workspaceId, "user", githubNumericId]`. Include BetterAuth's required tables (`user`, `session`, `account`) plus app-specific tables in its schema.
+2. **Define `UserActor`** with actor key `["ws", organizationId, "user", githubNumericId]`. Include BetterAuth's required tables (`user`, `session`, `account`) plus app-specific tables in its schema.
 3. **Create `userProfiles` table** in user actor schema:
 ```
 userProfiles
@@ -237,7 +237,7 @@ Research confirms:
 ├── createdAt (integer)
 ├── updatedAt (integer)
 ```
-5. **Create `sessionIndex` and `emailIndex` tables** on the app-shell workspace actor:
+5. **Create `sessionIndex` and `emailIndex` tables** on the app-shell organization actor:
 ```
 sessionIndex
 ├── sessionId (text, PK)
@@ -256,7 +256,7 @@ Research confirms:
 ### Phase 2: Migrate OAuth flow to BetterAuth
 
 1. **Replace `startAppGithubAuth`** — delegate to BetterAuth's GitHub OAuth initiation instead of hand-rolling `buildAuthorizeUrl` + `oauthState` + `oauthStateExpiresAt`.
-2. **Replace `completeAppGithubAuth`** — delegate to BetterAuth's callback handler. BetterAuth creates/updates the user record in the user actor and creates a signed session. The adapter writes to `sessionIndex` on the workspace actor.
+2. **Replace `completeAppGithubAuth`** — delegate to BetterAuth's callback handler. BetterAuth creates/updates the user record in the user actor and creates a signed session. The adapter writes to `sessionIndex` on the organization actor.
 3. **After BetterAuth callback completes**, populate `userProfiles` in the user actor with app-specific fields and enqueue the slow org sync (same background workflow pattern as today).
 4. **Replace `signOutApp`** — delegate to BetterAuth session invalidation. Adapter removes entry from `sessionIndex`.
 5. **Update `resolveSessionId`** in `index.ts` — validate the session via BetterAuth (which routes through the adapter → `sessionIndex` → user actor). BetterAuth verifies the signature and checks expiration.
@ -288,18 +288,18 @@ Research confirms:
|
|||
## Constraints
|
||||
|
||||
- **Actor-routed adapter.** BetterAuth does not natively support per-user actor databases. The custom adapter must route every DB operation to the correct actor. This adds a layer of indirection and latency (actor handle resolution + message) on adapter calls.
|
||||
- **Session index cost is mitigated by cookie caching.** With `cookieCache` enabled, BetterAuth validates sessions from a signed cookie on most requests — the adapter (and thus the `sessionIndex` lookup + user actor round-trip) is only called when the cache expires or on writes. Without caching, every authenticated request would hit the workspace actor's `sessionIndex` table then the user actor.
|
||||
- **Two-actor write on session create/destroy.** Creating or destroying a session requires writing to both the user actor (BetterAuth's `session` table) and the workspace actor (`sessionIndex`). These must be consistent — if the user actor write succeeds but the index write fails, the session exists but is unreachable.
|
||||
- **Session index cost is mitigated by cookie caching.** With `cookieCache` enabled, BetterAuth validates sessions from a signed cookie on most requests — the adapter (and thus the `sessionIndex` lookup + user actor round-trip) is only called when the cache expires or on writes. Without caching, every authenticated request would hit the organization actor's `sessionIndex` table then the user actor.
|
||||
- **Two-actor write on session create/destroy.** Creating or destroying a session requires writing to both the user actor (BetterAuth's `session` table) and the organization actor (`sessionIndex`). These must be consistent — if the user actor write succeeds but the index write fails, the session exists but is unreachable.
|
||||

- **Background org sync pattern must be preserved.** The fast-path/slow-path split (`initGithubSession` returns immediately, `syncGithubOrganizations` runs in workflow queue) is critical for avoiding proxy timeout retries. BetterAuth handles the OAuth exchange, but the org sync stays as a background workflow.

- **`GitHubAppClient` is still needed.** BetterAuth replaces the OAuth user-auth flow, but installation tokens, webhook verification, repo listing, and org listing are GitHub App operations that BetterAuth does not cover.

- **User ID migration.** Changing user IDs from `user-${slugify(login)}` to GitHub numeric IDs affects `organizationMembers`, `seatAssignments`, and any cross-actor references to user IDs. Existing data needs a migration path.

- **`findUserByEmail` requires a secondary index.** BetterAuth sometimes looks up users by email (e.g., account linking). An `emailIndex` table on the organization actor is needed. This must be kept in sync with the user actor's email field.

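A sketch of the two-hop session read described above — the organization actor's `sessionIndex` points at the owning user actor, whose `session` table is the source of truth. Both tables are stood in by plain maps here; every name is illustrative, not the real adapter or actor API:

```typescript
// Hypothetical in-memory stand-ins for the two actors' tables.
type SessionIndexRow = { token: string; userId: string };
type SessionRow = { token: string; userId: string; expiresAt: number };

const sessionIndex = new Map<string, SessionIndexRow>();  // organization actor
const userSessions = new Map<string, SessionRow[]>();     // user actor, keyed by userId

// Two-hop lookup: the index tells us which user actor owns the session,
// then that actor's session table decides whether it is valid.
function findSession(token: string, now: number): SessionRow | null {
  const indexRow = sessionIndex.get(token);
  if (!indexRow) return null; // unknown token
  const rows = userSessions.get(indexRow.userId) ?? [];
  const session = rows.find((s) => s.token === token);
  if (!session || session.expiresAt <= now) return null; // dangling index entry or expired
  return session;
}

// Seed one real session plus a dangling index entry with no session row.
sessionIndex.set("tok-1", { token: "tok-1", userId: "user-1" });
userSessions.set("user-1", [{ token: "tok-1", userId: "user-1", expiresAt: 1_000 }]);
sessionIndex.set("tok-2", { token: "tok-2", userId: "user-1" });

console.log(findSession("tok-1", 500)?.userId); // "user-1"
console.log(findSession("tok-2", 500));         // null — treated as "session not found"
```

The dangling `tok-2` entry resolves to "session not found", matching the consistency analysis in this document.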
## Risk Assessment

- **Adapter call context — RESOLVED.** Research confirms BetterAuth adapter methods are plain async functions with no request context dependency. The adapter closes over the RivetKit registry at init time and resolves actor handles on demand. No ambient `c` context needed.

- **Hot-path latency — MITIGATED.** Cookie caching (`cookieCache` with `strategy: "compact"`) means most authenticated requests validate the session from a signed cookie without calling the adapter at all. The adapter (and thus the actor round-trip) is only hit when the cache expires (configurable, e.g., every 5 minutes) or on writes. This makes the session index + user actor lookup acceptable.

- **Two-actor consistency.** Session create/destroy touches two actors (user actor + organization index). If either write fails, the system is in an inconsistent state. Recommended: write index first, then user actor. A dangling index entry pointing to a nonexistent session is benign — BetterAuth treats it as "session not found" and the user just re-authenticates.

- **Cookie vs header auth.** BetterAuth defaults to HTTP-only cookies (`better-auth.session_token`). The current system uses a custom `x-foundry-session` header with `localStorage`. BetterAuth supports `bearer` token mode for programmatic clients via its `bearer` plugin. Enable both for browser + API access.

- **Dev bootstrap flow.** `bootstrapAppGithubSession` bypasses the normal OAuth flow for local development. BetterAuth supports programmatic session creation via its internal adapter — the dev path can call the adapter's `create` method directly for the `session` and `account` models.

- **Actor lifecycle for users.** User actors are long-lived but low-traffic. RivetKit will idle/unload them. With cookie caching, cold-start only happens when the cache expires — not on every request. Acceptable.

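The index-first write ordering recommended for two-actor consistency can be sketched with the actor tables faked as maps (all names hypothetical):

```typescript
// Faked actor tables.
const index = new Map<string, { userId: string }>();    // organization actor sessionIndex
const sessions = new Map<string, { userId: string }>(); // user actor session table

// Index-first write: if the user-actor write fails, the leftover index entry is
// benign — reads find no session row and report "session not found".
function createSession(token: string, userId: string, userActorWrite: () => void): boolean {
  index.set(token, { userId }); // step 1: organization-side index
  try {
    userActorWrite();           // step 2: user actor session row
    sessions.set(token, { userId });
    return true;
  } catch {
    return false; // dangling index entry stays behind on purpose
  }
}

// Happy path: both writes land.
createSession("tok-ok", "user-1", () => {});
// Failure path: user actor write throws; only the benign index entry remains.
createSession("tok-fail", "user-1", () => { throw new Error("user actor unavailable"); });

console.log(sessions.has("tok-ok"));   // true
console.log(sessions.has("tok-fail")); // false — the unreachable session was never created
console.log(index.has("tok-fail"));    // true — benign dangling entry
```

The reverse ordering would be worse: a session row with no index entry is a session that exists but can never be found.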
@ -19,7 +19,7 @@ The governing policy now lives in `foundry/CLAUDE.md`:

- Backend actor entry points live under `foundry/packages/backend/src/actors`.
- Provider-backed long-running work lives under `foundry/packages/backend/src/providers`.
- The main UI consumers are:
  - `foundry/packages/frontend/src/components/organization-dashboard.tsx`
  - `foundry/packages/frontend/src/components/mock-layout.tsx`
  - `foundry/packages/client/src/remote/workbench-client.ts`
- Non-blocking examples already exist in the app-shell GitHub auth/import flows. Use those as the reference pattern for request returns plus background completion.

@ -32,7 +32,7 @@ The governing policy now lives in `foundry/CLAUDE.md`:

4. `06-daytona-provisioning-staged-background-flow.md`
5. App shell realtime subscription work from `00-end-to-end-async-realtime-plan.md`
6. `02-repo-overview-from-cached-projection.md`
7. Organization summary projection work from `00-end-to-end-async-realtime-plan.md`
8. `04-workbench-session-creation-without-inline-provisioning.md`
9. `05-workbench-snapshot-from-derived-state.md`
10. Task-detail direct subscription work from `00-end-to-end-async-realtime-plan.md`

@ -42,7 +42,7 @@ The governing policy now lives in `foundry/CLAUDE.md`:

- Runtime hardening and the first async workflow items remove the highest-risk correctness and timeout issues first.
- App shell realtime is a smaller migration than the workbench and removes the current polling loop early.
- Organization summary and task-detail subscription work are easier once long-running mutations already report durable background state.
- Auth simplification is important, but it should not block the snapshot/polling/runtime fixes.

## Fresh Agent Checklist

@ -24,8 +24,8 @@ be thorough and careful with your implementation. this is going to be the ground

- left sidebar is similar to the hf switch ui:
  - list each repo
  - under each repo, show all of the tasks
  - you should see all tasks for the entire organization here grouped by repo
- the main content area shows the current organization
  - there is a main agent session for the agent that's making the change, so show this by default
  - build a ui for interacting with sessions
  - see ~/sandbox-agent/frontend/packages/inspector/ for reference ui

@ -4,7 +4,7 @@

Replace the per-repo polling PR sync actor (`ProjectPrSyncActor`) and per-repo PR cache (`prCache` table) with a single organization-scoped `github-state` actor that owns all GitHub data (repos, PRs, members). All GitHub state updates flow exclusively through webhooks, with a one-shot full sync on initial connection. Manual reload actions are exposed per-entity (org, repo, PR) for recovery from missed webhooks.

Open PRs are surfaced in the left sidebar alongside tasks via a unified organization subscription topic, with lazy task/sandbox creation when a user clicks on a PR.

## Reference Implementation

@ -18,7 +18,7 @@ Use `git show 0aca2c7:<path>` to read the reference files. Adapt (don't copy bli

## Constraints

1. **No polling.** Delete `ProjectPrSyncActor` (`actors/repository-pr-sync/`), all references to it in handles/keys/index, and the `prCache` table in `RepositoryActor`'s DB schema. Remove `prSyncStatus`/`prSyncAt` from `getRepoOverview`.
2. **Keep `ProjectBranchSyncActor`.** This polls the local git clone (not GitHub API) and is the sandbox git status mechanism. It stays.
3. **Webhooks are the sole live update path.** The only GitHub API calls happen during:
   - Initial full sync on org connection/installation

@ -72,16 +72,16 @@ Replace the current TODO at `app-shell.ts:1521` with dispatch logic adapted from

When `github-state` receives a PR update (webhook or manual reload), it should:

1. Update its own `github_pull_requests` table
2. Call `notifyOrganizationUpdated()` → which broadcasts `organizationUpdated` to connected clients
3. If the PR branch matches an existing task's branch, update that task's `pullRequest` summary in the organization actor

### Organization Summary Changes

Extend `OrganizationSummarySnapshot` to include open PRs:


```typescript
export interface OrganizationSummarySnapshot {
  organizationId: string;
  repos: WorkbenchRepoSummary[];
  taskSummaries: WorkbenchTaskSummary[];
  openPullRequests: WorkbenchOpenPrSummary[]; // NEW

@ -103,13 +103,13 @@ export interface WorkbenchOpenPrSummary {
}
```


The organization actor fetches open PRs from the `github-state` actor when building the summary snapshot. PRs that already have an associated task (matched by branch name) should be excluded from `openPullRequests` (they already appear in `taskSummaries` with their `pullRequest` field populated).

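The branch-matching exclusion can be sketched as a pure function over trimmed-down shapes (the field names here are assumptions, not the real types):

```typescript
interface TaskLike { branch: string }
interface PrLike { id: string; headBranch: string }

// PRs whose head branch already belongs to a task are dropped from
// openPullRequests — they surface via that task's pullRequest field instead.
function openPrsWithoutTasks(prs: PrLike[], tasks: TaskLike[]): PrLike[] {
  const taskBranches = new Set(tasks.map((t) => t.branch));
  return prs.filter((pr) => !taskBranches.has(pr.headBranch));
}

const prs = [
  { id: "pr-1", headBranch: "feat/login" },
  { id: "pr-2", headBranch: "fix/crash" },
];
const tasks = [{ branch: "feat/login" }];

console.log(openPrsWithoutTasks(prs, tasks).map((p) => p.id)); // ["pr-2"]
```

Building the branch set once keeps the filter linear in the number of PRs plus tasks.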
### Subscription Manager

The `organization` subscription topic already returns `OrganizationSummarySnapshot`. Adding `openPullRequests` to that type means the sidebar automatically gets PR data without a new topic.

`organizationUpdated` events should include a new variant for PR changes:

```typescript
{ type: "pullRequestUpdated", pullRequest: WorkbenchOpenPrSummary }
{ type: "pullRequestRemoved", prId: string }
```

@ -117,7 +117,7 @@ The `workspace` interest topic already returns `WorkspaceSummarySnapshot`. Addin

### Sidebar Changes

The left sidebar currently renders `repositories: RepositorySection[]` where each repository has `tasks: Task[]`. Extend this to include open PRs as lightweight entries within each repository section:

- Open PRs appear in the same list as tasks, sorted by `updatedAtMs`
- PRs should be visually distinct: show PR icon instead of task indicator, display `#number` and author

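A sketch of how the unified per-repository list could be assembled, with entry shapes assumed for illustration:

```typescript
type SidebarEntry =
  | { kind: "task"; id: string; updatedAtMs: number }
  | { kind: "pr"; id: string; number: number; updatedAtMs: number };

// Tasks and open PRs share one list per repository section, newest activity first.
function sidebarEntries(
  tasks: { id: string; updatedAtMs: number }[],
  prs: { id: string; number: number; updatedAtMs: number }[],
): SidebarEntry[] {
  const merged: SidebarEntry[] = [
    ...tasks.map((t) => ({ kind: "task" as const, ...t })),
    ...prs.map((p) => ({ kind: "pr" as const, ...p })),
  ];
  return merged.sort((a, b) => b.updatedAtMs - a.updatedAtMs);
}

const entries = sidebarEntries(
  [{ id: "task-1", updatedAtMs: 300 }],
  [{ id: "pr-9", number: 9, updatedAtMs: 500 }],
);
console.log(entries.map((e) => e.id)); // ["pr-9", "task-1"]
```

The `kind` discriminant is what lets the renderer choose a PR icon versus a task indicator.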
@ -134,7 +134,7 @@ Add a "three dots" menu button in the top-right of the sidebar header. Dropdown

- **Reload all PRs** — calls `githubState.fullSync({ force: true })` (convenience shortcut)

For per-repo and per-PR reload, add context menu options:

- Right-click a repository header → "Reload repository"
- Right-click a PR entry → "Reload pull request"

These call the corresponding `reloadRepository`/`reloadPullRequest` actions on the `github-state` actor.

@ -143,27 +143,27 @@ These call the corresponding `reloadRepository`/`reloadPullRequest` actions on t

Files/code to remove:

1. `foundry/packages/backend/src/actors/repository-pr-sync/` — entire directory
2. `foundry/packages/backend/src/actors/repository/db/schema.ts` — `prCache` table
3. `foundry/packages/backend/src/actors/repository/actions.ts` — `applyPrSyncResultMutation`, `getPullRequestForBranch` (moves to github-state), `prSyncStatus`/`prSyncAt` from `getRepoOverview`
4. `foundry/packages/backend/src/actors/handles.ts` — `getOrCreateProjectPrSync`, `selfProjectPrSync`
5. `foundry/packages/backend/src/actors/keys.ts` — any PR sync key helper
6. `foundry/packages/backend/src/actors/index.ts` — `repositoryPrSync` import and registration
7. All call sites in `RepositoryActor` that spawn or call the PR sync actor (`initProject`, `refreshProject`)

## Migration Path

The `prCache` table in `RepositoryActor`'s DB can simply be dropped — no data migration needed since the `github-state` actor will re-fetch everything on its first `fullSync`. Existing task `pullRequest` fields are populated from the github-state actor going forward.

## Implementation Order

1. Create `github-state` actor (adapt from checkpoint `0aca2c7`)
2. Wire up actor in registry, handles, keys
3. Implement webhook dispatch in app-shell (replace TODO)
4. Delete `ProjectPrSyncActor` and `prCache` from repository actor
5. Add manual reload actions to github-state
6. Extend `OrganizationSummarySnapshot` with `openPullRequests`
7. Wire through subscription manager + organization events
8. Update sidebar to render open PRs
9. Add three-dots menu with reload options
10. Update task creation flow for lazy PR→task conversion

@ -6,19 +6,19 @@ Date: 2026-02-08

## Locked Decisions

1. Entire rewrite is TypeScript. All Rust code will be deleted at cutover.
2. Repo stays a single monorepo, managed with `pnpm` workspaces + Turborepo.
3. `core` package is renamed to `shared`.
4. `integrations` and `providers` live inside the backend package (not top-level packages).
5. Rivet-backed state uses SQLite + Drizzle only.
6. RivetKit dependencies come from local `../rivet` builds only; no published npm packages.
7. Everything is organization-scoped. Organization is configurable from CLI.
8. `ControlPlaneActor` is renamed to `OrganizationActor` (organization coordinator).
9. Every actor key is prefixed by organization.
10. `--organization` is optional; commands resolve organization via flag -> config default -> `default`.
11. RivetKit local dependency wiring is `link:`-based.
12. Keep the existing config file path (`~/.config/foundry/config.toml`) and evolve keys in place.
13. `.agents` and skill files are in scope for migration updates.
14. Parent orchestration actors (`organization`, `repository`, `task`) use command-only loops with no timeout.
15. Periodic syncing/polling runs in dedicated child actors, each with a single timeout cadence.
16. For each actor, define the main loop and exactly what data it mutates; keep single-writer ownership strict.

@ -38,10 +38,10 @@ The core architecture changes from "worktree-per-task" to "provider-selected san

1. Rust binaries/backend removed.
2. Existing IPC replaced by new TypeScript transport.
3. Configuration schema changes for organization selection and sandbox provider defaults.
4. Runtime model changes from global control plane to organization coordinator actor.
5. Database schema migrates to organization + provider + sandbox identity model.
6. Command options evolve to include organization and provider selection.

## Monorepo and Build Tooling

@ -49,7 +49,7 @@ Root tooling is standardized:

- `pnpm-workspace.yaml`
- `turbo.json`
- workspace scripts through `pnpm` + `turbo run ...`

Target package layout:

@ -59,13 +59,13 @@ packages/

    backend/
      src/
        actors/
          organization.ts
          repository.ts
          task.ts
          sandbox-instance.ts
          history.ts
          repository-pr-sync.ts
          repository-branch-sync.ts
          task-status-sync.ts
          keys.ts
          events.ts

@ -88,13 +88,13 @@ packages/

        server.ts
        types.ts
        config/
          organization.ts
          backend.ts
    cli/ # hf command surface
      src/
        commands/
        client/ # backend transport client
        organization/ # organization selection resolver
    tui/ # OpenTUI app
      src/
        app/

@ -111,13 +111,13 @@ CLI and TUI are separate packages in the same monorepo, not separate repositorie

Backend actor files and responsibilities:

1. `packages/backend/src/actors/organization.ts`
   - `OrganizationActor` implementation.
   - Provider profile resolution and organization-level coordination.
   - Spawns/routes to `RepositoryActor` handles.

2. `packages/backend/src/actors/repository.ts`
   - `RepositoryActor` implementation.
   - Branch snapshot refresh, PR cache orchestration, stream publication.
   - Routes task actions to `TaskActor`.

@ -134,7 +134,7 @@ Backend actor files and responsibilities:

   - Writes workflow events to SQLite via Drizzle.

6. `packages/backend/src/actors/keys.ts`
   - Organization-prefixed actor key builders/parsers.

7. `packages/backend/src/actors/events.ts`
   - Internal actor event envelopes and stream payload types.

@ -145,13 +145,13 @@ Backend actor files and responsibilities:

9. `packages/backend/src/actors/index.ts`
   - Actor exports and composition wiring.

10. `packages/backend/src/actors/repository-pr-sync.ts`
    - Read-only PR polling loop (single timeout cadence).
    - Sends sync results back to `RepositoryActor`.

11. `packages/backend/src/actors/repository-branch-sync.ts`
    - Read-only branch snapshot polling loop (single timeout cadence).
    - Sends sync results back to `RepositoryActor`.

12. `packages/backend/src/actors/task-status-sync.ts`
    - Read-only session/sandbox status polling loop (single timeout cadence).

@ -169,17 +169,17 @@ pnpm build -F rivetkit

2. Consume via local `link:` dependencies to built artifacts.
3. Keep dependency wiring deterministic and documented in repo scripts.

## Organization Model

Every command executes against a resolved organization context.

Organization selection:

1. CLI flag: `--organization <name>`
2. Config default organization
3. Fallback to `default`

Organization controls:

1. provider profile defaults
2. sandbox policy

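The selection order above is small enough to sketch directly (a hypothetical resolver, not the real CLI code):

```typescript
// Resolution order: CLI flag, then config default, then the literal "default".
function resolveOrganization(opts: { flag?: string; configDefault?: string }): string {
  return opts.flag ?? opts.configDefault ?? "default";
}

console.log(resolveOrganization({ flag: "acme", configDefault: "home" })); // "acme"
console.log(resolveOrganization({ configDefault: "home" }));               // "home"
console.log(resolveOrganization({}));                                      // "default"
```

Using `??` rather than `||` matters only if an empty-string flag should count as "provided"; the sketch treats it as provided.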
@ -188,45 +188,45 @@ Workspace controls:

## New Actor Implementation Overview

RivetKit registry actor keys are organization-prefixed:

1. `OrganizationActor` (organization coordinator)
   - Key: `["ws", organizationId]`
   - Owns organization config/runtime coordination, provider registry, organization health.
   - Resolves provider defaults and organization-level policies.

2. `RepositoryActor`
   - Key: `["ws", organizationId, "repository", repoId]`
   - Owns repo snapshot cache and PR cache refresh orchestration.
   - Routes branch/task commands to task actors.
   - Streams repository updates to CLI/TUI subscribers.

3. `TaskActor`
   - Key: `["ws", organizationId, "repository", repoId, "task", taskId]`
   - Owns task metadata/runtime state.
   - Creates/resumes sandbox + session through provider adapter.
   - Handles attach/push/sync/merge/archive/kill and post-idle automation.

4. `SandboxInstanceActor` (optional but recommended)
   - Key: `["ws", organizationId, "provider", providerId, "sandbox", sandboxId]`
   - Owns sandbox lifecycle, heartbeat, endpoint readiness, recovery.

5. `HistoryActor`
   - Key: `["ws", organizationId, "repository", repoId, "history"]`
   - Owns `events` writes and workflow timeline completeness.

6. `ProjectPrSyncActor` (child poller)
   - Key: `["ws", organizationId, "repository", repoId, "pr-sync"]`
   - Polls PR state on interval and emits results to `RepositoryActor`.
   - Does not write DB directly.

7. `ProjectBranchSyncActor` (child poller)
   - Key: `["ws", organizationId, "repository", repoId, "branch-sync"]`
   - Polls branch/worktree state on interval and emits results to `RepositoryActor`.
   - Does not write DB directly.

8. `TaskStatusSyncActor` (child poller)
   - Key: `["ws", organizationId, "repository", repoId, "task", taskId, "status-sync"]`
   - Polls agent/session/sandbox health on interval and emits results to `TaskActor`.
   - Does not write DB directly.

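Given the key shapes above, `keys.ts` might expose builders and parsers along these lines (a sketch; the real module's names may differ):

```typescript
// Builders mirror the organization-prefixed key shapes listed above.
const organizationKey = (orgId: string) => ["ws", orgId] as const;
const repositoryKey = (orgId: string, repoId: string) =>
  ["ws", orgId, "repository", repoId] as const;
const taskKey = (orgId: string, repoId: string, taskId: string) =>
  ["ws", orgId, "repository", repoId, "task", taskId] as const;

// Parser recovers the organization segment from any organization-prefixed key.
function organizationFromKey(key: readonly string[]): string {
  if (key[0] !== "ws" || key.length < 2) {
    throw new Error(`not an organization-prefixed key: ${key.join("/")}`);
  }
  return key[1];
}

console.log(taskKey("org-1", "repo-a", "t-42").join("/"));
// "ws/org-1/repository/repo-a/task/t-42"
console.log(organizationFromKey(repositoryKey("org-1", "repo-a"))); // "org-1"
```

Centralizing builders and parsers in one module keeps the organization-prefix invariant enforced in a single place.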
@ -236,10 +236,10 @@ Ownership rule: each table/row has one actor writer.

Always define actor run-loop + mutated state together:

1. `OrganizationActor`
   - Mutates: `organizations`, `workspace_provider_profiles`.

2. `RepositoryActor`
   - Mutates: `repos`, `branches`, `pr_cache` (applies child poller results).

3. `TaskActor`

@ -251,30 +251,30 @@ Always define actor run-loop + mutated state together:

5. `HistoryActor`
   - Mutates: `events`.

6. Child sync actors (`repository-pr-sync`, `repository-branch-sync`, `task-status-sync`)
   - Mutates: none (read-only pollers; publish result messages only).

## Run Loop Patterns (Required)

Parent orchestration actors: no timeout, command-only queue loops.

### `OrganizationActor` (no timeout)

```ts
run: async (c) => {
  while (true) {
    const msg = await c.queue.next("organization.command");
    await handleOrganizationCommand(c, msg); // writes organization-owned tables only
  }
};
```

### `RepositoryActor` (no timeout)

```ts
run: async (c) => {
  while (true) {
    const msg = await c.queue.next("repository.command");
    await handleProjectCommand(c, msg); // includes applying sync results to branches/pr_cache
  }
};
```

@ -321,10 +321,10 @@ Child sync actors: one timeout each, one cadence each.

```ts
run: async (c) => {
  const intervalMs = 30_000;
  while (true) {
    const msg = await c.queue.next("repository.pr_sync.command", { timeout: intervalMs });
    if (!msg) {
      const result = await pollPrState();
      await sendToProject({ name: "repository.pr_sync.result", result });
      continue;
    }
    await handlePrSyncControl(c, msg); // force/stop/update-interval
  }
};
```

@ -338,10 +338,10 @@ run: async (c) => {

```ts
run: async (c) => {
  const intervalMs = 5_000;
  while (true) {
    const msg = await c.queue.next("repository.branch_sync.command", { timeout: intervalMs });
    if (!msg) {
      const result = await pollBranchState();
      await sendToProject({ name: "repository.branch_sync.result", result });
      continue;
    }
    await handleBranchSyncControl(c, msg);
  }
};
```

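The single-timeout poller pattern can be exercised standalone by modeling each queue wait as either a control message or `null` (timeout elapsed, time to poll). This is a synchronous sketch of the control flow only — queue, poller, and parent sends are all faked:

```typescript
type Control = { name: string };

// Each tick is what queue.next would yield: a control message, or null on timeout.
// On timeout the poller polls and publishes a result to its parent actor;
// on a message it handles the control command (force/stop/update-interval).
function runPollerTicks(ticks: (Control | null)[]): string[] {
  const actions: string[] = [];
  for (const msg of ticks) {
    if (!msg) {
      actions.push("poll+publish"); // stands in for pollPrState() + sendToProject(...)
      continue;
    }
    actions.push(`control:${msg.name}`);
  }
  return actions;
}

console.log(runPollerTicks([null, { name: "force" }, null]));
// ["poll+publish", "control:force", "poll+publish"]
```

The point of the pattern is that one `queue.next` timeout doubles as the poll cadence, so the actor needs no separate timer.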
@ -368,7 +368,7 @@ run: async (c) => {

## Sandbox Provider Interface

Provider contract lives under `packages/backend/src/providers/provider-api` and is consumed by organization/repository/task actors.

```ts
interface SandboxProvider {
```

@ -398,26 +398,26 @@ Initial providers:

- Boots/ensures Sandbox Agent inside sandbox.
- Returns endpoint/token for session operations.

## Command Surface (Organization + Provider Aware)

1. `hf create ... --organization <ws> --provider <worktree|daytona>`
2. `hf switch --organization <ws> [target]`
3. `hf attach --organization <ws> [task]`
4. `hf list --organization <ws>`
5. `hf kill|archive|merge|push|sync --organization <ws> ...`
6. `hf organization use <ws>` to set default organization

List/TUI include provider and sandbox health metadata.

`--organization` remains optional; omitted values use the standard resolution order.

## Data Model v2 (SQLite + Drizzle)

All persistent state is SQLite via Drizzle schema + migrations.

Tables (organization-scoped):

1. `organizations`
2. `workspace_provider_profiles`
3. `repos` (`workspace_id`, `repo_id`, ...)
4. `branches` (`workspace_id`, `repo_id`, ...)

@ -433,10 +433,10 @@ Migration approach: one-way migration from existing schema during TS backend boo

1. TypeScript backend exposes local control API (socket or localhost HTTP).
2. CLI/TUI are thin clients; all mutations go through backend actors.
3. OpenTUI subscribes to repository streams from organization-scoped repository actors.
4. Organization is required context on all backend mutation requests.

CLI/TUI are responsible for resolving organization context before calling backend mutations.

## CLI + TUI Packaging

@ -451,10 +451,10 @@ The package still calls the same backend API and shares contracts from `packages

## Implementation Phases

## Phase 0: Contracts and Organization Spec

1. Freeze organization model, provider contract, and actor ownership map.
2. Freeze command flags for organization + provider selection.
3. Define Drizzle schema draft and migration plan.

Exit criteria:

```diff
@@ -462,7 +462,7 @@ Exit criteria:
 ## Phase 1: TypeScript Monorepo Bootstrap

-1. Add `pnpm` workspace + Turborepo pipeline.
+1. Add `pnpm` organization + Turborepo pipeline.
 2. Create `shared`, `backend`, and `cli` packages (with TUI integrated into CLI).
 3. Add strict TypeScript config and CI checks.
```
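For reference, the `pnpm` wiring in step 1 is a `pnpm-workspace.yaml` at the repo root (pnpm's own feature is named "workspace" independently of any app-level rename). A minimal sketch — the package names come from step 2, but the `packages/` directory layout is an assumption:

```yaml
# pnpm-workspace.yaml (sketch; directory layout is assumed)
packages:
  - "packages/shared"
  - "packages/backend"
  - "packages/cli"
```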
```diff
@@ -473,10 +473,10 @@ Exit criteria:
 1. Wire local RivetKit dependency from `../rivet`.
 2. Add SQLite + Drizzle migrations and query layer.
-3. Implement actor registry with workspace-prefixed keys.
+3. Implement actor registry with organization-prefixed keys.

 Exit criteria:

-- Backend boot + workspace actor health checks pass.
+- Backend boot + organization actor health checks pass.

 ## Phase 3: Provider Layer in Backend
```
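The hunk above only names the requirement of "organization-prefixed keys" for the actor registry; a minimal sketch of what such keys might look like (the `org/kind/id` format and separator are assumptions, not taken from the diff):

```typescript
// Actor keys are namespaced by organization so two organizations can each
// own an actor with the same local name without colliding in the registry.
const SEP = "/";

function actorKey(organizationId: string, actorKind: string, actorId: string): string {
  for (const part of [organizationId, actorKind, actorId]) {
    if (part.includes(SEP)) {
      throw new Error(`actor key part may not contain "${SEP}": ${part}`);
    }
  }
  return [organizationId, actorKind, actorId].join(SEP);
}

function parseActorKey(key: string): {
  organizationId: string;
  actorKind: string;
  actorId: string;
} {
  const [organizationId, actorKind, actorId] = key.split(SEP);
  return { organizationId, actorKind, actorId };
}
```

Rejecting the separator inside key parts keeps `parseActorKey` an exact inverse of `actorKey`, which is what makes prefix-scanning a registry by `organizationId + SEP` a safe way to enumerate one organization's actors.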
```diff
@@ -487,9 +487,9 @@ Exit criteria:
 Exit criteria:

 - `create/list/switch/attach/push/sync/kill` pass on worktree provider.

-## Phase 4: Workspace/Task Lifecycle
+## Phase 4: Organization/Task Lifecycle

-1. Implement workspace coordinator flows.
+1. Implement organization coordinator flows.
 2. Implement TaskActor full lifecycle + post-idle automation.
 3. Implement history events and PR/CI/review change tracking.
```
```diff
@@ -509,7 +509,7 @@ Exit criteria:
 1. Build interactive list/switch UI in OpenTUI.
 2. Implement key actions (attach/open PR/archive/merge/sync).
-3. Add workspace switcher UX and provider/sandbox indicators.
+3. Add organization switcher UX and provider/sandbox indicators.

 Exit criteria:

 - TUI parity and responsive streaming updates.
```
```diff
@@ -534,7 +534,7 @@ Exit criteria:
 2. Integration tests

 - backend + sqlite + provider fakes
-- workspace isolation boundaries
+- organization isolation boundaries
 - session recovery and restart handling

 3. E2E tests
```