Rename Foundry handoffs to tasks (#239)

* Restore foundry onboarding stack

* Consolidate foundry rename

* Create foundry tasks without prompts

* Rename Foundry handoffs to tasks
Nathan Flurry 2026-03-11 13:23:54 -07:00 committed by GitHub
parent d30cc0bcc8
commit d75e8c31d1
GPG key ID: B5690EEEBB952194
281 changed files with 9242 additions and 4356 deletions


@@ -17,7 +17,7 @@ coverage/
# Environment
.env
.env.*
.openhandoff/
.foundry/
# IDE
.idea/

.env.development.example (new file, 27 lines)

@@ -0,0 +1,27 @@
# Load this file only when NODE_ENV=development.
# The backend does not load dotenv files in production.
APP_URL=http://localhost:4173
BETTER_AUTH_URL=http://localhost:4173
BETTER_AUTH_SECRET=sandbox-agent-foundry-development-only-change-me
GITHUB_REDIRECT_URI=http://localhost:4173/api/rivet/app/auth/github/callback
# Fill these in when enabling live GitHub OAuth.
GITHUB_CLIENT_ID=
GITHUB_CLIENT_SECRET=
# Fill these in when enabling GitHub App-backed org installation and repo import.
GITHUB_APP_ID=
GITHUB_APP_CLIENT_ID=
GITHUB_APP_CLIENT_SECRET=
# Store PEM material as a quoted single-line value with \n escapes.
GITHUB_APP_PRIVATE_KEY=
# Webhook secret for verifying GitHub webhook payloads.
# Use smee.io for local development: https://smee.io/new
GITHUB_WEBHOOK_SECRET=
# Fill these in when enabling live Stripe billing.
STRIPE_SECRET_KEY=
STRIPE_PUBLISHABLE_KEY=
STRIPE_WEBHOOK_SECRET=
STRIPE_PRICE_TEAM=

.gitignore (vendored, 2 lines changed)

@@ -51,7 +51,7 @@ Cargo.lock
# Example temp files
.tmp-upload/
*.db
.openhandoff/
.foundry/
# CLI binaries (downloaded during npm publish)
sdks/cli/platforms/*/bin/


@@ -0,0 +1,153 @@
---
title: "Foundry Self-Hosting"
description: "Environment, credentials, and deployment setup for Sandbox Agent Foundry auth, GitHub, and billing."
---
This guide documents the deployment contract for the Foundry product surface: app auth, GitHub onboarding, repository import, and billing.
It also covers the local-development bootstrap that uses `.env.development` only when `NODE_ENV=development`.
## Local Development
For backend local development, the Foundry backend now supports a development-only dotenv bootstrap:
- It loads `.env.development.local` and `.env.development`
- It does this **only** when `NODE_ENV=development`
- It does **not** load dotenv files in production
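The loader itself isn't shown in this diff; as a rough sketch (the helper name and exact precedence rules are assumptions, not the actual implementation), a guarded development-only bootstrap could look like:

```typescript
import { existsSync, readFileSync } from "node:fs";

// Minimal development-only dotenv bootstrap (sketch). Earlier files win:
// `.env.development.local` is read first and later files never override
// keys that are already set, so the real environment always takes priority.
function loadDevEnv(
  env: Record<string, string | undefined>,
  files = [".env.development.local", ".env.development"],
): void {
  // Hard gate: never load dotenv files outside development.
  if (env.NODE_ENV !== "development") return;
  for (const file of files) {
    if (!existsSync(file)) continue;
    for (const line of readFileSync(file, "utf8").split("\n")) {
      const trimmed = line.trim();
      if (!trimmed || trimmed.startsWith("#")) continue; // skip blanks/comments
      const eq = trimmed.indexOf("=");
      if (eq <= 0) continue;
      const key = trimmed.slice(0, eq).trim();
      const value = trimmed.slice(eq + 1).trim();
      if (env[key] === undefined) env[key] = value; // existing env wins
    }
  }
}
```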
The example file lives at [`/.env.development.example`](https://github.com/rivet-dev/sandbox-agent/blob/main/.env.development.example).
To use it locally:
```bash
cp .env.development.example .env.development
```
Run the backend with:
```bash
just foundry-backend-start
```
That recipe sets `NODE_ENV=development`, which enables the dotenv loader.
### Local Defaults
These values can be safely defaulted for local development:
- `APP_URL=http://localhost:4173`
- `BETTER_AUTH_URL=http://localhost:4173`
- `BETTER_AUTH_SECRET=sandbox-agent-foundry-development-only-change-me`
- `GITHUB_REDIRECT_URI=http://localhost:4173/api/rivet/app/auth/github/callback`
These should be treated as development-only values.
## Production Environment
For production or self-hosting, set these as real environment variables in your deployment platform. Do not rely on dotenv file loading.
### App/Auth
| Variable | Required | Notes |
|---|---:|---|
| `APP_URL` | Yes | Public frontend origin |
| `BETTER_AUTH_URL` | Yes | Public auth base URL |
| `BETTER_AUTH_SECRET` | Yes | Strong random secret for auth/session signing |
### GitHub OAuth
| Variable | Required | Notes |
|---|---:|---|
| `GITHUB_CLIENT_ID` | Yes | GitHub OAuth app client ID |
| `GITHUB_CLIENT_SECRET` | Yes | GitHub OAuth app client secret |
| `GITHUB_REDIRECT_URI` | Yes | GitHub OAuth callback URL |
Use GitHub OAuth for:
- user sign-in
- user identity
- org selection
- access to the signed-in user's GitHub context
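For illustration, the three OAuth variables typically combine into GitHub's standard web-flow authorize URL. This is a hypothetical sketch, not Foundry's actual sign-in code; in particular the scopes shown are an assumption:

```typescript
// Build GitHub's OAuth authorize URL from the configured env values.
// Endpoint and parameter names follow GitHub's OAuth web application flow.
function githubAuthorizeUrl(opts: {
  clientId: string; // GITHUB_CLIENT_ID
  redirectUri: string; // GITHUB_REDIRECT_URI
  state: string; // opaque CSRF token, verified on callback
}): string {
  const url = new URL("https://github.com/login/oauth/authorize");
  url.searchParams.set("client_id", opts.clientId);
  url.searchParams.set("redirect_uri", opts.redirectUri);
  url.searchParams.set("state", opts.state);
  // Assumed scopes for identity + org selection; adjust to your deployment.
  url.searchParams.set("scope", "read:user read:org");
  return url.toString();
}
```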
## GitHub App
If your Foundry deployment uses GitHub App-backed organization install and repo import, also configure:
| Variable | Required | Notes |
|---|---:|---|
| `GITHUB_APP_ID` | Yes | GitHub App ID |
| `GITHUB_APP_CLIENT_ID` | Yes | GitHub App client ID |
| `GITHUB_APP_CLIENT_SECRET` | Yes | GitHub App client secret |
| `GITHUB_APP_PRIVATE_KEY` | Yes | PEM private key for installation auth |
For `.env.development` and `.env.development.local`, store `GITHUB_APP_PRIVATE_KEY` as a quoted single-line value with `\n` escapes instead of raw multi-line PEM text.
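For example, a loader can reverse that escaping before handing the key to a GitHub client (the helper name here is hypothetical):

```typescript
// Convert the escaped single-line env value back into real multi-line PEM
// text by replacing literal backslash-n sequences with newlines.
function githubAppPrivateKey(raw: string): string {
  return raw.replace(/\\n/g, "\n");
}
```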
Recommended GitHub App permissions:
- Repository `Metadata: Read`
- Repository `Contents: Read & Write`
- Repository `Pull requests: Read & Write`
- Repository `Checks: Read`
- Repository `Commit statuses: Read`
Set the webhook URL to `https://<your-backend-host>/api/rivet/app/webhooks/github` and generate a webhook secret. Store the secret as `GITHUB_WEBHOOK_SECRET`.
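GitHub signs each delivery with an HMAC of the raw request body, sent in the `X-Hub-Signature-256` header as `sha256=<hex digest>`. A minimal verification sketch using Node's crypto (not Foundry's actual handler):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify a GitHub webhook delivery against GITHUB_WEBHOOK_SECRET.
// `rawBody` must be the unparsed request body bytes/string.
function verifyGithubSignature(
  secret: string,
  rawBody: string,
  signatureHeader: string | undefined,
): boolean {
  if (!signatureHeader?.startsWith("sha256=")) return false;
  const expected = "sha256=" + createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader);
  // timingSafeEqual throws on length mismatch, so check lengths first.
  return a.length === b.length && timingSafeEqual(a, b);
}
```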
Recommended webhook subscriptions:
- `installation`
- `installation_repositories`
- `pull_request`
- `pull_request_review`
- `pull_request_review_comment`
- `push`
- `create`
- `delete`
- `check_suite`
- `check_run`
- `status`
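A receiving endpoint typically branches on the event name GitHub sends in the `X-GitHub-Event` header. As a hypothetical sketch (the handler wiring is an assumption, not Foundry's implementation):

```typescript
type WebhookHandler = (payload: unknown) => void;

// Dispatch a delivery to the handler registered for its X-GitHub-Event name.
// Returns false for unsubscribed/unknown events so callers can ack and ignore.
function dispatchWebhook(
  handlers: Partial<Record<string, WebhookHandler>>,
  eventName: string,
  payload: unknown,
): boolean {
  const handler = handlers[eventName];
  if (!handler) return false;
  handler(payload);
  return true;
}
```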
Use the GitHub App for:
- installation/reconnect state
- org repo import
- repository sync
- PR creation and updates
Use GitHub OAuth for:
- who the user is
- which orgs they can choose
## Stripe
For live billing, configure:
| Variable | Required | Notes |
|---|---:|---|
| `STRIPE_SECRET_KEY` | Yes | Server-side Stripe secret key |
| `STRIPE_PUBLISHABLE_KEY` | Yes | Client-side Stripe publishable key |
| `STRIPE_WEBHOOK_SECRET` | Yes | Signing secret for billing webhooks |
| `STRIPE_PRICE_TEAM` | Yes | Stripe price ID for the Team plan checkout session |
Stripe should own:
- hosted checkout
- billing portal
- subscription status
- invoice history
- webhook-driven state sync
## Mock Invariant
Foundry's mock client path should continue to work end to end even when the real auth/GitHub/Stripe path exists.
That includes:
- sign-in
- org selection/import
- settings
- billing UI
- workspace/task/session flow
- seat accrual
Use mock mode for deterministic UI review and local product development. Use the real env-backed path for integration and self-hosting.


@@ -1,90 +0,0 @@
name: openhandoff
services:
  backend:
    build:
      context: ..
      dockerfile: factory/docker/backend.dev.Dockerfile
    image: openhandoff-backend-dev
    working_dir: /app
    environment:
      HF_BACKEND_HOST: "0.0.0.0"
      HF_BACKEND_PORT: "7741"
      HF_RIVET_MANAGER_PORT: "8750"
      RIVETKIT_STORAGE_PATH: "/root/.local/share/openhandoff/rivetkit"
      # Pass through credentials needed for agent execution + PR creation in dev/e2e.
      # Do not hardcode secrets; set these in your environment when starting compose.
      ANTHROPIC_API_KEY: "${ANTHROPIC_API_KEY:-}"
      CLAUDE_API_KEY: "${CLAUDE_API_KEY:-${ANTHROPIC_API_KEY:-}}"
      OPENAI_API_KEY: "${OPENAI_API_KEY:-}"
      # sandbox-agent codex plugin currently expects CODEX_API_KEY. Map from OPENAI_API_KEY for convenience.
      CODEX_API_KEY: "${CODEX_API_KEY:-${OPENAI_API_KEY:-}}"
      # Support either GITHUB_TOKEN or GITHUB_PAT in local env files.
      GITHUB_TOKEN: "${GITHUB_TOKEN:-${GITHUB_PAT:-}}"
      GH_TOKEN: "${GH_TOKEN:-${GITHUB_TOKEN:-${GITHUB_PAT:-}}}"
      DAYTONA_ENDPOINT: "${DAYTONA_ENDPOINT:-}"
      DAYTONA_API_KEY: "${DAYTONA_API_KEY:-}"
      HF_DAYTONA_ENDPOINT: "${HF_DAYTONA_ENDPOINT:-}"
      HF_DAYTONA_API_KEY: "${HF_DAYTONA_API_KEY:-}"
    ports:
      - "7741:7741"
      # RivetKit manager (used by browser clients after /api/rivet metadata redirect in dev)
      - "8750:8750"
    volumes:
      - "..:/app"
      # The linked RivetKit checkout resolves from factory packages to /handoff/rivet-checkout in-container.
      - "../../../handoff/rivet-checkout:/handoff/rivet-checkout:ro"
      # Reuse the host Codex auth profile for local sandbox-agent Codex sessions in dev.
      - "${HOME}/.codex:/root/.codex"
      # Keep backend dependency installs Linux-native instead of using host node_modules.
      - "openhandoff_backend_root_node_modules:/app/node_modules"
      - "openhandoff_backend_backend_node_modules:/app/factory/packages/backend/node_modules"
      - "openhandoff_backend_shared_node_modules:/app/factory/packages/shared/node_modules"
      - "openhandoff_backend_persist_rivet_node_modules:/app/sdks/persist-rivet/node_modules"
      - "openhandoff_backend_typescript_node_modules:/app/sdks/typescript/node_modules"
      - "openhandoff_backend_pnpm_store:/root/.local/share/pnpm/store"
      # Persist backend-managed local git clones across container restarts.
      - "openhandoff_git_repos:/root/.local/share/openhandoff/repos"
      # Persist RivetKit local storage across container restarts.
      - "openhandoff_rivetkit_storage:/root/.local/share/openhandoff/rivetkit"
  frontend:
    build:
      context: ..
      dockerfile: factory/docker/frontend.dev.Dockerfile
    working_dir: /app
    depends_on:
      - backend
    environment:
      HOME: "/tmp"
      HF_BACKEND_HTTP: "http://backend:7741"
    ports:
      - "4173:4173"
    volumes:
      - "..:/app"
      # Ensure logs in .openhandoff/ persist on the host even if we change source mounts later.
      - "./.openhandoff:/app/factory/.openhandoff"
      - "../../../handoff/rivet-checkout:/handoff/rivet-checkout:ro"
      # Use Linux-native workspace dependencies inside the container instead of host node_modules.
      - "openhandoff_node_modules:/app/node_modules"
      - "openhandoff_client_node_modules:/app/factory/packages/client/node_modules"
      - "openhandoff_frontend_errors_node_modules:/app/factory/packages/frontend-errors/node_modules"
      - "openhandoff_frontend_node_modules:/app/factory/packages/frontend/node_modules"
      - "openhandoff_shared_node_modules:/app/factory/packages/shared/node_modules"
      - "openhandoff_pnpm_store:/tmp/.local/share/pnpm/store"
volumes:
  openhandoff_backend_root_node_modules: {}
  openhandoff_backend_backend_node_modules: {}
  openhandoff_backend_shared_node_modules: {}
  openhandoff_backend_persist_rivet_node_modules: {}
  openhandoff_backend_typescript_node_modules: {}
  openhandoff_backend_pnpm_store: {}
  openhandoff_git_repos: {}
  openhandoff_rivetkit_storage: {}
  openhandoff_node_modules: {}
  openhandoff_client_node_modules: {}
  openhandoff_frontend_errors_node_modules: {}
  openhandoff_frontend_node_modules: {}
  openhandoff_shared_node_modules: {}
  openhandoff_pnpm_store: {}


@@ -1,10 +0,0 @@
import { actorSqliteDb } from "../../../db/actor-sqlite.js";
import * as schema from "./schema.js";
import migrations from "./migrations.js";

export const handoffDb = actorSqliteDb({
  actorName: "handoff",
  schema,
  migrations,
  migrationsFolderUrl: new URL("./drizzle/", import.meta.url),
});


@@ -1,6 +0,0 @@
import { defineConfig } from "rivetkit/db/drizzle";

export default defineConfig({
  out: "./src/actors/handoff/db/drizzle",
  schema: "./src/actors/handoff/db/schema.ts",
});


@@ -1,3 +0,0 @@
ALTER TABLE `handoff` DROP COLUMN `auto_committed`;--> statement-breakpoint
ALTER TABLE `handoff` DROP COLUMN `pushed`;--> statement-breakpoint
ALTER TABLE `handoff` DROP COLUMN `needs_push`;


@@ -1 +0,0 @@
ALTER TABLE `handoff_sandboxes` ADD `sandbox_actor_id` text;


@@ -1,389 +0,0 @@
import { actor, queue } from "rivetkit";
import { workflow } from "rivetkit/workflow";
import type {
  AgentType,
  HandoffRecord,
  HandoffWorkbenchChangeModelInput,
  HandoffWorkbenchRenameInput,
  HandoffWorkbenchRenameSessionInput,
  HandoffWorkbenchSetSessionUnreadInput,
  HandoffWorkbenchSendMessageInput,
  HandoffWorkbenchUpdateDraftInput,
  ProviderId,
} from "@openhandoff/shared";
import { expectQueueResponse } from "../../services/queue.js";
import { selfHandoff } from "../handles.js";
import { handoffDb } from "./db/db.js";
import { getCurrentRecord } from "./workflow/common.js";
import {
  changeWorkbenchModel,
  closeWorkbenchSession,
  createWorkbenchSession,
  getWorkbenchHandoff,
  markWorkbenchUnread,
  publishWorkbenchPr,
  renameWorkbenchBranch,
  renameWorkbenchHandoff,
  renameWorkbenchSession,
  revertWorkbenchFile,
  sendWorkbenchMessage,
  syncWorkbenchSessionStatus,
  setWorkbenchSessionUnread,
  stopWorkbenchSession,
  updateWorkbenchDraft,
} from "./workbench.js";
import { HANDOFF_QUEUE_NAMES, handoffWorkflowQueueName, runHandoffWorkflow } from "./workflow/index.js";

export interface HandoffInput {
  workspaceId: string;
  repoId: string;
  handoffId: string;
  repoRemote: string;
  repoLocalPath: string;
  branchName: string | null;
  title: string | null;
  task: string;
  providerId: ProviderId;
  agentType: AgentType | null;
  explicitTitle: string | null;
  explicitBranchName: string | null;
  initialPrompt: string | null;
}

interface InitializeCommand {
  providerId?: ProviderId;
}

interface HandoffActionCommand {
  reason?: string;
}

interface HandoffTabCommand {
  tabId: string;
}

interface HandoffStatusSyncCommand {
  sessionId: string;
  status: "running" | "idle" | "error";
  at: number;
}

interface HandoffWorkbenchValueCommand {
  value: string;
}

interface HandoffWorkbenchSessionTitleCommand {
  sessionId: string;
  title: string;
}

interface HandoffWorkbenchSessionUnreadCommand {
  sessionId: string;
  unread: boolean;
}

interface HandoffWorkbenchUpdateDraftCommand {
  sessionId: string;
  text: string;
  attachments: Array<any>;
}

interface HandoffWorkbenchChangeModelCommand {
  sessionId: string;
  model: string;
}

interface HandoffWorkbenchSendMessageCommand {
  sessionId: string;
  text: string;
  attachments: Array<any>;
}

interface HandoffWorkbenchCreateSessionCommand {
  model?: string;
}

interface HandoffWorkbenchSessionCommand {
  sessionId: string;
}

export const handoff = actor({
  db: handoffDb,
  queues: Object.fromEntries(HANDOFF_QUEUE_NAMES.map((name) => [name, queue()])),
  options: {
    actionTimeout: 5 * 60_000,
  },
  createState: (_c, input: HandoffInput) => ({
    workspaceId: input.workspaceId,
    repoId: input.repoId,
    handoffId: input.handoffId,
    repoRemote: input.repoRemote,
    repoLocalPath: input.repoLocalPath,
    branchName: input.branchName,
    title: input.title,
    task: input.task,
    providerId: input.providerId,
    agentType: input.agentType,
    explicitTitle: input.explicitTitle,
    explicitBranchName: input.explicitBranchName,
    initialPrompt: input.initialPrompt,
    initialized: false,
    previousStatus: null as string | null,
  }),
  actions: {
    async initialize(c, cmd: InitializeCommand): Promise<HandoffRecord> {
      const self = selfHandoff(c);
      const result = await self.send(handoffWorkflowQueueName("handoff.command.initialize"), cmd ?? {}, {
        wait: true,
        timeout: 60_000,
      });
      return expectQueueResponse<HandoffRecord>(result);
    },
    async provision(c, cmd: InitializeCommand): Promise<{ ok: true }> {
      const self = selfHandoff(c);
      await self.send(handoffWorkflowQueueName("handoff.command.provision"), cmd ?? {}, {
        wait: true,
        timeout: 30 * 60_000,
      });
      return { ok: true };
    },
    async attach(c, cmd?: HandoffActionCommand): Promise<{ target: string; sessionId: string | null }> {
      const self = selfHandoff(c);
      const result = await self.send(handoffWorkflowQueueName("handoff.command.attach"), cmd ?? {}, {
        wait: true,
        timeout: 20_000,
      });
      return expectQueueResponse<{ target: string; sessionId: string | null }>(result);
    },
    async switch(c): Promise<{ switchTarget: string }> {
      const self = selfHandoff(c);
      const result = await self.send(
        handoffWorkflowQueueName("handoff.command.switch"),
        {},
        {
          wait: true,
          timeout: 20_000,
        },
      );
      return expectQueueResponse<{ switchTarget: string }>(result);
    },
    async push(c, cmd?: HandoffActionCommand): Promise<void> {
      const self = selfHandoff(c);
      await self.send(handoffWorkflowQueueName("handoff.command.push"), cmd ?? {}, {
        wait: true,
        timeout: 180_000,
      });
    },
    async sync(c, cmd?: HandoffActionCommand): Promise<void> {
      const self = selfHandoff(c);
      await self.send(handoffWorkflowQueueName("handoff.command.sync"), cmd ?? {}, {
        wait: true,
        timeout: 30_000,
      });
    },
    async merge(c, cmd?: HandoffActionCommand): Promise<void> {
      const self = selfHandoff(c);
      await self.send(handoffWorkflowQueueName("handoff.command.merge"), cmd ?? {}, {
        wait: true,
        timeout: 30_000,
      });
    },
    async archive(c, cmd?: HandoffActionCommand): Promise<void> {
      const self = selfHandoff(c);
      void self
        .send(handoffWorkflowQueueName("handoff.command.archive"), cmd ?? {}, {
          wait: true,
          timeout: 60_000,
        })
        .catch((error: unknown) => {
          c.log.warn({
            msg: "archive command failed",
            error: error instanceof Error ? error.message : String(error),
          });
        });
    },
    async kill(c, cmd?: HandoffActionCommand): Promise<void> {
      const self = selfHandoff(c);
      await self.send(handoffWorkflowQueueName("handoff.command.kill"), cmd ?? {}, {
        wait: true,
        timeout: 60_000,
      });
    },
    async get(c): Promise<HandoffRecord> {
      return await getCurrentRecord({ db: c.db, state: c.state });
    },
    async getWorkbench(c) {
      return await getWorkbenchHandoff(c);
    },
    async markWorkbenchUnread(c): Promise<void> {
      const self = selfHandoff(c);
      await self.send(
        handoffWorkflowQueueName("handoff.command.workbench.mark_unread"),
        {},
        {
          wait: true,
          timeout: 20_000,
        },
      );
    },
    async renameWorkbenchHandoff(c, input: HandoffWorkbenchRenameInput): Promise<void> {
      const self = selfHandoff(c);
      await self.send(handoffWorkflowQueueName("handoff.command.workbench.rename_handoff"), { value: input.value } satisfies HandoffWorkbenchValueCommand, {
        wait: true,
        timeout: 20_000,
      });
    },
    async renameWorkbenchBranch(c, input: HandoffWorkbenchRenameInput): Promise<void> {
      const self = selfHandoff(c);
      await self.send(handoffWorkflowQueueName("handoff.command.workbench.rename_branch"), { value: input.value } satisfies HandoffWorkbenchValueCommand, {
        wait: true,
        timeout: 5 * 60_000,
      });
    },
    async createWorkbenchSession(c, input?: { model?: string }): Promise<{ tabId: string }> {
      const self = selfHandoff(c);
      const result = await self.send(
        handoffWorkflowQueueName("handoff.command.workbench.create_session"),
        { ...(input?.model ? { model: input.model } : {}) } satisfies HandoffWorkbenchCreateSessionCommand,
        {
          wait: true,
          timeout: 5 * 60_000,
        },
      );
      return expectQueueResponse<{ tabId: string }>(result);
    },
    async renameWorkbenchSession(c, input: HandoffWorkbenchRenameSessionInput): Promise<void> {
      const self = selfHandoff(c);
      await self.send(
        handoffWorkflowQueueName("handoff.command.workbench.rename_session"),
        { sessionId: input.tabId, title: input.title } satisfies HandoffWorkbenchSessionTitleCommand,
        {
          wait: true,
          timeout: 20_000,
        },
      );
    },
    async setWorkbenchSessionUnread(c, input: HandoffWorkbenchSetSessionUnreadInput): Promise<void> {
      const self = selfHandoff(c);
      await self.send(
        handoffWorkflowQueueName("handoff.command.workbench.set_session_unread"),
        { sessionId: input.tabId, unread: input.unread } satisfies HandoffWorkbenchSessionUnreadCommand,
        {
          wait: true,
          timeout: 20_000,
        },
      );
    },
    async updateWorkbenchDraft(c, input: HandoffWorkbenchUpdateDraftInput): Promise<void> {
      const self = selfHandoff(c);
      await self.send(
        handoffWorkflowQueueName("handoff.command.workbench.update_draft"),
        {
          sessionId: input.tabId,
          text: input.text,
          attachments: input.attachments,
        } satisfies HandoffWorkbenchUpdateDraftCommand,
        {
          wait: true,
          timeout: 20_000,
        },
      );
    },
    async changeWorkbenchModel(c, input: HandoffWorkbenchChangeModelInput): Promise<void> {
      const self = selfHandoff(c);
      await self.send(
        handoffWorkflowQueueName("handoff.command.workbench.change_model"),
        { sessionId: input.tabId, model: input.model } satisfies HandoffWorkbenchChangeModelCommand,
        {
          wait: true,
          timeout: 20_000,
        },
      );
    },
    async sendWorkbenchMessage(c, input: HandoffWorkbenchSendMessageInput): Promise<void> {
      const self = selfHandoff(c);
      await self.send(
        handoffWorkflowQueueName("handoff.command.workbench.send_message"),
        {
          sessionId: input.tabId,
          text: input.text,
          attachments: input.attachments,
        } satisfies HandoffWorkbenchSendMessageCommand,
        {
          wait: true,
          timeout: 10 * 60_000,
        },
      );
    },
    async stopWorkbenchSession(c, input: HandoffTabCommand): Promise<void> {
      const self = selfHandoff(c);
      await self.send(handoffWorkflowQueueName("handoff.command.workbench.stop_session"), { sessionId: input.tabId } satisfies HandoffWorkbenchSessionCommand, {
        wait: true,
        timeout: 5 * 60_000,
      });
    },
    async syncWorkbenchSessionStatus(c, input: HandoffStatusSyncCommand): Promise<void> {
      const self = selfHandoff(c);
      await self.send(handoffWorkflowQueueName("handoff.command.workbench.sync_session_status"), input, {
        wait: true,
        timeout: 20_000,
      });
    },
    async closeWorkbenchSession(c, input: HandoffTabCommand): Promise<void> {
      const self = selfHandoff(c);
      await self.send(
        handoffWorkflowQueueName("handoff.command.workbench.close_session"),
        { sessionId: input.tabId } satisfies HandoffWorkbenchSessionCommand,
        {
          wait: true,
          timeout: 5 * 60_000,
        },
      );
    },
    async publishWorkbenchPr(c): Promise<void> {
      const self = selfHandoff(c);
      await self.send(
        handoffWorkflowQueueName("handoff.command.workbench.publish_pr"),
        {},
        {
          wait: true,
          timeout: 10 * 60_000,
        },
      );
    },
    async revertWorkbenchFile(c, input: { path: string }): Promise<void> {
      const self = selfHandoff(c);
      await self.send(handoffWorkflowQueueName("handoff.command.workbench.revert_file"), input, {
        wait: true,
        timeout: 5 * 60_000,
      });
    },
  },
  run: workflow(runHandoffWorkflow),
});

export { HANDOFF_QUEUE_NAMES };
export { HANDOFF_QUEUE_NAMES };


@@ -1,31 +0,0 @@
export const HANDOFF_QUEUE_NAMES = [
  "handoff.command.initialize",
  "handoff.command.provision",
  "handoff.command.attach",
  "handoff.command.switch",
  "handoff.command.push",
  "handoff.command.sync",
  "handoff.command.merge",
  "handoff.command.archive",
  "handoff.command.kill",
  "handoff.command.get",
  "handoff.command.workbench.mark_unread",
  "handoff.command.workbench.rename_handoff",
  "handoff.command.workbench.rename_branch",
  "handoff.command.workbench.create_session",
  "handoff.command.workbench.rename_session",
  "handoff.command.workbench.set_session_unread",
  "handoff.command.workbench.update_draft",
  "handoff.command.workbench.change_model",
  "handoff.command.workbench.send_message",
  "handoff.command.workbench.stop_session",
  "handoff.command.workbench.sync_session_status",
  "handoff.command.workbench.close_session",
  "handoff.command.workbench.publish_pr",
  "handoff.command.workbench.revert_file",
  "handoff.status_sync.result",
] as const;

export function handoffWorkflowQueueName(name: string): string {
  return name;
}


@@ -1,10 +0,0 @@
import { actorSqliteDb } from "../../../db/actor-sqlite.js";
import * as schema from "./schema.js";
import migrations from "./migrations.js";

export const historyDb = actorSqliteDb({
  actorName: "history",
  schema,
  migrations,
  migrationsFolderUrl: new URL("./drizzle/", import.meta.url),
});


@@ -1,10 +0,0 @@
import { actorSqliteDb } from "../../../db/actor-sqlite.js";
import * as schema from "./schema.js";
import migrations from "./migrations.js";

export const projectDb = actorSqliteDb({
  actorName: "project",
  schema,
  migrations,
  migrationsFolderUrl: new URL("./drizzle/", import.meta.url),
});


@@ -1,10 +0,0 @@
import { actorSqliteDb } from "../../../db/actor-sqlite.js";
import * as schema from "./schema.js";
import migrations from "./migrations.js";

export const sandboxInstanceDb = actorSqliteDb({
  actorName: "sandbox-instance",
  schema,
  migrations,
  migrationsFolderUrl: new URL("./drizzle/", import.meta.url),
});


@@ -1,10 +0,0 @@
import { actorSqliteDb } from "../../../db/actor-sqlite.js";
import * as schema from "./schema.js";
import migrations from "./migrations.js";

export const workspaceDb = actorSqliteDb({
  actorName: "workspace",
  schema,
  migrations,
  migrationsFolderUrl: new URL("./drizzle/", import.meta.url),
});


@@ -1,4 +0,0 @@
CREATE TABLE `handoff_lookup` (
  `handoff_id` text PRIMARY KEY NOT NULL,
  `repo_id` text NOT NULL
);


@@ -1,50 +0,0 @@
// This file is generated by src/actors/_scripts/generate-actor-migrations.ts.
// Source of truth is drizzle-kit output under ./drizzle (meta/_journal.json + *.sql).
// Do not hand-edit this file.
const journal = {
  entries: [
    {
      idx: 0,
      when: 1770924376525,
      tag: "0000_rare_iron_man",
      breakpoints: true,
    },
    {
      idx: 1,
      when: 1770947252912,
      tag: "0001_sleepy_lady_deathstrike",
      breakpoints: true,
    },
    {
      idx: 2,
      when: 1772668800000,
      tag: "0002_tiny_silver_surfer",
      breakpoints: true,
    },
  ],
} as const;

export default {
  journal,
  migrations: {
    m0000: `CREATE TABLE \`provider_profiles\` (
	\`provider_id\` text PRIMARY KEY NOT NULL,
	\`profile_json\` text NOT NULL,
	\`updated_at\` integer NOT NULL
);
`,
    m0001: `CREATE TABLE \`repos\` (
	\`repo_id\` text PRIMARY KEY NOT NULL,
	\`remote_url\` text NOT NULL,
	\`created_at\` integer NOT NULL,
	\`updated_at\` integer NOT NULL
);
`,
    m0002: `CREATE TABLE \`handoff_lookup\` (
	\`handoff_id\` text PRIMARY KEY NOT NULL,
	\`repo_id\` text NOT NULL
);
`,
  } as const,
};


@@ -1,20 +0,0 @@
import { integer, sqliteTable, text } from "rivetkit/db/drizzle";

// SQLite is per workspace actor instance, so no workspaceId column needed.
export const providerProfiles = sqliteTable("provider_profiles", {
  providerId: text("provider_id").notNull().primaryKey(),
  profileJson: text("profile_json").notNull(),
  updatedAt: integer("updated_at").notNull(),
});

export const repos = sqliteTable("repos", {
  repoId: text("repo_id").notNull().primaryKey(),
  remoteUrl: text("remote_url").notNull(),
  createdAt: integer("created_at").notNull(),
  updatedAt: integer("updated_at").notNull(),
});

export const handoffLookup = sqliteTable("handoff_lookup", {
  handoffId: text("handoff_id").notNull().primaryKey(),
  repoId: text("repo_id").notNull(),
});


@@ -1,102 +0,0 @@
import { mkdirSync } from "node:fs";
import { join } from "node:path";
import { fileURLToPath } from "node:url";
import { db as kvDrizzleDb } from "rivetkit/db/drizzle";

// Keep this file decoupled from RivetKit's internal type export paths.
// RivetKit consumes database providers structurally.
export interface RawAccess {
  execute: (query: string, ...args: unknown[]) => Promise<unknown[]>;
  close: () => Promise<void>;
}

export interface DatabaseProviderContext {
  actorId: string;
}

export type DatabaseProvider<DB> = {
  createClient: (ctx: DatabaseProviderContext) => Promise<DB>;
  onMigrate: (client: DB) => void | Promise<void>;
  onDestroy?: (client: DB) => void | Promise<void>;
};

export interface ActorSqliteDbOptions<TSchema extends Record<string, unknown>> {
  actorName: string;
  schema?: TSchema;
  migrations?: unknown;
  migrationsFolderUrl: URL;
  /**
   * Override base directory for per-actor SQLite files.
   *
   * Default: `<cwd>/.openhandoff/backend/sqlite`
   */
  baseDir?: string;
}

export function actorSqliteDb<TSchema extends Record<string, unknown>>(options: ActorSqliteDbOptions<TSchema>): DatabaseProvider<any & RawAccess> {
  const isBunRuntime = typeof (globalThis as any).Bun !== "undefined" && typeof (process as any)?.versions?.bun === "string";
  // Backend tests run in a Node-ish Vitest environment where `bun:sqlite` and
  // Bun's sqlite-backed Drizzle driver are not supported.
  //
  // Additionally, RivetKit's KV-backed SQLite implementation currently has stability
  // issues under Bun in this repo's setup (wa-sqlite runtime errors). Prefer Bun's
  // native SQLite driver in production backend execution.
  if (!isBunRuntime || process.env.VITEST || process.env.NODE_ENV === "test") {
    return kvDrizzleDb({
      schema: options.schema,
      migrations: options.migrations,
    }) as unknown as DatabaseProvider<any & RawAccess>;
  }
  const baseDir = options.baseDir ?? join(process.cwd(), ".openhandoff", "backend", "sqlite");
  const migrationsFolder = fileURLToPath(options.migrationsFolderUrl);
  return {
    createClient: async (ctx) => {
      // Keep Bun-only module out of Vitest/Vite's static import graph.
      const { Database } = await import(/* @vite-ignore */ "bun:sqlite");
      const { drizzle } = await import("drizzle-orm/bun-sqlite");
      const dir = join(baseDir, options.actorName);
      mkdirSync(dir, { recursive: true });
      const dbPath = join(dir, `${ctx.actorId}.sqlite`);
      const sqlite = new Database(dbPath);
      sqlite.exec("PRAGMA journal_mode = WAL;");
      sqlite.exec("PRAGMA foreign_keys = ON;");
      const client = drizzle({
        client: sqlite,
        schema: options.schema,
      });
      return Object.assign(client, {
        execute: async (query: string, ...args: unknown[]) => {
          const stmt = sqlite.query(query);
          try {
            return stmt.all(args as never) as unknown[];
          } catch {
            stmt.run(args as never);
            return [];
          }
        },
        close: async () => {
          sqlite.close();
        },
      } satisfies RawAccess);
    },
    onMigrate: async (client) => {
      const { migrate } = await import("drizzle-orm/bun-sqlite/migrator");
      await migrate(client, {
        migrationsFolder,
      });
    },
    onDestroy: async (client) => {
      await client.close();
    },
  };
}


@@ -1,141 +0,0 @@
import { Hono } from "hono";
import { cors } from "hono/cors";
import { initActorRuntimeContext } from "./actors/context.js";
import { registry } from "./actors/index.js";
import { loadConfig } from "./config/backend.js";
import { createBackends, createNotificationService } from "./notifications/index.js";
import { createDefaultDriver } from "./driver.js";
import { createProviderRegistry } from "./providers/index.js";

export interface BackendStartOptions {
  host?: string;
  port?: number;
}

export async function startBackend(options: BackendStartOptions = {}): Promise<void> {
  // sandbox-agent agent plugins vary on which env var they read for OpenAI/Codex auth.
  // Normalize to keep local dev + docker-compose simple.
  if (!process.env.CODEX_API_KEY && process.env.OPENAI_API_KEY) {
    process.env.CODEX_API_KEY = process.env.OPENAI_API_KEY;
  }
  const config = loadConfig();
  config.backend.host = options.host ?? config.backend.host;
  config.backend.port = options.port ?? config.backend.port;
  // Allow docker-compose/dev environments to supply provider config via env vars
  // instead of writing into the container's config.toml.
  const envFirst = (...keys: string[]): string | undefined => {
    for (const key of keys) {
      const raw = process.env[key];
      if (raw && raw.trim().length > 0) return raw.trim();
    }
    return undefined;
  };
  config.providers.daytona.endpoint = envFirst("HF_DAYTONA_ENDPOINT", "DAYTONA_ENDPOINT") ?? config.providers.daytona.endpoint;
  config.providers.daytona.apiKey = envFirst("HF_DAYTONA_API_KEY", "DAYTONA_API_KEY") ?? config.providers.daytona.apiKey;
  const driver = createDefaultDriver();
  const providers = createProviderRegistry(config, driver);
  const backends = await createBackends(config.notify);
  const notifications = createNotificationService(backends);
  initActorRuntimeContext(config, providers, notifications, driver);
  const inner = registry.serve();
  // Wrap in a Hono app mounted at /api/rivet to serve on the backend port.
  // Uses Bun.serve — cannot use @hono/node-server because it conflicts with
  // RivetKit's internal Bun.serve manager server (Bun bug: mixing Node HTTP
  // server and Bun.serve in the same process breaks Bun.serve's fetch handler).
  const app = new Hono();
  app.use(
    "/api/rivet/*",
    cors({
      origin: "*",
      allowHeaders: ["Content-Type", "Authorization", "x-rivet-token"],
      allowMethods: ["GET", "POST", "PUT", "PATCH", "DELETE", "OPTIONS"],
      exposeHeaders: ["Content-Type"],
    }),
  );
  app.use(
    "/api/rivet",
    cors({
      origin: "*",
      allowHeaders: ["Content-Type", "Authorization", "x-rivet-token"],
      allowMethods: ["GET", "POST", "PUT", "PATCH", "DELETE", "OPTIONS"],
      exposeHeaders: ["Content-Type"],
    }),
  );
  const forward = async (c: any) => {
    try {
      // RivetKit serverless handler is configured with basePath `/api/rivet` by default.
      return await inner.fetch(c.req.raw);
    } catch (err) {
      if (err instanceof URIError) {
        return c.text("Bad Request: Malformed URI", 400);
      }
      throw err;
    }
  };
  app.all("/api/rivet", forward);
  app.all("/api/rivet/*", forward);
  const server = Bun.serve({
    fetch: app.fetch,
    hostname: config.backend.host,
    port: config.backend.port,
  });
  process.on("SIGINT", async () => {
    server.stop();
    process.exit(0);
  });
  process.on("SIGTERM", async () => {
    server.stop();
    process.exit(0);
  });
  // Keep process alive.
  await new Promise<void>(() => undefined);
}

function parseArg(flag: string): string | undefined {
  const idx = process.argv.indexOf(flag);
  if (idx < 0) return undefined;
  return process.argv[idx + 1];
}

function parseEnvPort(value: string | undefined): number | undefined {
  if (!value) {
    return undefined;
  }
  const port = Number(value);
  if (!Number.isInteger(port) || port <= 0 || port > 65535) {
    return undefined;
  }
  return port;
}

async function main(): Promise<void> {
  const cmd = process.argv[2] ?? "start";
  if (cmd !== "start") {
    throw new Error(`Unsupported backend command: ${cmd}`);
  }
  const host = parseArg("--host") ?? process.env.HOST ?? process.env.HF_BACKEND_HOST;
  const port = parseArg("--port") ?? process.env.PORT ?? process.env.HF_BACKEND_PORT;
  await startBackend({
    host,
    port: parseEnvPort(port),
  });
}

if (import.meta.url === `file://${process.argv[1]}`) {
  main().catch((err: unknown) => {
    const message = err instanceof Error ? (err.stack ?? err.message) : String(err);
    console.error(message);
    process.exit(1);
  });
}


@ -1,34 +0,0 @@
import { describe, expect, test } from "vitest";
import { normalizeRemoteUrl, repoIdFromRemote } from "../src/services/repo.js";
describe("normalizeRemoteUrl", () => {
test("accepts GitHub shorthand owner/repo", () => {
expect(normalizeRemoteUrl("rivet-dev/openhandoff")).toBe("https://github.com/rivet-dev/openhandoff.git");
});
test("accepts github.com/owner/repo without scheme", () => {
expect(normalizeRemoteUrl("github.com/rivet-dev/openhandoff")).toBe("https://github.com/rivet-dev/openhandoff.git");
});
test("canonicalizes GitHub repo URLs without .git", () => {
expect(normalizeRemoteUrl("https://github.com/rivet-dev/openhandoff")).toBe("https://github.com/rivet-dev/openhandoff.git");
});
test("canonicalizes GitHub non-clone URLs (e.g. /tree/main)", () => {
expect(normalizeRemoteUrl("https://github.com/rivet-dev/openhandoff/tree/main")).toBe("https://github.com/rivet-dev/openhandoff.git");
});
test("does not rewrite scp-style ssh remotes", () => {
expect(normalizeRemoteUrl("git@github.com:rivet-dev/openhandoff.git")).toBe("git@github.com:rivet-dev/openhandoff.git");
});
});
describe("repoIdFromRemote", () => {
test("repoId is stable across equivalent GitHub inputs", () => {
const a = repoIdFromRemote("rivet-dev/openhandoff");
const b = repoIdFromRemote("https://github.com/rivet-dev/openhandoff.git");
const c = repoIdFromRemote("https://github.com/rivet-dev/openhandoff/tree/main");
expect(a).toBe(b);
expect(b).toBe(c);
});
});


@ -1,78 +0,0 @@
import { Database } from "bun:sqlite";
import { TO_CLIENT_VERSIONED, decodeWorkflowHistoryTransport } from "rivetkit/inspector";
const targets = [
{ actorId: "2e443238457137bf", handoffId: "7df7656e-bbd2-4b8c-bf0f-30d4df2f619a" },
{ actorId: "0e53dd77ef06862f", handoffId: "0e01a31c-2dc1-4a1d-8ab0-9f0816359a85" },
{ actorId: "ea8c0e764c836e5f", handoffId: "cdc22436-4020-4f73-b3e7-7782fec29ae4" },
];
function decodeAscii(u8) {
return new TextDecoder().decode(u8).replace(/[\x00-\x1F\x7F-\xFF]/g, ".");
}
function locationToNames(entry, names) {
return entry.location.map((seg) => {
if (seg.tag === "WorkflowNameIndex") return names[seg.val] ?? `#${seg.val}`;
if (seg.tag === "WorkflowLoopIterationMarker") return `iter(${seg.val.iteration})`;
return seg.tag;
});
}
for (const t of targets) {
const db = new Database(`/root/.local/share/openhandoff/rivetkit/databases/${t.actorId}.db`, { readonly: true });
const token = new TextDecoder().decode(db.query("SELECT value FROM kv WHERE hex(key)=?").get("03").value);
await new Promise((resolve, reject) => {
const ws = new WebSocket(`ws://127.0.0.1:7750/gateway/${t.actorId}/inspector/connect`, [`rivet_inspector_token.${token}`]);
ws.binaryType = "arraybuffer";
const to = setTimeout(() => reject(new Error("timeout")), 15000);
ws.onmessage = (ev) => {
const data = ev.data instanceof ArrayBuffer ? new Uint8Array(ev.data) : new Uint8Array(ev.data.buffer);
const msg = TO_CLIENT_VERSIONED.deserializeWithEmbeddedVersion(data);
if (msg.body.tag !== "Init") return;
const wh = decodeWorkflowHistoryTransport(msg.body.val.workflowHistory);
const entryMetadata = wh.entryMetadata;
const enriched = wh.entries.map((e) => {
const meta = entryMetadata.get(e.id);
return {
id: e.id,
path: locationToNames(e, wh.nameRegistry).join("/"),
kind: e.kind.tag,
status: meta?.status ?? null,
error: meta?.error ?? null,
attempts: meta?.attempts ?? null,
entryError: e.kind.tag === "WorkflowStepEntry" ? (e.kind.val.error ?? null) : null,
};
});
const wfStateRow = db.query("SELECT value FROM kv WHERE hex(key)=?").get("0715041501");
const wfState = wfStateRow?.value ? decodeAscii(new Uint8Array(wfStateRow.value)) : null;
console.log(
JSON.stringify(
{
handoffId: t.handoffId,
actorId: t.actorId,
wfState,
names: wh.nameRegistry,
entries: enriched,
},
null,
2,
),
);
clearTimeout(to);
ws.close();
resolve();
};
ws.onerror = (err) => {
clearTimeout(to);
reject(err);
};
});
}


@ -1,10 +0,0 @@
import { Database } from "bun:sqlite";
const db = new Database("/root/.local/share/openhandoff/rivetkit/databases/2e443238457137bf.db", { readonly: true });
const rows = db.query("SELECT hex(key) as k, value as v FROM kv WHERE hex(key) LIKE ? ORDER BY key").all("07%");
const out = rows.map((r) => {
const bytes = new Uint8Array(r.v);
const txt = new TextDecoder().decode(bytes).replace(/[\x00-\x1F\x7F-\xFF]/g, ".");
return { k: r.k, vlen: bytes.length, txt: txt.slice(0, 260) };
});
console.log(JSON.stringify(out, null, 2));


@ -1,87 +0,0 @@
import { Database } from "bun:sqlite";
import { TO_CLIENT_VERSIONED, TO_SERVER_VERSIONED, CURRENT_VERSION, decodeWorkflowHistoryTransport } from "rivetkit/inspector";
import { decodeReadRangeWire } from "/rivet-handoff-fixes/rivetkit-typescript/packages/traces/src/encoding.ts";
import { readRangeWireToOtlp } from "/rivet-handoff-fixes/rivetkit-typescript/packages/traces/src/read-range.ts";
const actorId = "2e443238457137bf";
const db = new Database(`/root/.local/share/openhandoff/rivetkit/databases/${actorId}.db`, { readonly: true });
const row = db.query("SELECT value FROM kv WHERE hex(key)=?").get("03");
const token = new TextDecoder().decode(row.value);
const ws = new WebSocket(`ws://127.0.0.1:7750/gateway/${actorId}/inspector/connect`, [`rivet_inspector_token.${token}`]);
ws.binaryType = "arraybuffer";
let sent = false;
const timeout = setTimeout(() => {
console.error("timeout");
process.exit(2);
}, 20000);
function send(body) {
const bytes = TO_SERVER_VERSIONED.serializeWithEmbeddedVersion({ body }, CURRENT_VERSION);
ws.send(bytes);
}
ws.onmessage = (ev) => {
const data = ev.data instanceof ArrayBuffer ? new Uint8Array(ev.data) : new Uint8Array(ev.data.buffer);
const msg = TO_CLIENT_VERSIONED.deserializeWithEmbeddedVersion(data);
if (!sent && msg.body.tag === "Init") {
const init = msg.body.val;
const wh = decodeWorkflowHistoryTransport(init.workflowHistory);
const queueSize = Number(init.queueSize);
console.log(JSON.stringify({ tag: "InitSummary", queueSize, rpcs: init.rpcs, historyEntries: wh.entries.length, names: wh.nameRegistry }, null, 2));
send({ tag: "QueueRequest", val: { id: 1n, limit: 20n } });
send({ tag: "WorkflowHistoryRequest", val: { id: 2n } });
send({ tag: "TraceQueryRequest", val: { id: 3n, startMs: 0n, endMs: BigInt(Date.now()), limit: 2000n } });
sent = true;
return;
}
if (msg.body.tag === "QueueResponse") {
const status = msg.body.val.status;
console.log(
JSON.stringify(
{
tag: "QueueResponse",
size: Number(status.size),
truncated: status.truncated,
messages: status.messages.map((m) => ({ id: Number(m.id), name: m.name, createdAtMs: Number(m.createdAtMs) })),
},
null,
2,
),
);
return;
}
if (msg.body.tag === "WorkflowHistoryResponse") {
const wh = decodeWorkflowHistoryTransport(msg.body.val.history);
console.log(
JSON.stringify(
{ tag: "WorkflowHistoryResponse", isWorkflowEnabled: msg.body.val.isWorkflowEnabled, entryCount: wh.entries.length, names: wh.nameRegistry },
null,
2,
),
);
return;
}
if (msg.body.tag === "TraceQueryResponse") {
const wire = decodeReadRangeWire(new Uint8Array(msg.body.val.payload));
const otlp = readRangeWireToOtlp(wire, { attributes: [], droppedAttributesCount: 0 });
const spans = (((otlp?.resourceSpans ?? [])[0]?.scopeSpans ?? [])[0]?.spans ?? []).map((s) => ({ name: s.name, status: s.status?.code }));
console.log(JSON.stringify({ tag: "TraceQueryResponse", spanCount: spans.length, tail: spans.slice(-25) }, null, 2));
clearTimeout(timeout);
ws.close();
process.exit(0);
return;
}
};
ws.onerror = (e) => {
console.error("ws error", e);
clearTimeout(timeout);
process.exit(1);
};


@ -1,51 +0,0 @@
import { Database } from "bun:sqlite";
const actorIds = [
"2e443238457137bf", // 7df...
"2b3fe1c099327eed", // 706...
"331b7f2a0cd19973", // 70c...
"329a70fc689f56ca", // 1f14...
"0e53dd77ef06862f", // 0e01...
"ea8c0e764c836e5f", // cdc error
];
function decodeAscii(u8) {
return new TextDecoder().decode(u8).replace(/[\x00-\x1F\x7F-\xFF]/g, ".");
}
for (const actorId of actorIds) {
const dbPath = `/root/.local/share/openhandoff/rivetkit/databases/${actorId}.db`;
const db = new Database(dbPath, { readonly: true });
const wfStateRow = db.query("SELECT value FROM kv WHERE hex(key)=?").get("0715041501");
const wfState = wfStateRow?.value ? decodeAscii(new Uint8Array(wfStateRow.value)) : null;
const names = db
.query("SELECT value FROM kv WHERE hex(key) LIKE ? ORDER BY key")
.all("07150115%")
.map((r) => decodeAscii(new Uint8Array(r.value)));
const queueRows = db
.query("SELECT hex(key) as k, value FROM kv WHERE hex(key) LIKE ? ORDER BY key")
.all("05%")
.map((r) => ({
key: r.k,
preview: decodeAscii(new Uint8Array(r.value)).slice(0, 220),
}));
const hasCreateSandboxStepName = names.includes("init-create-sandbox") || names.includes("init_create_sandbox");
console.log(
JSON.stringify(
{
actorId,
wfState,
hasCreateSandboxStepName,
names,
queue: queueRows,
},
null,
2,
),
);
}


@ -1,30 +0,0 @@
import { Database } from "bun:sqlite";
import { TO_CLIENT_VERSIONED, decodeWorkflowHistoryTransport } from "rivetkit/inspector";
import util from "node:util";
const actorId = "2e443238457137bf";
const db = new Database(`/root/.local/share/openhandoff/rivetkit/databases/${actorId}.db`, { readonly: true });
const row = db.query("SELECT value FROM kv WHERE hex(key) = ?").get("03");
const token = new TextDecoder().decode(row.value);
const ws = new WebSocket(`ws://127.0.0.1:7750/gateway/${actorId}/inspector/connect`, [`rivet_inspector_token.${token}`]);
ws.binaryType = "arraybuffer";
const timeout = setTimeout(() => process.exit(2), 15000);
ws.onmessage = (ev) => {
const data = ev.data instanceof ArrayBuffer ? new Uint8Array(ev.data) : new Uint8Array(ev.data.buffer);
const msg = TO_CLIENT_VERSIONED.deserializeWithEmbeddedVersion(data);
const init = msg.body?.tag === "Init" ? msg.body.val : null;
if (!init) {
console.log("unexpected", util.inspect(msg, { depth: 4 }));
process.exit(1);
}
const decoded = decodeWorkflowHistoryTransport(init.workflowHistory);
console.log(util.inspect(decoded, { depth: 10, colors: false, compact: false, breakLength: 140 }));
clearTimeout(timeout);
ws.close();
process.exit(0);
};
ws.onerror = () => {
clearTimeout(timeout);
process.exit(1);
};


@ -1,443 +0,0 @@
import {
MODEL_GROUPS,
buildInitialMockLayoutViewModel,
groupWorkbenchProjects,
nowMs,
providerAgent,
randomReply,
removeFileTreePath,
slugify,
uid,
} from "../workbench-model.js";
import type {
HandoffWorkbenchAddTabResponse,
HandoffWorkbenchChangeModelInput,
HandoffWorkbenchCreateHandoffInput,
HandoffWorkbenchCreateHandoffResponse,
HandoffWorkbenchDiffInput,
HandoffWorkbenchRenameInput,
HandoffWorkbenchRenameSessionInput,
HandoffWorkbenchSelectInput,
HandoffWorkbenchSetSessionUnreadInput,
HandoffWorkbenchSendMessageInput,
HandoffWorkbenchSnapshot,
HandoffWorkbenchTabInput,
HandoffWorkbenchUpdateDraftInput,
WorkbenchAgentTab as AgentTab,
WorkbenchHandoff as Handoff,
WorkbenchTranscriptEvent as TranscriptEvent,
} from "@openhandoff/shared";
import type { HandoffWorkbenchClient } from "../workbench-client.js";
function buildTranscriptEvent(params: {
sessionId: string;
sender: "client" | "agent";
createdAt: number;
payload: unknown;
eventIndex: number;
}): TranscriptEvent {
return {
id: uid(),
sessionId: params.sessionId,
sender: params.sender,
createdAt: params.createdAt,
payload: params.payload,
connectionId: "mock-connection",
eventIndex: params.eventIndex,
};
}
class MockWorkbenchStore implements HandoffWorkbenchClient {
private snapshot = buildInitialMockLayoutViewModel();
private listeners = new Set<() => void>();
private pendingTimers = new Map<string, ReturnType<typeof setTimeout>>();
getSnapshot(): HandoffWorkbenchSnapshot {
return this.snapshot;
}
subscribe(listener: () => void): () => void {
this.listeners.add(listener);
return () => {
this.listeners.delete(listener);
};
}
async createHandoff(input: HandoffWorkbenchCreateHandoffInput): Promise<HandoffWorkbenchCreateHandoffResponse> {
const id = uid();
const tabId = `session-${id}`;
const repo = this.snapshot.repos.find((candidate) => candidate.id === input.repoId);
if (!repo) {
throw new Error(`Cannot create mock handoff for unknown repo ${input.repoId}`);
}
const nextHandoff: Handoff = {
id,
repoId: repo.id,
title: input.title?.trim() || "New Handoff",
status: "new",
repoName: repo.label,
updatedAtMs: nowMs(),
branch: input.branch?.trim() || null,
pullRequest: null,
tabs: [
{
id: tabId,
sessionId: tabId,
sessionName: "Session 1",
agent: providerAgent(
MODEL_GROUPS.find((group) => group.models.some((model) => model.id === (input.model ?? "claude-sonnet-4")))?.provider ?? "Claude",
),
model: input.model ?? "claude-sonnet-4",
status: "idle",
thinkingSinceMs: null,
unread: false,
created: false,
draft: { text: "", attachments: [], updatedAtMs: null },
transcript: [],
},
],
fileChanges: [],
diffs: {},
fileTree: [],
};
this.updateState((current) => ({
...current,
handoffs: [nextHandoff, ...current.handoffs],
}));
return { handoffId: id, tabId };
}
async markHandoffUnread(input: HandoffWorkbenchSelectInput): Promise<void> {
this.updateHandoff(input.handoffId, (handoff) => {
const targetTab = handoff.tabs[handoff.tabs.length - 1] ?? null;
if (!targetTab) {
return handoff;
}
return {
...handoff,
tabs: handoff.tabs.map((tab) => (tab.id === targetTab.id ? { ...tab, unread: true } : tab)),
};
});
}
async renameHandoff(input: HandoffWorkbenchRenameInput): Promise<void> {
const value = input.value.trim();
if (!value) {
throw new Error(`Cannot rename handoff ${input.handoffId} to an empty title`);
}
this.updateHandoff(input.handoffId, (handoff) => ({ ...handoff, title: value, updatedAtMs: nowMs() }));
}
async renameBranch(input: HandoffWorkbenchRenameInput): Promise<void> {
const value = input.value.trim();
if (!value) {
throw new Error(`Cannot rename branch for handoff ${input.handoffId} to an empty value`);
}
this.updateHandoff(input.handoffId, (handoff) => ({ ...handoff, branch: value, updatedAtMs: nowMs() }));
}
async archiveHandoff(input: HandoffWorkbenchSelectInput): Promise<void> {
this.updateHandoff(input.handoffId, (handoff) => ({ ...handoff, status: "archived", updatedAtMs: nowMs() }));
}
async publishPr(input: HandoffWorkbenchSelectInput): Promise<void> {
const nextPrNumber = Math.max(0, ...this.snapshot.handoffs.map((handoff) => handoff.pullRequest?.number ?? 0)) + 1;
this.updateHandoff(input.handoffId, (handoff) => ({
...handoff,
updatedAtMs: nowMs(),
pullRequest: { number: nextPrNumber, status: "ready" },
}));
}
async revertFile(input: HandoffWorkbenchDiffInput): Promise<void> {
this.updateHandoff(input.handoffId, (handoff) => {
const file = handoff.fileChanges.find((entry) => entry.path === input.path);
const nextDiffs = { ...handoff.diffs };
delete nextDiffs[input.path];
return {
...handoff,
fileChanges: handoff.fileChanges.filter((entry) => entry.path !== input.path),
diffs: nextDiffs,
fileTree: file?.type === "A" ? removeFileTreePath(handoff.fileTree, input.path) : handoff.fileTree,
};
});
}
async updateDraft(input: HandoffWorkbenchUpdateDraftInput): Promise<void> {
this.assertTab(input.handoffId, input.tabId);
this.updateHandoff(input.handoffId, (handoff) => ({
...handoff,
updatedAtMs: nowMs(),
tabs: handoff.tabs.map((tab) =>
tab.id === input.tabId
? {
...tab,
draft: {
text: input.text,
attachments: input.attachments,
updatedAtMs: nowMs(),
},
}
: tab,
),
}));
}
async sendMessage(input: HandoffWorkbenchSendMessageInput): Promise<void> {
const text = input.text.trim();
if (!text) {
throw new Error(`Cannot send an empty mock prompt for handoff ${input.handoffId}`);
}
this.assertTab(input.handoffId, input.tabId);
const startedAtMs = nowMs();
this.updateHandoff(input.handoffId, (currentHandoff) => {
const isFirstOnHandoff = currentHandoff.status === "new";
const newTitle = isFirstOnHandoff ? (text.length > 50 ? `${text.slice(0, 47)}...` : text) : currentHandoff.title;
const newBranch = isFirstOnHandoff ? `feat/${slugify(newTitle)}` : currentHandoff.branch;
const userMessageLines = [text, ...input.attachments.map((attachment) => `@ ${attachment.filePath}:${attachment.lineNumber}`)];
const userEvent = buildTranscriptEvent({
sessionId: input.tabId,
sender: "client",
createdAt: startedAtMs,
eventIndex: candidateEventIndex(currentHandoff, input.tabId),
payload: {
method: "session/prompt",
params: {
prompt: userMessageLines.map((line) => ({ type: "text", text: line })),
},
},
});
return {
...currentHandoff,
title: newTitle,
branch: newBranch,
status: "running",
updatedAtMs: startedAtMs,
tabs: currentHandoff.tabs.map((candidate) =>
candidate.id === input.tabId
? {
...candidate,
created: true,
status: "running",
unread: false,
thinkingSinceMs: startedAtMs,
draft: { text: "", attachments: [], updatedAtMs: startedAtMs },
transcript: [...candidate.transcript, userEvent],
}
: candidate,
),
};
});
const existingTimer = this.pendingTimers.get(input.tabId);
if (existingTimer) {
clearTimeout(existingTimer);
}
const timer = setTimeout(() => {
const handoff = this.requireHandoff(input.handoffId);
const replyTab = this.requireTab(handoff, input.tabId);
const completedAtMs = nowMs();
const replyEvent = buildTranscriptEvent({
sessionId: input.tabId,
sender: "agent",
createdAt: completedAtMs,
eventIndex: candidateEventIndex(handoff, input.tabId),
payload: {
result: {
text: randomReply(),
durationMs: completedAtMs - startedAtMs,
},
},
});
this.updateHandoff(input.handoffId, (currentHandoff) => {
const updatedTabs = currentHandoff.tabs.map((candidate) => {
if (candidate.id !== input.tabId) {
return candidate;
}
return {
...candidate,
status: "idle" as const,
thinkingSinceMs: null,
unread: true,
transcript: [...candidate.transcript, replyEvent],
};
});
const anyRunning = updatedTabs.some((candidate) => candidate.status === "running");
return {
...currentHandoff,
updatedAtMs: completedAtMs,
tabs: updatedTabs,
status: currentHandoff.status === "archived" ? "archived" : anyRunning ? "running" : "idle",
};
});
this.pendingTimers.delete(input.tabId);
}, 2_500);
this.pendingTimers.set(input.tabId, timer);
}
async stopAgent(input: HandoffWorkbenchTabInput): Promise<void> {
this.assertTab(input.handoffId, input.tabId);
const existing = this.pendingTimers.get(input.tabId);
if (existing) {
clearTimeout(existing);
this.pendingTimers.delete(input.tabId);
}
this.updateHandoff(input.handoffId, (currentHandoff) => {
const updatedTabs = currentHandoff.tabs.map((candidate) =>
candidate.id === input.tabId ? { ...candidate, status: "idle" as const, thinkingSinceMs: null } : candidate,
);
const anyRunning = updatedTabs.some((candidate) => candidate.status === "running");
return {
...currentHandoff,
updatedAtMs: nowMs(),
tabs: updatedTabs,
status: currentHandoff.status === "archived" ? "archived" : anyRunning ? "running" : "idle",
};
});
}
async setSessionUnread(input: HandoffWorkbenchSetSessionUnreadInput): Promise<void> {
this.updateHandoff(input.handoffId, (currentHandoff) => ({
...currentHandoff,
tabs: currentHandoff.tabs.map((candidate) => (candidate.id === input.tabId ? { ...candidate, unread: input.unread } : candidate)),
}));
}
async renameSession(input: HandoffWorkbenchRenameSessionInput): Promise<void> {
const title = input.title.trim();
if (!title) {
throw new Error(`Cannot rename session ${input.tabId} to an empty title`);
}
this.updateHandoff(input.handoffId, (currentHandoff) => ({
...currentHandoff,
tabs: currentHandoff.tabs.map((candidate) => (candidate.id === input.tabId ? { ...candidate, sessionName: title } : candidate)),
}));
}
async closeTab(input: HandoffWorkbenchTabInput): Promise<void> {
this.updateHandoff(input.handoffId, (currentHandoff) => {
if (currentHandoff.tabs.length <= 1) {
return currentHandoff;
}
return {
...currentHandoff,
tabs: currentHandoff.tabs.filter((candidate) => candidate.id !== input.tabId),
};
});
}
async addTab(input: HandoffWorkbenchSelectInput): Promise<HandoffWorkbenchAddTabResponse> {
this.assertHandoff(input.handoffId);
const nextTab: AgentTab = {
id: uid(),
sessionId: null,
sessionName: `Session ${this.requireHandoff(input.handoffId).tabs.length + 1}`,
agent: "Claude",
model: "claude-sonnet-4",
status: "idle",
thinkingSinceMs: null,
unread: false,
created: false,
draft: { text: "", attachments: [], updatedAtMs: null },
transcript: [],
};
this.updateHandoff(input.handoffId, (currentHandoff) => ({
...currentHandoff,
updatedAtMs: nowMs(),
tabs: [...currentHandoff.tabs, nextTab],
}));
return { tabId: nextTab.id };
}
async changeModel(input: HandoffWorkbenchChangeModelInput): Promise<void> {
const group = MODEL_GROUPS.find((candidate) => candidate.models.some((entry) => entry.id === input.model));
if (!group) {
throw new Error(`Unable to resolve model provider for ${input.model}`);
}
this.updateHandoff(input.handoffId, (currentHandoff) => ({
...currentHandoff,
tabs: currentHandoff.tabs.map((candidate) =>
candidate.id === input.tabId ? { ...candidate, model: input.model, agent: providerAgent(group.provider) } : candidate,
),
}));
}
private updateState(updater: (current: HandoffWorkbenchSnapshot) => HandoffWorkbenchSnapshot): void {
const nextSnapshot = updater(this.snapshot);
this.snapshot = {
...nextSnapshot,
projects: groupWorkbenchProjects(nextSnapshot.repos, nextSnapshot.handoffs),
};
this.notify();
}
private updateHandoff(handoffId: string, updater: (handoff: Handoff) => Handoff): void {
this.assertHandoff(handoffId);
this.updateState((current) => ({
...current,
handoffs: current.handoffs.map((handoff) => (handoff.id === handoffId ? updater(handoff) : handoff)),
}));
}
private notify(): void {
for (const listener of this.listeners) {
listener();
}
}
private assertHandoff(handoffId: string): void {
this.requireHandoff(handoffId);
}
private assertTab(handoffId: string, tabId: string): void {
const handoff = this.requireHandoff(handoffId);
this.requireTab(handoff, tabId);
}
private requireHandoff(handoffId: string): Handoff {
const handoff = this.snapshot.handoffs.find((candidate) => candidate.id === handoffId);
if (!handoff) {
throw new Error(`Unable to find mock handoff ${handoffId}`);
}
return handoff;
}
private requireTab(handoff: Handoff, tabId: string): AgentTab {
const tab = handoff.tabs.find((candidate) => candidate.id === tabId);
if (!tab) {
throw new Error(`Unable to find mock tab ${tabId} in handoff ${handoff.id}`);
}
return tab;
}
}
function candidateEventIndex(handoff: Handoff, tabId: string): number {
const tab = handoff.tabs.find((candidate) => candidate.id === tabId);
return (tab?.transcript.length ?? 0) + 1;
}
let sharedMockWorkbenchClient: HandoffWorkbenchClient | null = null;
export function getSharedMockWorkbenchClient(): HandoffWorkbenchClient {
if (!sharedMockWorkbenchClient) {
sharedMockWorkbenchClient = new MockWorkbenchStore();
}
return sharedMockWorkbenchClient;
}


@ -1,64 +0,0 @@
import type {
HandoffWorkbenchAddTabResponse,
HandoffWorkbenchChangeModelInput,
HandoffWorkbenchCreateHandoffInput,
HandoffWorkbenchCreateHandoffResponse,
HandoffWorkbenchDiffInput,
HandoffWorkbenchRenameInput,
HandoffWorkbenchRenameSessionInput,
HandoffWorkbenchSelectInput,
HandoffWorkbenchSetSessionUnreadInput,
HandoffWorkbenchSendMessageInput,
HandoffWorkbenchSnapshot,
HandoffWorkbenchTabInput,
HandoffWorkbenchUpdateDraftInput,
} from "@openhandoff/shared";
import type { BackendClient } from "./backend-client.js";
import { getSharedMockWorkbenchClient } from "./mock/workbench-client.js";
import { createRemoteWorkbenchClient } from "./remote/workbench-client.js";
export type HandoffWorkbenchClientMode = "mock" | "remote";
export interface CreateHandoffWorkbenchClientOptions {
mode: HandoffWorkbenchClientMode;
backend?: BackendClient;
workspaceId?: string;
}
export interface HandoffWorkbenchClient {
getSnapshot(): HandoffWorkbenchSnapshot;
subscribe(listener: () => void): () => void;
createHandoff(input: HandoffWorkbenchCreateHandoffInput): Promise<HandoffWorkbenchCreateHandoffResponse>;
markHandoffUnread(input: HandoffWorkbenchSelectInput): Promise<void>;
renameHandoff(input: HandoffWorkbenchRenameInput): Promise<void>;
renameBranch(input: HandoffWorkbenchRenameInput): Promise<void>;
archiveHandoff(input: HandoffWorkbenchSelectInput): Promise<void>;
publishPr(input: HandoffWorkbenchSelectInput): Promise<void>;
revertFile(input: HandoffWorkbenchDiffInput): Promise<void>;
updateDraft(input: HandoffWorkbenchUpdateDraftInput): Promise<void>;
sendMessage(input: HandoffWorkbenchSendMessageInput): Promise<void>;
stopAgent(input: HandoffWorkbenchTabInput): Promise<void>;
setSessionUnread(input: HandoffWorkbenchSetSessionUnreadInput): Promise<void>;
renameSession(input: HandoffWorkbenchRenameSessionInput): Promise<void>;
closeTab(input: HandoffWorkbenchTabInput): Promise<void>;
addTab(input: HandoffWorkbenchSelectInput): Promise<HandoffWorkbenchAddTabResponse>;
changeModel(input: HandoffWorkbenchChangeModelInput): Promise<void>;
}
export function createHandoffWorkbenchClient(options: CreateHandoffWorkbenchClientOptions): HandoffWorkbenchClient {
if (options.mode === "mock") {
return getSharedMockWorkbenchClient();
}
if (!options.backend) {
throw new Error("Remote handoff workbench client requires a backend client");
}
if (!options.workspaceId) {
throw new Error("Remote handoff workbench client requires a workspace id");
}
return createRemoteWorkbenchClient({
backend: options.backend,
workspaceId: options.workspaceId,
});
}


@ -1,130 +0,0 @@
import { useEffect } from "react";
import { setFrontendErrorContext } from "@openhandoff/frontend-errors/client";
import { Navigate, Outlet, createRootRoute, createRoute, createRouter, useRouterState } from "@tanstack/react-router";
import { MockLayout } from "../components/mock-layout";
import { defaultWorkspaceId } from "../lib/env";
import { handoffWorkbenchClient } from "../lib/workbench";
const rootRoute = createRootRoute({
component: RootLayout,
});
const indexRoute = createRoute({
getParentRoute: () => rootRoute,
path: "/",
component: () => <Navigate to="/workspaces/$workspaceId" params={{ workspaceId: defaultWorkspaceId }} replace />,
});
const workspaceRoute = createRoute({
getParentRoute: () => rootRoute,
path: "/workspaces/$workspaceId",
component: WorkspaceLayoutRoute,
});
const workspaceIndexRoute = createRoute({
getParentRoute: () => workspaceRoute,
path: "/",
component: WorkspaceRoute,
});
const handoffRoute = createRoute({
getParentRoute: () => workspaceRoute,
path: "handoffs/$handoffId",
validateSearch: (search: Record<string, unknown>) => ({
sessionId: typeof search.sessionId === "string" && search.sessionId.trim().length > 0 ? search.sessionId : undefined,
}),
component: HandoffRoute,
});
const repoRoute = createRoute({
getParentRoute: () => workspaceRoute,
path: "repos/$repoId",
component: RepoRoute,
});
const routeTree = rootRoute.addChildren([indexRoute, workspaceRoute.addChildren([workspaceIndexRoute, handoffRoute, repoRoute])]);
export const router = createRouter({ routeTree });
declare module "@tanstack/react-router" {
interface Register {
router: typeof router;
}
}
function WorkspaceLayoutRoute() {
return <Outlet />;
}
function WorkspaceRoute() {
const { workspaceId } = workspaceRoute.useParams();
useEffect(() => {
setFrontendErrorContext({
workspaceId,
handoffId: undefined,
});
}, [workspaceId]);
return <MockLayout workspaceId={workspaceId} selectedHandoffId={null} selectedSessionId={null} />;
}
function HandoffRoute() {
const { workspaceId, handoffId } = handoffRoute.useParams();
const { sessionId } = handoffRoute.useSearch();
useEffect(() => {
setFrontendErrorContext({
workspaceId,
handoffId,
repoId: undefined,
});
}, [handoffId, workspaceId]);
return <MockLayout workspaceId={workspaceId} selectedHandoffId={handoffId} selectedSessionId={sessionId ?? null} />;
}
function RepoRoute() {
const { workspaceId, repoId } = repoRoute.useParams();
useEffect(() => {
setFrontendErrorContext({
workspaceId,
handoffId: undefined,
repoId,
});
}, [repoId, workspaceId]);
const activeHandoffId = handoffWorkbenchClient.getSnapshot().handoffs.find((handoff) => handoff.repoId === repoId)?.id;
if (!activeHandoffId) {
return <Navigate to="/workspaces/$workspaceId" params={{ workspaceId }} replace />;
}
return (
<Navigate
to="/workspaces/$workspaceId/handoffs/$handoffId"
params={{
workspaceId,
handoffId: activeHandoffId,
}}
search={{ sessionId: undefined }}
replace
/>
);
}
function RootLayout() {
return (
<>
<RouteContextSync />
<Outlet />
</>
);
}
function RouteContextSync() {
const location = useRouterState({
select: (state) => state.location,
});
useEffect(() => {
setFrontendErrorContext({
route: `${location.pathname}${location.search}${location.hash}`,
});
}, [location.hash, location.pathname, location.search]);
return null;
}


@ -27,27 +27,23 @@ Use `pnpm` workspaces and Turborepo.
- `packages/cli` is fully disabled for active development.
- Do not implement new behavior in `packages/cli` unless explicitly requested.
- Frontend is the primary product surface; prioritize `packages/frontend` + supporting `packages/client`/`packages/backend`.
- Workspace `build`, `typecheck`, and `test` intentionally exclude `@openhandoff/cli`.
- Workspace `build`, `typecheck`, and `test` intentionally exclude `@sandbox-agent/foundry-cli`.
- `pnpm-workspace.yaml` excludes `packages/cli` from workspace package resolution.
## Common Commands
- Foundry is the canonical name for this product tree. Do not introduce or preserve legacy pre-Foundry naming in code, docs, commands, or runtime paths.
- Install deps: `pnpm install`
- Full active-workspace validation: `pnpm -w typecheck`, `pnpm -w build`, `pnpm -w test`
- Start the full dev stack: `just factory-dev`
- Start the local production-build preview stack: `just factory-preview`
- Start only the backend locally: `just factory-backend-start`
- Start only the frontend locally: `pnpm --filter @openhandoff/frontend dev`
- Start the frontend against the mock workbench client: `OPENHANDOFF_FRONTEND_CLIENT_MODE=mock pnpm --filter @openhandoff/frontend dev`
- Stop the compose dev stack: `just factory-dev-down`
- Tail compose logs: `just factory-dev-logs`
- Stop the preview stack: `just factory-preview-down`
- Tail preview logs: `just factory-preview-logs`
## Local Env
- For local Foundry dev server setup, keep a personal env copy at `~/misc/the-foundry.env`.
- To run the dev server from this workspace, copy that content into the repo root `.env`. Root `.env` is gitignored in this repo, so keep local secrets there and do not commit them.
- Start the full dev stack: `just foundry-dev`
- Start the local production-build preview stack: `just foundry-preview`
- Start only the backend locally: `just foundry-backend-start`
- Start only the frontend locally: `pnpm --filter @sandbox-agent/foundry-frontend dev`
- Start the frontend against the mock workbench client: `FOUNDRY_FRONTEND_CLIENT_MODE=mock pnpm --filter @sandbox-agent/foundry-frontend dev`
- Stop the compose dev stack: `just foundry-dev-down`
- Tail compose logs: `just foundry-dev-logs`
- Stop the preview stack: `just foundry-preview-down`
- Tail preview logs: `just foundry-preview-logs`
## Frontend + Client Boundary
@ -85,12 +81,12 @@ For all Rivet/RivetKit implementation:
2. SQLite is **per actor instance** (per actor key), not a shared backend-global database:
- Each actor instance gets its own SQLite DB.
- Schema design should assume a single actor instance owns the entire DB.
- Do not add `workspaceId`/`repoId`/`handoffId` columns just to "namespace" rows for a given actor instance; use actor state and/or the actor key instead.
- Example: the `handoff` actor instance already represents `(workspaceId, repoId, handoffId)`, so its SQLite tables should not need those columns for primary keys.
- Do not add `workspaceId`/`repoId`/`taskId` columns just to "namespace" rows for a given actor instance; use actor state and/or the actor key instead.
- Example: the `task` actor instance already represents `(workspaceId, repoId, taskId)`, so its SQLite tables should not need those columns for primary keys.
3. Do not use backend-global SQLite singletons; database access must go through actor `db` providers (`c.db`).
4. The default dependency source for RivetKit is the published `rivetkit` package so workspace installs and CI remain self-contained.
5. When working on coordinated RivetKit changes, you may temporarily relink to a local checkout instead of the published package.
- Dedicated local checkout for this workspace: `/Users/nathan/conductor/workspaces/handoff/rivet-checkout`
- Dedicated local checkout for this workspace: `/Users/nathan/conductor/workspaces/task/rivet-checkout`
- Preferred local link target: `../rivet-checkout/rivetkit-typescript/packages/rivetkit`
- Sub-packages (`@rivetkit/sqlite-vfs`, etc.) resolve transitively from the RivetKit workspace when using the local checkout.
6. Before using a local checkout, build RivetKit in the rivet repo:
@ -108,7 +104,7 @@ For all Rivet/RivetKit implementation:
curl -sS http://127.0.0.1:7741/api/rivet/metadata | jq -r '.clientEndpoint'
```
- List actors:
- `GET {manager}/actors?name=handoff`
- `GET {manager}/actors?name=task`
- Inspector endpoints (path prefix: `/gateway/{actorId}/inspector`):
- `GET /state`
- `PATCH /state`
@ -122,12 +118,12 @@ For all Rivet/RivetKit implementation:
- Auth:
- Production: send `Authorization: Bearer $RIVET_INSPECTOR_TOKEN`.
- Development: auth can be skipped when no inspector token is configured.
- Handoff workflow quick inspect:
- Task workflow quick inspect:
```bash
MGR="$(curl -sS http://127.0.0.1:7741/api/rivet/metadata | jq -r '.clientEndpoint')"
HID="7df7656e-bbd2-4b8c-bf0f-30d4df2f619a"
AID="$(curl -sS "$MGR/actors?name=handoff" \
| jq -r --arg hid "$HID" '.actors[] | select(.key | endswith("/handoff/\($hid)")) | .actor_id' \
AID="$(curl -sS "$MGR/actors?name=task" \
| jq -r --arg hid "$HID" '.actors[] | select(.key | endswith("/task/\($hid)")) | .actor_id' \
| head -n1)"
curl -sS "$MGR/gateway/$AID/inspector/workflow-history" | jq .
curl -sS "$MGR/gateway/$AID/inspector/summary" | jq .
@ -140,11 +136,11 @@ For all Rivet/RivetKit implementation:
- Workspace resolution order: `--workspace` flag -> config default -> `"default"`.
- `ControlPlaneActor` is replaced by `WorkspaceActor` (workspace coordinator).
- Every actor key must be prefixed with workspace namespace (`["ws", workspaceId, ...]`).
- CLI/TUI/GUI must use `@openhandoff/client` (`packages/client`) for backend access; `rivetkit/client` imports are only allowed inside `packages/client`.
- CLI/TUI/GUI must use `@sandbox-agent/foundry-client` (`packages/client`) for backend access; `rivetkit/client` imports are only allowed inside `packages/client`.
- Do not add custom backend REST endpoints (no `/v1/*` shim layer).
- We own the sandbox-agent project; treat sandbox-agent defects as first-party bugs and fix them instead of working around them.
- Keep strict single-writer ownership: each table/row has exactly one actor writer.
- Parent actors (`workspace`, `project`, `handoff`, `history`, `sandbox-instance`) use command-only loops with no timeout.
- Parent actors (`workspace`, `project`, `task`, `history`, `sandbox-instance`) use command-only loops with no timeout.
- Periodic syncing lives in dedicated child actors with one timeout cadence each.
- Actor handle policy:
- Prefer explicit `get` or explicit `create` based on workflow intent; do not default to `getOrCreate`.
@ -152,13 +148,13 @@ For all Rivet/RivetKit implementation:
- Use create semantics only on explicit provisioning/create paths where creating a new actor instance is intended.
- `getOrCreate` is a last resort for create paths when an explicit create API is unavailable; never use it in read/command paths.
- For long-lived cross-actor links (for example sandbox/session runtime access), persist actor identity (`actorId`) and keep a fallback lookup path by actor id.
- Docker dev: `compose.dev.yaml` mounts a named volume at `/root/.local/share/openhandoff/repos` to persist backend-managed git clones across restarts. Code must still work if this volume is not present (create directories as needed).
- Docker dev: `compose.dev.yaml` mounts a named volume at `/root/.local/share/foundry/repos` to persist backend-managed git clones across restarts. Code must still work if this volume is not present (create directories as needed).
- RivetKit actor `c.state` is durable, but in Docker it is stored under `/root/.local/share/rivetkit`. If that path is not persisted, actor state-derived indexes (for example, in `project` actor state) can be lost after container recreation even when other data still exists.
- Workflow history divergence policy:
- Production: never auto-delete actor state to resolve `HistoryDivergedError`; ship explicit workflow migrations (`ctx.removed(...)`, step compatibility).
- Development: manual local state reset is allowed as an operator recovery path when migrations are not yet available.
- Storage rule of thumb:
- Put simple metadata in `c.state` (KV state): small scalars and identifiers like `{ handoffId }`, `{ repoId }`, booleans, counters, timestamps, status strings.
- Put simple metadata in `c.state` (KV state): small scalars and identifiers like `{ taskId }`, `{ repoId }`, booleans, counters, timestamps, status strings.
- If it grows beyond trivial (arrays, maps, histories, query/filter needs, relational consistency), use SQLite + Drizzle in `c.db`.
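The workspace-namespace key rule above can be sketched in isolation. The two helpers below mirror the renamed functions in `keys.ts` from this diff; the `ActorKey` alias is a local stand-in for the rivetkit type so the sketch is self-contained:

```typescript
// Local stand-in for the rivetkit ActorKey type.
type ActorKey = string[];

// Mirrors projectKey/taskKey from foundry/packages/backend/src/actors/keys.ts.
// Every key is prefixed with the workspace namespace: ["ws", workspaceId, ...].
function projectKey(workspaceId: string, repoId: string): ActorKey {
	return ["ws", workspaceId, "project", repoId];
}

function taskKey(workspaceId: string, repoId: string, taskId: string): ActorKey {
	return ["ws", workspaceId, "project", repoId, "task", taskId];
}

// The task key embeds the full (workspaceId, repoId, taskId) identity, which is
// why task-actor SQLite tables do not need those columns.
const key = taskKey("ws-1", "repo-1", "task-1");
console.log(key.join("/")); // ws/ws-1/project/repo-1/task/task-1
```

Because the actor key already carries the full identity, the actor's own tables can use plain local primary keys.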
## Testing Policy
@ -168,7 +164,6 @@ For all Rivet/RivetKit implementation:
- Integration tests use `setupTest()` from `rivetkit/test` and are gated behind `HF_ENABLE_ACTOR_INTEGRATION_TESTS=1`.
- End-to-end testing must run against the dev backend started via `docker compose -f compose.dev.yaml up` (host -> container). Do not run E2E against an in-process test runtime.
- E2E tests should talk to the backend over HTTP (default `http://127.0.0.1:7741/api/rivet`) and use real GitHub repos/PRs.
- Current org test repo: `rivet-dev/sandbox-agent-testing` (`https://github.com/rivet-dev/sandbox-agent-testing`).
- Secrets (e.g. `OPENAI_API_KEY`, `GITHUB_TOKEN`/`GH_TOKEN`) must be provided via environment variables, never hardcoded in the repo.
- Treat client E2E tests in `packages/client/test` as the primary end-to-end source of truth for product behavior.
- Keep backend tests small and targeted. Only retain backend-only tests for invariants or persistence rules that are not well-covered through client E2E.
@ -176,7 +171,7 @@ For all Rivet/RivetKit implementation:
## Config
- Keep config path at `~/.config/openhandoff/config.toml`.
- Keep config path at `~/.config/foundry/config.toml`.
- Evolve properties in place; do not move config location.
## Project Guidance


@ -5,8 +5,8 @@
1. Clone:
```bash
git clone https://github.com/rivet-dev/openhandoff.git
cd openhandoff
git clone https://github.com/rivet-dev/sandbox-agent.git
cd sandbox-agent/foundry
```
2. Install dependencies:
@ -35,7 +35,7 @@ Build local RivetKit before backend changes that depend on Rivet internals:
cd ../rivet
pnpm build -F rivetkit
cd /path/to/openhandoff
cd /path/to/sandbox-agent/foundry
just sync-rivetkit
```
@ -54,11 +54,11 @@ pnpm -w test
Start the dev backend (hot reload via `bun --watch`) and Vite frontend via Docker Compose:
```bash
just factory-dev
just foundry-dev
```
Stop it:
```bash
just factory-dev-down
just foundry-dev-down
```


@ -22,19 +22,19 @@ COPY packages/rivetkit-vendor/sqlite-vfs-win32-x64/package.json packages/rivetki
COPY packages/rivetkit-vendor/runner/package.json packages/rivetkit-vendor/runner/package.json
COPY packages/rivetkit-vendor/runner-protocol/package.json packages/rivetkit-vendor/runner-protocol/package.json
COPY packages/rivetkit-vendor/virtual-websocket/package.json packages/rivetkit-vendor/virtual-websocket/package.json
RUN pnpm fetch --frozen-lockfile --filter @openhandoff/backend...
RUN pnpm fetch --frozen-lockfile --filter @sandbox-agent/foundry-backend...
FROM base AS build
COPY --from=deps /pnpm/store /pnpm/store
COPY . .
RUN pnpm install --frozen-lockfile --prefer-offline --filter @openhandoff/backend...
RUN pnpm --filter @openhandoff/shared build
RUN pnpm --filter @openhandoff/backend build
RUN pnpm --filter @openhandoff/backend deploy --prod --legacy /out
RUN pnpm install --frozen-lockfile --prefer-offline --filter @sandbox-agent/foundry-backend...
RUN pnpm --filter @sandbox-agent/foundry-shared build
RUN pnpm --filter @sandbox-agent/foundry-backend build
RUN pnpm --filter @sandbox-agent/foundry-backend deploy --prod --legacy /out
FROM oven/bun:1.2 AS runtime
ENV NODE_ENV=production
ENV HOME=/home/handoff
ENV HOME=/home/task
WORKDIR /app
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
@ -43,11 +43,11 @@ RUN apt-get update \
gh \
openssh-client \
&& rm -rf /var/lib/apt/lists/*
RUN addgroup --system --gid 1001 handoff \
&& adduser --system --uid 1001 --home /home/handoff --ingroup handoff handoff \
&& mkdir -p /home/handoff \
&& chown -R handoff:handoff /home/handoff /app
RUN addgroup --system --gid 1001 task \
&& adduser --system --uid 1001 --home /home/task --ingroup task task \
&& mkdir -p /home/task \
&& chown -R task:task /home/task /app
COPY --from=build /out ./
USER handoff
USER task
EXPOSE 7741
CMD ["bun", "dist/index.js", "start", "--host", "0.0.0.0"]


@ -1,8 +1,8 @@
# OpenHandoff
# Foundry
TypeScript workspace handoff system powered by RivetKit actors, SQLite/Drizzle state, and OpenTUI.
TypeScript workspace task system powered by RivetKit actors, SQLite/Drizzle state, and OpenTUI.
**Documentation**: [openhandoff.dev](https://openhandoff.dev)
**Documentation**: see `../docs/` in the repository root
## Quick Install

foundry/compose.dev.yaml (new file)

@ -0,0 +1,90 @@
name: foundry
services:
backend:
build:
context: ..
dockerfile: foundry/docker/backend.dev.Dockerfile
image: foundry-backend-dev
working_dir: /app
environment:
HF_BACKEND_HOST: "0.0.0.0"
HF_BACKEND_PORT: "7741"
HF_RIVET_MANAGER_PORT: "8750"
RIVETKIT_STORAGE_PATH: "/root/.local/share/foundry/rivetkit"
# Pass through credentials needed for agent execution + PR creation in dev/e2e.
# Do not hardcode secrets; set these in your environment when starting compose.
ANTHROPIC_API_KEY: "${ANTHROPIC_API_KEY:-}"
CLAUDE_API_KEY: "${CLAUDE_API_KEY:-${ANTHROPIC_API_KEY:-}}"
OPENAI_API_KEY: "${OPENAI_API_KEY:-}"
# sandbox-agent codex plugin currently expects CODEX_API_KEY. Map from OPENAI_API_KEY for convenience.
CODEX_API_KEY: "${CODEX_API_KEY:-${OPENAI_API_KEY:-}}"
# Support either GITHUB_TOKEN or GITHUB_PAT in local env files.
GITHUB_TOKEN: "${GITHUB_TOKEN:-${GITHUB_PAT:-}}"
GH_TOKEN: "${GH_TOKEN:-${GITHUB_TOKEN:-${GITHUB_PAT:-}}}"
DAYTONA_ENDPOINT: "${DAYTONA_ENDPOINT:-}"
DAYTONA_API_KEY: "${DAYTONA_API_KEY:-}"
HF_DAYTONA_ENDPOINT: "${HF_DAYTONA_ENDPOINT:-}"
HF_DAYTONA_API_KEY: "${HF_DAYTONA_API_KEY:-}"
ports:
- "7741:7741"
# RivetKit manager (used by browser clients after /api/rivet metadata redirect in dev)
- "8750:8750"
volumes:
- "..:/app"
# The linked RivetKit checkout resolves from Foundry packages to /task/rivet-checkout in-container.
- "../../../task/rivet-checkout:/task/rivet-checkout:ro"
# Reuse the host Codex auth profile for local sandbox-agent Codex sessions in dev.
- "${HOME}/.codex:/root/.codex"
# Keep backend dependency installs Linux-native instead of using host node_modules.
- "foundry_backend_root_node_modules:/app/node_modules"
- "foundry_backend_backend_node_modules:/app/foundry/packages/backend/node_modules"
- "foundry_backend_shared_node_modules:/app/foundry/packages/shared/node_modules"
- "foundry_backend_persist_rivet_node_modules:/app/sdks/persist-rivet/node_modules"
- "foundry_backend_typescript_node_modules:/app/sdks/typescript/node_modules"
- "foundry_backend_pnpm_store:/root/.local/share/pnpm/store"
# Persist backend-managed local git clones across container restarts.
- "foundry_git_repos:/root/.local/share/foundry/repos"
# Persist RivetKit local storage across container restarts.
- "foundry_rivetkit_storage:/root/.local/share/foundry/rivetkit"
frontend:
build:
context: ..
dockerfile: foundry/docker/frontend.dev.Dockerfile
working_dir: /app
depends_on:
- backend
environment:
HOME: "/tmp"
HF_BACKEND_HTTP: "http://backend:7741"
ports:
- "4173:4173"
volumes:
- "..:/app"
# Ensure logs in .foundry/ persist on the host even if we change source mounts later.
- "./.foundry:/app/foundry/.foundry"
- "../../../task/rivet-checkout:/task/rivet-checkout:ro"
# Use Linux-native workspace dependencies inside the container instead of host node_modules.
- "foundry_node_modules:/app/node_modules"
- "foundry_client_node_modules:/app/foundry/packages/client/node_modules"
- "foundry_frontend_errors_node_modules:/app/foundry/packages/frontend-errors/node_modules"
- "foundry_frontend_node_modules:/app/foundry/packages/frontend/node_modules"
- "foundry_shared_node_modules:/app/foundry/packages/shared/node_modules"
- "foundry_pnpm_store:/tmp/.local/share/pnpm/store"
volumes:
foundry_backend_root_node_modules: {}
foundry_backend_backend_node_modules: {}
foundry_backend_shared_node_modules: {}
foundry_backend_persist_rivet_node_modules: {}
foundry_backend_typescript_node_modules: {}
foundry_backend_pnpm_store: {}
foundry_git_repos: {}
foundry_rivetkit_storage: {}
foundry_node_modules: {}
foundry_client_node_modules: {}
foundry_frontend_errors_node_modules: {}
foundry_frontend_node_modules: {}
foundry_shared_node_modules: {}
foundry_pnpm_store: {}


@ -1,16 +1,16 @@
name: openhandoff-preview
name: foundry-preview
services:
backend:
build:
context: ..
dockerfile: quebec/docker/backend.preview.Dockerfile
image: openhandoff-backend-preview
dockerfile: foundry/docker/backend.preview.Dockerfile
image: foundry-backend-preview
environment:
HF_BACKEND_HOST: "0.0.0.0"
HF_BACKEND_PORT: "7841"
HF_RIVET_MANAGER_PORT: "8850"
RIVETKIT_STORAGE_PATH: "/root/.local/share/openhandoff/rivetkit"
RIVETKIT_STORAGE_PATH: "/root/.local/share/foundry/rivetkit"
ANTHROPIC_API_KEY: "${ANTHROPIC_API_KEY:-}"
CLAUDE_API_KEY: "${CLAUDE_API_KEY:-${ANTHROPIC_API_KEY:-}}"
OPENAI_API_KEY: "${OPENAI_API_KEY:-}"
@ -26,19 +26,19 @@ services:
- "8850:8850"
volumes:
- "${HOME}/.codex:/root/.codex"
- "openhandoff_preview_git_repos:/root/.local/share/openhandoff/repos"
- "openhandoff_preview_rivetkit_storage:/root/.local/share/openhandoff/rivetkit"
- "foundry_preview_git_repos:/root/.local/share/foundry/repos"
- "foundry_preview_rivetkit_storage:/root/.local/share/foundry/rivetkit"
frontend:
build:
context: ..
dockerfile: quebec/docker/frontend.preview.Dockerfile
image: openhandoff-frontend-preview
dockerfile: foundry/docker/frontend.preview.Dockerfile
image: foundry-frontend-preview
depends_on:
- backend
ports:
- "4273:4273"
volumes:
openhandoff_preview_git_repos: {}
openhandoff_preview_rivetkit_storage: {}
foundry_preview_git_repos: {}
foundry_preview_rivetkit_storage: {}


@ -39,4 +39,4 @@ ENV SANDBOX_AGENT_BIN="/root/.local/bin/sandbox-agent"
WORKDIR /app
CMD ["bash", "-lc", "git config --global --add safe.directory /app >/dev/null 2>&1 || true; pnpm install --force --frozen-lockfile --filter @openhandoff/backend... && exec bun factory/packages/backend/src/index.ts start --host 0.0.0.0 --port 7741"]
CMD ["bash", "-lc", "git config --global --add safe.directory /app >/dev/null 2>&1 || true; pnpm install --force --frozen-lockfile --filter @sandbox-agent/foundry-backend... && exec bun foundry/packages/backend/src/index.ts start --host 0.0.0.0 --port 7741"]


@ -42,8 +42,8 @@ COPY quebec /workspace/quebec
COPY rivet-checkout /workspace/rivet-checkout
RUN pnpm install --frozen-lockfile
RUN pnpm --filter @openhandoff/shared build
RUN pnpm --filter @openhandoff/client build
RUN pnpm --filter @openhandoff/backend build
RUN pnpm --filter @sandbox-agent/foundry-shared build
RUN pnpm --filter @sandbox-agent/foundry-client build
RUN pnpm --filter @sandbox-agent/foundry-backend build
CMD ["bash", "-lc", "git config --global --add safe.directory /workspace/quebec >/dev/null 2>&1 || true; exec bun packages/backend/dist/index.js start --host 0.0.0.0 --port 7841"]


@ -8,4 +8,4 @@ RUN npm install -g pnpm@10.28.2
WORKDIR /app
CMD ["bash", "-lc", "pnpm install --force --frozen-lockfile --filter @openhandoff/frontend... && cd factory/packages/frontend && exec pnpm vite --host 0.0.0.0 --port 4173"]
CMD ["bash", "-lc", "pnpm install --force --frozen-lockfile --filter @sandbox-agent/foundry-frontend... && cd foundry/packages/frontend && exec pnpm vite --host 0.0.0.0 --port 4173"]


@ -10,10 +10,10 @@ COPY quebec /workspace/quebec
COPY rivet-checkout /workspace/rivet-checkout
RUN pnpm install --frozen-lockfile
RUN pnpm --filter @openhandoff/shared build
RUN pnpm --filter @openhandoff/client build
RUN pnpm --filter @openhandoff/frontend-errors build
RUN pnpm --filter @openhandoff/frontend build
RUN pnpm --filter @sandbox-agent/foundry-shared build
RUN pnpm --filter @sandbox-agent/foundry-client build
RUN pnpm --filter @sandbox-agent/foundry-frontend-errors build
RUN pnpm --filter @sandbox-agent/foundry-frontend build
FROM nginx:1.27-alpine


@ -1,4 +1,4 @@
# Factory Cloud
# Foundry Cloud
## Mock Server
@ -8,5 +8,5 @@ A detached `tmux` session is acceptable for this. Example:
```bash
tmux new-session -d -s mock-ui-4180 \
'cd /Users/nathan/conductor/workspaces/sandbox-agent/provo && OPENHANDOFF_FRONTEND_CLIENT_MODE=mock pnpm --filter @openhandoff/frontend exec vite --host localhost --port 4180'
'cd /Users/nathan/conductor/workspaces/sandbox-agent/provo && FOUNDRY_FRONTEND_CLIENT_MODE=mock pnpm --filter @sandbox-agent/foundry-frontend exec vite --host localhost --port 4180'
```


@ -7,7 +7,7 @@
### claude code/opencode
1. "handoff this task to do xxxx"
1. "task this task to do xxxx"
2. ask clarifying questions
3. works in background (attach opencode session with `hf attach` and switch to session with `hf switch`)
4. automatically submits draft pr (if configured)
@ -62,7 +62,7 @@
- model (for the agent)
- todo list & plan management -> with simplenote sync
- sqlite (global)
- list of all global handoff repos
- list of all global task repos
- heartbeat status to tell openclaw what it needs to send you
- sandbox agent sdk support
- serve command to run server
@ -78,5 +78,5 @@
- automatically uses your opencode theme
- auto symlink target/node_modules/etc
- auto-archives handoffs when closed
- auto-archives tasks when closed
- shows agent status in the tmux window name


@ -10,10 +10,10 @@ WorkspaceActor
├─ ProjectActor(repo)
│ ├─ ProjectBranchSyncActor
│ ├─ ProjectPrSyncActor
│ └─ HandoffActor(handoff)
│ ├─ HandoffSessionActor(session) × N
│ └─ TaskActor(task)
│ ├─ TaskSessionActor(session) × N
│ │ └─ SessionStatusSyncActor(session) × 0..1
│ └─ Handoff-local workbench state
│ └─ Task-local workbench state
└─ SandboxInstanceActor(providerId, sandboxId) × N
```
@ -22,12 +22,12 @@ WorkspaceActor
- `WorkspaceActor` is the workspace coordinator and lookup/index owner.
- `HistoryActor` is workspace-scoped. There is one workspace-level history feed.
- `ProjectActor` is the repo coordinator and owns repo-local caches/indexes.
- `HandoffActor` is one branch. Treat `1 handoff = 1 branch` once branch assignment is finalized.
- `HandoffActor` can have many sessions.
- `HandoffActor` can reference many sandbox instances historically, but should have only one active sandbox/session at a time.
- `TaskActor` is one branch. Treat `1 task = 1 branch` once branch assignment is finalized.
- `TaskActor` can have many sessions.
- `TaskActor` can reference many sandbox instances historically, but should have only one active sandbox/session at a time.
- Session unread state and draft prompts are backend-owned workbench state, not frontend-local state.
- Branch rename is a real git operation, not just metadata.
- `SandboxInstanceActor` stays separate from `HandoffActor`; handoffs/sessions reference it by identity.
- `SandboxInstanceActor` stays separate from `TaskActor`; tasks/sessions reference it by identity.
- Sync actors are polling workers only. They feed parent actors and should not become the source of truth.
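The `taskId` identity threaded through this hierarchy appears in every renamed event payload. A minimal sketch of the renamed `TaskStatusEvent` shape (the `TaskStatus` union here is narrowed to illustrative values; the real union lives in `@sandbox-agent/foundry-shared`):

```typescript
// Illustrative subset only; not the real TaskStatus union from foundry-shared.
type TaskStatus = "running" | "idle" | "error";

// Mirrors the renamed TaskStatusEvent interface (formerly HandoffStatusEvent).
interface TaskStatusEvent {
	workspaceId: string;
	repoId: string;
	taskId: string;
	status: TaskStatus;
	message: string;
}

// Hypothetical sample payload, assuming illustrative identifiers.
const event: TaskStatusEvent = {
	workspaceId: "ws-1",
	repoId: "repo-1",
	taskId: "task-1",
	status: "idle",
	message: "agent finished",
};

console.log(event.taskId); // task-1
```

Every event carries the `(workspaceId, repoId, taskId)` triple explicitly because events cross actor boundaries, unlike actor-local SQLite rows.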
## Maintenance


@ -1,12 +1,12 @@
{
"name": "@openhandoff/backend",
"name": "@sandbox-agent/foundry-backend",
"version": "0.1.0",
"private": true,
"type": "module",
"main": "dist/index.js",
"types": "dist/index.d.ts",
"scripts": {
"build": "tsup src/index.ts --format esm --external bun:sqlite",
"build": "tsup src/index.ts --format esm",
"db:generate": "find src/actors -name drizzle.config.ts -exec pnpm exec drizzle-kit generate --config {} \\; && \"$HOME/.bun/bin/bun\" src/actors/_scripts/generate-actor-migrations.ts",
"typecheck": "tsc --noEmit",
"test": "$HOME/.bun/bin/bun x vitest run",
@ -17,7 +17,7 @@
"@hono/node-server": "^1.19.7",
"@hono/node-ws": "^1.3.0",
"@iarna/toml": "^2.2.5",
"@openhandoff/shared": "workspace:*",
"@sandbox-agent/foundry-shared": "workspace:*",
"@sandbox-agent/persist-rivet": "workspace:*",
"drizzle-orm": "^0.44.5",
"hono": "^4.11.9",


@ -1,18 +1,27 @@
import type { AppConfig } from "@openhandoff/shared";
import type { AppConfig } from "@sandbox-agent/foundry-shared";
import type { BackendDriver } from "../driver.js";
import type { NotificationService } from "../notifications/index.js";
import type { ProviderRegistry } from "../providers/index.js";
import type { AppShellServices } from "../services/app-shell-runtime.js";
let runtimeConfig: AppConfig | null = null;
let providerRegistry: ProviderRegistry | null = null;
let notificationService: NotificationService | null = null;
let runtimeDriver: BackendDriver | null = null;
let appShellServices: AppShellServices | null = null;
export function initActorRuntimeContext(config: AppConfig, providers: ProviderRegistry, notifications?: NotificationService, driver?: BackendDriver): void {
export function initActorRuntimeContext(
config: AppConfig,
providers: ProviderRegistry,
notifications?: NotificationService,
driver?: BackendDriver,
appShell?: AppShellServices,
): void {
runtimeConfig = config;
providerRegistry = providers;
notificationService = notifications ?? null;
runtimeDriver = driver ?? null;
appShellServices = appShell ?? null;
}
export function getActorRuntimeContext(): {
@ -20,6 +29,7 @@ export function getActorRuntimeContext(): {
providers: ProviderRegistry;
notifications: NotificationService | null;
driver: BackendDriver;
appShell: AppShellServices;
} {
if (!runtimeConfig || !providerRegistry) {
throw new Error("Actor runtime context not initialized");
@ -29,10 +39,15 @@ export function getActorRuntimeContext(): {
throw new Error("Actor runtime context missing driver");
}
if (!appShellServices) {
throw new Error("Actor runtime context missing app shell services");
}
return {
config: runtimeConfig,
providers: providerRegistry,
notifications: notificationService,
driver: runtimeDriver,
appShell: appShellServices,
};
}


@ -1,19 +1,19 @@
import type { HandoffStatus, ProviderId } from "@openhandoff/shared";
import type { TaskStatus, ProviderId } from "@sandbox-agent/foundry-shared";
export interface HandoffCreatedEvent {
export interface TaskCreatedEvent {
workspaceId: string;
repoId: string;
handoffId: string;
taskId: string;
providerId: ProviderId;
branchName: string;
title: string;
}
export interface HandoffStatusEvent {
export interface TaskStatusEvent {
workspaceId: string;
repoId: string;
handoffId: string;
status: HandoffStatus;
taskId: string;
status: TaskStatus;
message: string;
}
@ -26,28 +26,28 @@ export interface ProjectSnapshotEvent {
export interface AgentStartedEvent {
workspaceId: string;
repoId: string;
handoffId: string;
taskId: string;
sessionId: string;
}
export interface AgentIdleEvent {
workspaceId: string;
repoId: string;
handoffId: string;
taskId: string;
sessionId: string;
}
export interface AgentErrorEvent {
workspaceId: string;
repoId: string;
handoffId: string;
taskId: string;
message: string;
}
export interface PrCreatedEvent {
workspaceId: string;
repoId: string;
handoffId: string;
taskId: string;
prNumber: number;
url: string;
}
@ -55,7 +55,7 @@ export interface PrCreatedEvent {
export interface PrClosedEvent {
workspaceId: string;
repoId: string;
handoffId: string;
taskId: string;
prNumber: number;
merged: boolean;
}
@ -63,7 +63,7 @@ export interface PrClosedEvent {
export interface PrReviewEvent {
workspaceId: string;
repoId: string;
handoffId: string;
taskId: string;
prNumber: number;
reviewer: string;
status: string;
@ -72,41 +72,41 @@ export interface PrReviewEvent {
export interface CiStatusChangedEvent {
workspaceId: string;
repoId: string;
handoffId: string;
taskId: string;
prNumber: number;
status: string;
}
export type HandoffStepName = "auto_commit" | "push" | "pr_submit";
export type HandoffStepStatus = "started" | "completed" | "skipped" | "failed";
export type TaskStepName = "auto_commit" | "push" | "pr_submit";
export type TaskStepStatus = "started" | "completed" | "skipped" | "failed";
export interface HandoffStepEvent {
export interface TaskStepEvent {
workspaceId: string;
repoId: string;
handoffId: string;
step: HandoffStepName;
status: HandoffStepStatus;
taskId: string;
step: TaskStepName;
status: TaskStepStatus;
message: string;
}
export interface BranchSwitchedEvent {
workspaceId: string;
repoId: string;
handoffId: string;
taskId: string;
branchName: string;
}
export interface SessionAttachedEvent {
workspaceId: string;
repoId: string;
handoffId: string;
taskId: string;
sessionId: string;
}
export interface BranchSyncedEvent {
workspaceId: string;
repoId: string;
handoffId: string;
taskId: string;
branchName: string;
strategy: string;
}


@ -1,5 +1,5 @@
import { handoffKey, handoffStatusSyncKey, historyKey, projectBranchSyncKey, projectKey, projectPrSyncKey, sandboxInstanceKey, workspaceKey } from "./keys.js";
import type { ProviderId } from "@openhandoff/shared";
import { taskKey, taskStatusSyncKey, historyKey, projectBranchSyncKey, projectKey, projectPrSyncKey, sandboxInstanceKey, workspaceKey } from "./keys.js";
import type { ProviderId } from "@sandbox-agent/foundry-shared";
export function actorClient(c: any) {
return c.client();
@ -25,12 +25,12 @@ export function getProject(c: any, workspaceId: string, repoId: string) {
return actorClient(c).project.get(projectKey(workspaceId, repoId));
}
export function getHandoff(c: any, workspaceId: string, repoId: string, handoffId: string) {
return actorClient(c).handoff.get(handoffKey(workspaceId, repoId, handoffId));
export function getTask(c: any, workspaceId: string, repoId: string, taskId: string) {
return actorClient(c).task.get(taskKey(workspaceId, repoId, taskId));
}
export async function getOrCreateHandoff(c: any, workspaceId: string, repoId: string, handoffId: string, createWithInput: Record<string, unknown>) {
return await actorClient(c).handoff.getOrCreate(handoffKey(workspaceId, repoId, handoffId), {
export async function getOrCreateTask(c: any, workspaceId: string, repoId: string, taskId: string, createWithInput: Record<string, unknown>) {
return await actorClient(c).task.getOrCreate(taskKey(workspaceId, repoId, taskId), {
createWithInput,
});
}
@ -80,16 +80,16 @@ export async function getOrCreateSandboxInstance(
return await actorClient(c).sandboxInstance.getOrCreate(sandboxInstanceKey(workspaceId, providerId, sandboxId), { createWithInput });
}
export async function getOrCreateHandoffStatusSync(
export async function getOrCreateTaskStatusSync(
c: any,
workspaceId: string,
repoId: string,
handoffId: string,
taskId: string,
sandboxId: string,
sessionId: string,
createWithInput: Record<string, unknown>,
) {
return await actorClient(c).handoffStatusSync.getOrCreate(handoffStatusSyncKey(workspaceId, repoId, handoffId, sandboxId, sessionId), {
return await actorClient(c).taskStatusSync.getOrCreate(taskStatusSyncKey(workspaceId, repoId, taskId, sandboxId, sessionId), {
createWithInput,
});
}
@ -102,16 +102,16 @@ export function selfProjectBranchSync(c: any) {
return actorClient(c).projectBranchSync.getForId(c.actorId);
}
export function selfHandoffStatusSync(c: any) {
return actorClient(c).handoffStatusSync.getForId(c.actorId);
export function selfTaskStatusSync(c: any) {
return actorClient(c).taskStatusSync.getForId(c.actorId);
}
export function selfHistory(c: any) {
return actorClient(c).history.getForId(c.actorId);
}
export function selfHandoff(c: any) {
return actorClient(c).handoff.getForId(c.actorId);
export function selfTask(c: any) {
return actorClient(c).task.getForId(c.actorId);
}
export function selfWorkspace(c: any) {


@ -0,0 +1,5 @@
import { db } from "rivetkit/db/drizzle";
import * as schema from "./schema.js";
import migrations from "./migrations.js";
export const historyDb = db({ schema, migrations });


@ -1,6 +1,6 @@
CREATE TABLE `events` (
`id` integer PRIMARY KEY AUTOINCREMENT NOT NULL,
`handoff_id` text,
`task_id` text,
`branch_name` text,
`kind` text NOT NULL,
`payload_json` text NOT NULL,


@ -14,8 +14,8 @@
"notNull": true,
"autoincrement": true
},
"handoff_id": {
"name": "handoff_id",
"task_id": {
"name": "task_id",
"type": "text",
"primaryKey": false,
"notNull": false,


@ -18,7 +18,7 @@ export default {
migrations: {
m0000: `CREATE TABLE \`events\` (
\`id\` integer PRIMARY KEY AUTOINCREMENT NOT NULL,
\`handoff_id\` text,
\`task_id\` text,
\`branch_name\` text,
\`kind\` text NOT NULL,
\`payload_json\` text NOT NULL,


@ -2,7 +2,7 @@ import { integer, sqliteTable, text } from "rivetkit/db/drizzle";
export const events = sqliteTable("events", {
id: integer("id").primaryKey({ autoIncrement: true }),
handoffId: text("handoff_id"),
taskId: text("task_id"),
branchName: text("branch_name"),
kind: text("kind").notNull(),
payloadJson: text("payload_json").notNull(),


@ -2,7 +2,7 @@
import { and, desc, eq } from "drizzle-orm";
import { actor, queue } from "rivetkit";
import { Loop, workflow } from "rivetkit/workflow";
import type { HistoryEvent } from "@openhandoff/shared";
import type { HistoryEvent } from "@sandbox-agent/foundry-shared";
import { selfHistory } from "../handles.js";
import { historyDb } from "./db/db.js";
import { events } from "./db/schema.js";
@ -14,14 +14,14 @@ export interface HistoryInput {
export interface AppendHistoryCommand {
kind: string;
handoffId?: string;
taskId?: string;
branchName?: string;
payload: Record<string, unknown>;
}
export interface ListHistoryParams {
branch?: string;
handoffId?: string;
taskId?: string;
limit?: number;
}
@ -32,7 +32,7 @@ async function appendHistoryRow(loopCtx: any, body: AppendHistoryCommand): Promi
await loopCtx.db
.insert(events)
.values({
handoffId: body.handoffId ?? null,
taskId: body.taskId ?? null,
branchName: body.branchName ?? null,
kind: body.kind,
payloadJson: JSON.stringify(body.payload),
@ -77,8 +77,8 @@ export const history = actor({
async list(c, params?: ListHistoryParams): Promise<HistoryEvent[]> {
const whereParts = [];
if (params?.handoffId) {
whereParts.push(eq(events.handoffId, params.handoffId));
if (params?.taskId) {
whereParts.push(eq(events.taskId, params.taskId));
}
if (params?.branch) {
whereParts.push(eq(events.branchName, params.branch));
@ -87,7 +87,7 @@ export const history = actor({
const base = c.db
.select({
id: events.id,
handoffId: events.handoffId,
taskId: events.taskId,
branchName: events.branchName,
kind: events.kind,
payloadJson: events.payloadJson,


@ -1,6 +1,6 @@
import { setup } from "rivetkit";
import { handoffStatusSync } from "./handoff-status-sync/index.js";
import { handoff } from "./handoff/index.js";
import { taskStatusSync } from "./task-status-sync/index.js";
import { task } from "./task/index.js";
import { history } from "./history/index.js";
import { projectBranchSync } from "./project-branch-sync/index.js";
import { projectPrSync } from "./project-pr-sync/index.js";
@@ -8,7 +8,7 @@ import { project } from "./project/index.js";
import { sandboxInstance } from "./sandbox-instance/index.js";
import { workspace } from "./workspace/index.js";
function resolveManagerPort(): number {
export function resolveManagerPort(): number {
const raw = process.env.HF_RIVET_MANAGER_PORT ?? process.env.RIVETKIT_MANAGER_PORT;
if (!raw) {
return 7750;
@@ -30,12 +30,12 @@ export const registry = setup({
use: {
workspace,
project,
handoff,
task,
sandboxInstance,
history,
projectPrSync,
projectBranchSync,
handoffStatusSync,
taskStatusSync,
},
managerPort: resolveManagerPort(),
managerHost: resolveManagerHost(),
@@ -43,8 +43,8 @@ export const registry = setup({
export * from "./context.js";
export * from "./events.js";
export * from "./handoff-status-sync/index.js";
export * from "./handoff/index.js";
export * from "./task-status-sync/index.js";
export * from "./task/index.js";
export * from "./history/index.js";
export * from "./keys.js";
export * from "./project-branch-sync/index.js";
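The registry above also newly exports `resolveManagerPort`, which prefers `HF_RIVET_MANAGER_PORT`, falls back to `RIVETKIT_MANAGER_PORT`, and defaults to 7750. The diff truncates the parsing branch, so this self-contained sketch fills that part in with an assumed guard:

```typescript
// Sketch of the env-override pattern behind resolveManagerPort.
// The parse/validation branch is an assumption; the diff only shows
// the default path returning 7750 when neither variable is set.
function resolvePort(env: Record<string, string | undefined>): number {
  const raw = env.HF_RIVET_MANAGER_PORT ?? env.RIVETKIT_MANAGER_PORT;
  if (!raw) {
    return 7750;
  }
  // Assumed guard: reject non-numeric or non-positive values.
  const parsed = Number.parseInt(raw, 10);
  return Number.isInteger(parsed) && parsed > 0 ? parsed : 7750;
}

console.log(resolvePort({})); // 7750
console.log(resolvePort({ RIVETKIT_MANAGER_PORT: "8080" })); // 8080
console.log(resolvePort({ HF_RIVET_MANAGER_PORT: "7000", RIVETKIT_MANAGER_PORT: "8080" })); // 7000
```

Exporting the helper lets other modules (for example, health checks or tests) agree with the registry on which port the manager binds.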

@@ -8,8 +8,8 @@ export function projectKey(workspaceId: string, repoId: string): ActorKey {
return ["ws", workspaceId, "project", repoId];
}
export function handoffKey(workspaceId: string, repoId: string, handoffId: string): ActorKey {
return ["ws", workspaceId, "project", repoId, "handoff", handoffId];
export function taskKey(workspaceId: string, repoId: string, taskId: string): ActorKey {
return ["ws", workspaceId, "project", repoId, "task", taskId];
}
export function sandboxInstanceKey(workspaceId: string, providerId: string, sandboxId: string): ActorKey {
@@ -28,7 +28,7 @@ export function projectBranchSyncKey(workspaceId: string, repoId: string): Actor
return ["ws", workspaceId, "project", repoId, "branch-sync"];
}
export function handoffStatusSyncKey(workspaceId: string, repoId: string, handoffId: string, sandboxId: string, sessionId: string): ActorKey {
// Include sandbox + session so multiple sandboxes/sessions can be tracked per handoff.
return ["ws", workspaceId, "project", repoId, "handoff", handoffId, "status-sync", sandboxId, sessionId];
export function taskStatusSyncKey(workspaceId: string, repoId: string, taskId: string, sandboxId: string, sessionId: string): ActorKey {
// Include sandbox + session so multiple sandboxes/sessions can be tracked per task.
return ["ws", workspaceId, "project", repoId, "task", taskId, "status-sync", sandboxId, sessionId];
}
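The renamed key helpers compose hierarchical actor keys from path segments. A minimal self-contained sketch (signatures copied from the diff; `ActorKey` is assumed to alias a string array, since the diff imports it from rivetkit):

```typescript
// ActorKey is assumed to be string[]; the real type comes from rivetkit.
type ActorKey = string[];

function taskKey(workspaceId: string, repoId: string, taskId: string): ActorKey {
  return ["ws", workspaceId, "project", repoId, "task", taskId];
}

function taskStatusSyncKey(workspaceId: string, repoId: string, taskId: string, sandboxId: string, sessionId: string): ActorKey {
  // Include sandbox + session so multiple sandboxes/sessions can be tracked per task.
  return ["ws", workspaceId, "project", repoId, "task", taskId, "status-sync", sandboxId, sessionId];
}

console.log(taskKey("w1", "r1", "t1").join("/"));
// ws/w1/project/r1/task/t1
```

Because the status-sync key extends the task key, every sync actor sorts under its owning task in the key hierarchy.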

@@ -23,5 +23,5 @@ export function logActorWarning(scope: string, message: string, context?: Record
...(context ?? {}),
};
// eslint-disable-next-line no-console
console.warn("[openhandoff][actor:warn]", payload);
console.warn("[foundry][actor:warn]", payload);
}

@@ -2,14 +2,14 @@
import { randomUUID } from "node:crypto";
import { and, desc, eq, isNotNull, ne } from "drizzle-orm";
import { Loop } from "rivetkit/workflow";
import type { AgentType, HandoffRecord, HandoffSummary, ProviderId, RepoOverview, RepoStackAction, RepoStackActionResult } from "@openhandoff/shared";
import type { AgentType, TaskRecord, TaskSummary, ProviderId, RepoOverview, RepoStackAction, RepoStackActionResult } from "@sandbox-agent/foundry-shared";
import { getActorRuntimeContext } from "../context.js";
import { getHandoff, getOrCreateHandoff, getOrCreateHistory, getOrCreateProjectBranchSync, getOrCreateProjectPrSync, selfProject } from "../handles.js";
import { getTask, getOrCreateTask, getOrCreateHistory, getOrCreateProjectBranchSync, getOrCreateProjectPrSync, selfProject } from "../handles.js";
import { isActorNotFoundError, logActorWarning, resolveErrorMessage } from "../logging.js";
import { openhandoffRepoClonePath } from "../../services/openhandoff-paths.js";
import { foundryRepoClonePath } from "../../services/foundry-paths.js";
import { expectQueueResponse } from "../../services/queue.js";
import { withRepoGitLock } from "../../services/repo-git-lock.js";
import { branches, handoffIndex, prCache, repoMeta } from "./db/schema.js";
import { branches, taskIndex, prCache, repoMeta } from "./db/schema.js";
import { deriveFallbackTitle } from "../../services/create-flow.js";
import { normalizeBaseBranchName } from "../../integrations/git-spice/index.js";
import { sortBranchesForOverview } from "./stack-model.js";
@@ -22,7 +22,7 @@ interface EnsureProjectResult {
localPath: string;
}
interface CreateHandoffCommand {
interface CreateTaskCommand {
task: string;
providerId: ProviderId;
agentType: AgentType | null;
@@ -32,22 +32,22 @@ interface CreateHandoffCommand {
onBranch: string | null;
}
interface HydrateHandoffIndexCommand {}
interface HydrateTaskIndexCommand {}
interface ListReservedBranchesCommand {}
interface RegisterHandoffBranchCommand {
handoffId: string;
interface RegisterTaskBranchCommand {
taskId: string;
branchName: string;
requireExistingRemote?: boolean;
}
interface ListHandoffSummariesCommand {
interface ListTaskSummariesCommand {
includeArchived?: boolean;
}
interface GetHandoffEnrichedCommand {
handoffId: string;
interface GetTaskEnrichedCommand {
taskId: string;
}
interface GetPullRequestForBranchCommand {
@@ -93,9 +93,9 @@ interface RunRepoStackActionCommand {
const PROJECT_QUEUE_NAMES = [
"project.command.ensure",
"project.command.hydrateHandoffIndex",
"project.command.createHandoff",
"project.command.registerHandoffBranch",
"project.command.hydrateTaskIndex",
"project.command.createTask",
"project.command.registerTaskBranch",
"project.command.runRepoStackAction",
"project.command.applyPrSyncResult",
"project.command.applyBranchSyncResult",
@@ -111,7 +111,7 @@ export function projectWorkflowQueueName(name: ProjectQueueName): ProjectQueueNa
async function ensureLocalClone(c: any, remoteUrl: string): Promise<string> {
const { config, driver } = getActorRuntimeContext();
const localPath = openhandoffRepoClonePath(config, c.state.workspaceId, c.state.repoId);
const localPath = foundryRepoClonePath(config, c.state.workspaceId, c.state.repoId);
await driver.git.ensureCloned(remoteUrl, localPath);
c.state.localPath = localPath;
return localPath;
@@ -131,59 +131,59 @@ async function ensureProjectSyncActors(c: any, localPath: string): Promise<void>
c.state.syncActorsStarted = true;
}
async function deleteStaleHandoffIndexRow(c: any, handoffId: string): Promise<void> {
async function deleteStaleTaskIndexRow(c: any, taskId: string): Promise<void> {
try {
await c.db.delete(handoffIndex).where(eq(handoffIndex.handoffId, handoffId)).run();
await c.db.delete(taskIndex).where(eq(taskIndex.taskId, taskId)).run();
} catch {
// Best-effort cleanup only; preserve the original caller flow.
}
}
function isStaleHandoffReferenceError(error: unknown): boolean {
function isStaleTaskReferenceError(error: unknown): boolean {
const message = resolveErrorMessage(error);
return isActorNotFoundError(error) || message.startsWith("Handoff not found:");
return isActorNotFoundError(error) || message.startsWith("Task not found:");
}
async function ensureHandoffIndexHydrated(c: any): Promise<void> {
if (c.state.handoffIndexHydrated) {
async function ensureTaskIndexHydrated(c: any): Promise<void> {
if (c.state.taskIndexHydrated) {
return;
}
const existing = await c.db.select({ handoffId: handoffIndex.handoffId }).from(handoffIndex).limit(1).get();
const existing = await c.db.select({ taskId: taskIndex.taskId }).from(taskIndex).limit(1).get();
if (existing) {
c.state.handoffIndexHydrated = true;
c.state.taskIndexHydrated = true;
return;
}
// Migration path for old project actors that only tracked handoffs in history.
// Migration path for old project actors that only tracked tasks in history.
try {
const history = await getOrCreateHistory(c, c.state.workspaceId, c.state.repoId);
const rows = await history.list({ limit: 5_000 });
const seen = new Set<string>();
let skippedMissingHandoffActors = 0;
let skippedMissingTaskActors = 0;
for (const row of rows) {
if (!row.handoffId || seen.has(row.handoffId)) {
if (!row.taskId || seen.has(row.taskId)) {
continue;
}
seen.add(row.handoffId);
seen.add(row.taskId);
try {
const h = getHandoff(c, c.state.workspaceId, c.state.repoId, row.handoffId);
const h = getTask(c, c.state.workspaceId, c.state.repoId, row.taskId);
await h.get();
} catch (error) {
if (isStaleHandoffReferenceError(error)) {
skippedMissingHandoffActors += 1;
if (isStaleTaskReferenceError(error)) {
skippedMissingTaskActors += 1;
continue;
}
throw error;
}
await c.db
.insert(handoffIndex)
.insert(taskIndex)
.values({
handoffId: row.handoffId,
taskId: row.taskId,
branchName: row.branchName,
createdAt: row.createdAt,
updatedAt: row.createdAt,
@@ -192,22 +192,22 @@ async function ensureHandoffIndexHydrated(c: any): Promise<void> {
.run();
}
if (skippedMissingHandoffActors > 0) {
logActorWarning("project", "skipped missing handoffs while hydrating index", {
if (skippedMissingTaskActors > 0) {
logActorWarning("project", "skipped missing tasks while hydrating index", {
workspaceId: c.state.workspaceId,
repoId: c.state.repoId,
skippedMissingHandoffActors,
skippedMissingTaskActors,
});
}
} catch (error) {
logActorWarning("project", "handoff index hydration from history failed", {
logActorWarning("project", "task index hydration from history failed", {
workspaceId: c.state.workspaceId,
repoId: c.state.repoId,
error: resolveErrorMessage(error),
});
}
c.state.handoffIndexHydrated = true;
c.state.taskIndexHydrated = true;
}
async function ensureProjectReady(c: any): Promise<string> {
@@ -241,11 +241,11 @@ async function ensureProjectReadyForRead(c: any): Promise<string> {
return c.state.localPath;
}
async function ensureHandoffIndexHydratedForRead(c: any): Promise<void> {
if (c.state.handoffIndexHydrated) {
async function ensureTaskIndexHydratedForRead(c: any): Promise<void> {
if (c.state.taskIndexHydrated) {
return;
}
await projectActions.hydrateHandoffIndex(c, {});
await projectActions.hydrateTaskIndex(c, {});
}
async function forceProjectSync(c: any, localPath: string): Promise<void> {
@@ -256,7 +256,7 @@ async function forceProjectSync(c: any, localPath: string): Promise<void> {
await branchSync.force();
}
async function enrichHandoffRecord(c: any, record: HandoffRecord): Promise<HandoffRecord> {
async function enrichTaskRecord(c: any, record: TaskRecord): Promise<TaskRecord> {
const branchName = record.branchName;
const br =
branchName != null
@@ -325,16 +325,16 @@ async function ensureProjectMutation(c: any, cmd: EnsureProjectCommand): Promise
return { localPath };
}
async function hydrateHandoffIndexMutation(c: any, _cmd?: HydrateHandoffIndexCommand): Promise<void> {
await ensureHandoffIndexHydrated(c);
async function hydrateTaskIndexMutation(c: any, _cmd?: HydrateTaskIndexCommand): Promise<void> {
await ensureTaskIndexHydrated(c);
}
async function createHandoffMutation(c: any, cmd: CreateHandoffCommand): Promise<HandoffRecord> {
async function createTaskMutation(c: any, cmd: CreateTaskCommand): Promise<TaskRecord> {
const localPath = await ensureProjectReady(c);
const onBranch = cmd.onBranch?.trim() || null;
const initialBranchName = onBranch;
const initialTitle = onBranch ? deriveFallbackTitle(cmd.task, cmd.explicitTitle ?? undefined) : null;
const handoffId = randomUUID();
const taskId = randomUUID();
if (onBranch) {
await forceProjectSync(c, localPath);
@@ -344,19 +344,19 @@ async function createHandoffMutation(c: any, cmd: CreateHandoffCommand): Promise
throw new Error(`Branch not found in repo snapshot: ${onBranch}`);
}
await registerHandoffBranchMutation(c, {
handoffId,
await registerTaskBranchMutation(c, {
taskId,
branchName: onBranch,
requireExistingRemote: true,
});
}
let handoff: Awaited<ReturnType<typeof getOrCreateHandoff>>;
let task: Awaited<ReturnType<typeof getOrCreateTask>>;
try {
handoff = await getOrCreateHandoff(c, c.state.workspaceId, c.state.repoId, handoffId, {
task = await getOrCreateTask(c, c.state.workspaceId, c.state.repoId, taskId, {
workspaceId: c.state.workspaceId,
repoId: c.state.repoId,
handoffId,
taskId,
repoRemote: c.state.remoteUrl,
repoLocalPath: localPath,
branchName: initialBranchName,
@@ -371,8 +371,8 @@ async function createHandoffMutation(c: any, cmd: CreateHandoffCommand): Promise
} catch (error) {
if (onBranch) {
await c.db
.delete(handoffIndex)
.where(eq(handoffIndex.handoffId, handoffId))
.delete(taskIndex)
.where(eq(taskIndex.taskId, taskId))
.run()
.catch(() => {});
}
@@ -382,9 +382,9 @@ async function createHandoffMutation(c: any, cmd: CreateHandoffCommand): Promise
if (!onBranch) {
const now = Date.now();
await c.db
.insert(handoffIndex)
.insert(taskIndex)
.values({
handoffId,
taskId,
branchName: initialBranchName,
createdAt: now,
updatedAt: now,
@@ -393,12 +393,12 @@ async function createHandoffMutation(c: any, cmd: CreateHandoffCommand): Promise
.run();
}
const created = await handoff.initialize({ providerId: cmd.providerId });
const created = await task.initialize({ providerId: cmd.providerId });
const history = await getOrCreateHistory(c, c.state.workspaceId, c.state.repoId);
await history.append({
kind: "handoff.created",
handoffId,
kind: "task.created",
taskId,
payload: {
repoId: c.state.repoId,
providerId: cmd.providerId,
@@ -408,7 +408,7 @@ async function createHandoffMutation(c: any, cmd: CreateHandoffCommand): Promise
return created;
}
async function registerHandoffBranchMutation(c: any, cmd: RegisterHandoffBranchCommand): Promise<{ branchName: string; headSha: string }> {
async function registerTaskBranchMutation(c: any, cmd: RegisterTaskBranchCommand): Promise<{ branchName: string; headSha: string }> {
const localPath = await ensureProjectReady(c);
const branchName = cmd.branchName.trim();
@@ -417,27 +417,27 @@ async function registerHandoffBranchMutation(c: any, cmd: RegisterHandoffBranchC
throw new Error("branchName is required");
}
await ensureHandoffIndexHydrated(c);
await ensureTaskIndexHydrated(c);
const existingOwner = await c.db
.select({ handoffId: handoffIndex.handoffId })
.from(handoffIndex)
.where(and(eq(handoffIndex.branchName, branchName), ne(handoffIndex.handoffId, cmd.handoffId)))
.select({ taskId: taskIndex.taskId })
.from(taskIndex)
.where(and(eq(taskIndex.branchName, branchName), ne(taskIndex.taskId, cmd.taskId)))
.get();
if (existingOwner) {
let ownerMissing = false;
try {
const h = getHandoff(c, c.state.workspaceId, c.state.repoId, existingOwner.handoffId);
const h = getTask(c, c.state.workspaceId, c.state.repoId, existingOwner.taskId);
await h.get();
} catch (error) {
if (isStaleHandoffReferenceError(error)) {
if (isStaleTaskReferenceError(error)) {
ownerMissing = true;
await deleteStaleHandoffIndexRow(c, existingOwner.handoffId);
logActorWarning("project", "pruned stale handoff index row during branch registration", {
await deleteStaleTaskIndexRow(c, existingOwner.taskId);
logActorWarning("project", "pruned stale task index row during branch registration", {
workspaceId: c.state.workspaceId,
repoId: c.state.repoId,
handoffId: existingOwner.handoffId,
taskId: existingOwner.taskId,
branchName,
});
} else {
@@ -445,7 +445,7 @@ async function registerHandoffBranchMutation(c: any, cmd: RegisterHandoffBranchC
}
}
if (!ownerMissing) {
throw new Error(`branch is already assigned to a different handoff: ${branchName}`);
throw new Error(`branch is already assigned to a different task: ${branchName}`);
}
}
@@ -525,15 +525,15 @@ async function registerHandoffBranchMutation(c: any, cmd: RegisterHandoffBranchC
.run();
await c.db
.insert(handoffIndex)
.insert(taskIndex)
.values({
handoffId: cmd.handoffId,
taskId: cmd.taskId,
branchName,
createdAt: now,
updatedAt: now,
})
.onConflictDoUpdate({
target: handoffIndex.handoffId,
target: taskIndex.taskId,
set: {
branchName,
updatedAt: now,
@@ -546,7 +546,7 @@ async function registerHandoffBranchMutation(c: any, cmd: RegisterHandoffBranchC
async function runRepoStackActionMutation(c: any, cmd: RunRepoStackActionCommand): Promise<RepoStackActionResult> {
const localPath = await ensureProjectReady(c);
await ensureHandoffIndexHydrated(c);
await ensureTaskIndexHydrated(c);
const { driver } = getActorRuntimeContext();
const at = Date.now();
@@ -682,30 +682,30 @@ async function applyPrSyncResultMutation(c: any, body: PrSyncResult): Promise<vo
continue;
}
const row = await c.db.select({ handoffId: handoffIndex.handoffId }).from(handoffIndex).where(eq(handoffIndex.branchName, item.headRefName)).get();
const row = await c.db.select({ taskId: taskIndex.taskId }).from(taskIndex).where(eq(taskIndex.branchName, item.headRefName)).get();
if (!row) {
continue;
}
try {
const h = getHandoff(c, c.state.workspaceId, c.state.repoId, row.handoffId);
const h = getTask(c, c.state.workspaceId, c.state.repoId, row.taskId);
await h.archive({ reason: `PR ${item.state.toLowerCase()}` });
} catch (error) {
if (isStaleHandoffReferenceError(error)) {
await deleteStaleHandoffIndexRow(c, row.handoffId);
logActorWarning("project", "pruned stale handoff index row during PR close archive", {
if (isStaleTaskReferenceError(error)) {
await deleteStaleTaskIndexRow(c, row.taskId);
logActorWarning("project", "pruned stale task index row during PR close archive", {
workspaceId: c.state.workspaceId,
repoId: c.state.repoId,
handoffId: row.handoffId,
taskId: row.taskId,
branchName: item.headRefName,
prState: item.state,
});
continue;
}
logActorWarning("project", "failed to auto-archive handoff after PR close", {
logActorWarning("project", "failed to auto-archive task after PR close", {
workspaceId: c.state.workspaceId,
repoId: c.state.repoId,
handoffId: row.handoffId,
taskId: row.taskId,
branchName: item.headRefName,
prState: item.state,
error: resolveErrorMessage(error),
@@ -787,27 +787,27 @@ export async function runProjectWorkflow(ctx: any): Promise<void> {
return Loop.continue(undefined);
}
if (msg.name === "project.command.hydrateHandoffIndex") {
await loopCtx.step("project-hydrate-handoff-index", async () => hydrateHandoffIndexMutation(loopCtx, msg.body as HydrateHandoffIndexCommand));
if (msg.name === "project.command.hydrateTaskIndex") {
await loopCtx.step("project-hydrate-task-index", async () => hydrateTaskIndexMutation(loopCtx, msg.body as HydrateTaskIndexCommand));
await msg.complete({ ok: true });
return Loop.continue(undefined);
}
if (msg.name === "project.command.createHandoff") {
if (msg.name === "project.command.createTask") {
const result = await loopCtx.step({
name: "project-create-handoff",
name: "project-create-task",
timeout: 12 * 60_000,
run: async () => createHandoffMutation(loopCtx, msg.body as CreateHandoffCommand),
run: async () => createTaskMutation(loopCtx, msg.body as CreateTaskCommand),
});
await msg.complete(result);
return Loop.continue(undefined);
}
if (msg.name === "project.command.registerHandoffBranch") {
if (msg.name === "project.command.registerTaskBranch") {
const result = await loopCtx.step({
name: "project-register-handoff-branch",
name: "project-register-task-branch",
timeout: 5 * 60_000,
run: async () => registerHandoffBranchMutation(loopCtx, msg.body as RegisterHandoffBranchCommand),
run: async () => registerTaskBranchMutation(loopCtx, msg.body as RegisterTaskBranchCommand),
});
await msg.complete(result);
return Loop.continue(undefined);
@@ -857,10 +857,10 @@ export const projectActions = {
);
},
async createHandoff(c: any, cmd: CreateHandoffCommand): Promise<HandoffRecord> {
async createTask(c: any, cmd: CreateTaskCommand): Promise<TaskRecord> {
const self = selfProject(c);
return expectQueueResponse<HandoffRecord>(
await self.send(projectWorkflowQueueName("project.command.createHandoff"), cmd, {
return expectQueueResponse<TaskRecord>(
await self.send(projectWorkflowQueueName("project.command.createTask"), cmd, {
wait: true,
timeout: 12 * 60_000,
}),
@ -868,42 +868,42 @@ export const projectActions = {
},
async listReservedBranches(c: any, _cmd?: ListReservedBranchesCommand): Promise<string[]> {
await ensureHandoffIndexHydratedForRead(c);
await ensureTaskIndexHydratedForRead(c);
const rows = await c.db.select({ branchName: handoffIndex.branchName }).from(handoffIndex).where(isNotNull(handoffIndex.branchName)).all();
const rows = await c.db.select({ branchName: taskIndex.branchName }).from(taskIndex).where(isNotNull(taskIndex.branchName)).all();
return rows.map((row) => row.branchName).filter((name): name is string => typeof name === "string" && name.trim().length > 0);
},
async registerHandoffBranch(c: any, cmd: RegisterHandoffBranchCommand): Promise<{ branchName: string; headSha: string }> {
async registerTaskBranch(c: any, cmd: RegisterTaskBranchCommand): Promise<{ branchName: string; headSha: string }> {
const self = selfProject(c);
return expectQueueResponse<{ branchName: string; headSha: string }>(
await self.send(projectWorkflowQueueName("project.command.registerHandoffBranch"), cmd, {
await self.send(projectWorkflowQueueName("project.command.registerTaskBranch"), cmd, {
wait: true,
timeout: 5 * 60_000,
}),
);
},
async hydrateHandoffIndex(c: any, cmd?: HydrateHandoffIndexCommand): Promise<void> {
async hydrateTaskIndex(c: any, cmd?: HydrateTaskIndexCommand): Promise<void> {
const self = selfProject(c);
await self.send(projectWorkflowQueueName("project.command.hydrateHandoffIndex"), cmd ?? {}, {
await self.send(projectWorkflowQueueName("project.command.hydrateTaskIndex"), cmd ?? {}, {
wait: true,
timeout: 60_000,
});
},
async listHandoffSummaries(c: any, cmd?: ListHandoffSummariesCommand): Promise<HandoffSummary[]> {
async listTaskSummaries(c: any, cmd?: ListTaskSummariesCommand): Promise<TaskSummary[]> {
const body = cmd ?? {};
const records: HandoffSummary[] = [];
const records: TaskSummary[] = [];
await ensureHandoffIndexHydratedForRead(c);
await ensureTaskIndexHydratedForRead(c);
const handoffRows = await c.db.select({ handoffId: handoffIndex.handoffId }).from(handoffIndex).orderBy(desc(handoffIndex.updatedAt)).all();
const taskRows = await c.db.select({ taskId: taskIndex.taskId }).from(taskIndex).orderBy(desc(taskIndex.updatedAt)).all();
for (const row of handoffRows) {
for (const row of taskRows) {
try {
const h = getHandoff(c, c.state.workspaceId, c.state.repoId, row.handoffId);
const h = getTask(c, c.state.workspaceId, c.state.repoId, row.taskId);
const record = await h.get();
if (!body.includeArchived && record.status === "archived") {
@@ -913,26 +913,26 @@ export const projectActions = {
records.push({
workspaceId: record.workspaceId,
repoId: record.repoId,
handoffId: record.handoffId,
taskId: record.taskId,
branchName: record.branchName,
title: record.title,
status: record.status,
updatedAt: record.updatedAt,
});
} catch (error) {
if (isStaleHandoffReferenceError(error)) {
await deleteStaleHandoffIndexRow(c, row.handoffId);
logActorWarning("project", "pruned stale handoff index row during summary listing", {
if (isStaleTaskReferenceError(error)) {
await deleteStaleTaskIndexRow(c, row.taskId);
logActorWarning("project", "pruned stale task index row during summary listing", {
workspaceId: c.state.workspaceId,
repoId: c.state.repoId,
handoffId: row.handoffId,
taskId: row.taskId,
});
continue;
}
logActorWarning("project", "failed loading handoff summary row", {
logActorWarning("project", "failed loading task summary row", {
workspaceId: c.state.workspaceId,
repoId: c.state.repoId,
handoffId: row.handoffId,
taskId: row.taskId,
error: resolveErrorMessage(error),
});
}
@@ -942,22 +942,22 @@ export const projectActions = {
return records;
},
async getHandoffEnriched(c: any, cmd: GetHandoffEnrichedCommand): Promise<HandoffRecord> {
await ensureHandoffIndexHydratedForRead(c);
async getTaskEnriched(c: any, cmd: GetTaskEnrichedCommand): Promise<TaskRecord> {
await ensureTaskIndexHydratedForRead(c);
const row = await c.db.select({ handoffId: handoffIndex.handoffId }).from(handoffIndex).where(eq(handoffIndex.handoffId, cmd.handoffId)).get();
const row = await c.db.select({ taskId: taskIndex.taskId }).from(taskIndex).where(eq(taskIndex.taskId, cmd.taskId)).get();
if (!row) {
throw new Error(`Unknown handoff in repo ${c.state.repoId}: ${cmd.handoffId}`);
throw new Error(`Unknown task in repo ${c.state.repoId}: ${cmd.taskId}`);
}
try {
const h = getHandoff(c, c.state.workspaceId, c.state.repoId, cmd.handoffId);
const h = getTask(c, c.state.workspaceId, c.state.repoId, cmd.taskId);
const record = await h.get();
return await enrichHandoffRecord(c, record);
return await enrichTaskRecord(c, record);
} catch (error) {
if (isStaleHandoffReferenceError(error)) {
await deleteStaleHandoffIndexRow(c, cmd.handoffId);
throw new Error(`Unknown handoff in repo ${c.state.repoId}: ${cmd.handoffId}`);
if (isStaleTaskReferenceError(error)) {
await deleteStaleTaskIndexRow(c, cmd.taskId);
throw new Error(`Unknown task in repo ${c.state.repoId}: ${cmd.taskId}`);
}
throw error;
}
@@ -965,7 +965,7 @@ export const projectActions = {
async getRepoOverview(c: any, _cmd?: RepoOverviewCommand): Promise<RepoOverview> {
const localPath = await ensureProjectReadyForRead(c);
await ensureHandoffIndexHydratedForRead(c);
await ensureTaskIndexHydratedForRead(c);
await forceProjectSync(c, localPath);
const { driver } = getActorRuntimeContext();
@@ -989,45 +989,45 @@ export const projectActions = {
.from(branches)
.all();
const handoffRows = await c.db
const taskRows = await c.db
.select({
handoffId: handoffIndex.handoffId,
branchName: handoffIndex.branchName,
updatedAt: handoffIndex.updatedAt,
taskId: taskIndex.taskId,
branchName: taskIndex.branchName,
updatedAt: taskIndex.updatedAt,
})
.from(handoffIndex)
.from(taskIndex)
.all();
const handoffMetaByBranch = new Map<string, { handoffId: string; title: string | null; status: HandoffRecord["status"] | null; updatedAt: number }>();
const taskMetaByBranch = new Map<string, { taskId: string; title: string | null; status: TaskRecord["status"] | null; updatedAt: number }>();
for (const row of handoffRows) {
for (const row of taskRows) {
if (!row.branchName) {
continue;
}
try {
const h = getHandoff(c, c.state.workspaceId, c.state.repoId, row.handoffId);
const h = getTask(c, c.state.workspaceId, c.state.repoId, row.taskId);
const record = await h.get();
handoffMetaByBranch.set(row.branchName, {
handoffId: row.handoffId,
taskMetaByBranch.set(row.branchName, {
taskId: row.taskId,
title: record.title ?? null,
status: record.status,
updatedAt: record.updatedAt,
});
} catch (error) {
if (isStaleHandoffReferenceError(error)) {
await deleteStaleHandoffIndexRow(c, row.handoffId);
logActorWarning("project", "pruned stale handoff index row during repo overview", {
if (isStaleTaskReferenceError(error)) {
await deleteStaleTaskIndexRow(c, row.taskId);
logActorWarning("project", "pruned stale task index row during repo overview", {
workspaceId: c.state.workspaceId,
repoId: c.state.repoId,
handoffId: row.handoffId,
taskId: row.taskId,
branchName: row.branchName,
});
continue;
}
logActorWarning("project", "failed loading handoff while building repo overview", {
logActorWarning("project", "failed loading task while building repo overview", {
workspaceId: c.state.workspaceId,
repoId: c.state.repoId,
handoffId: row.handoffId,
taskId: row.taskId,
branchName: row.branchName,
error: resolveErrorMessage(error),
});
@@ -1060,7 +1060,7 @@ export const projectActions = {
const branchRows = combinedRows.map((ordering) => {
const row = detailByBranch.get(ordering.branchName)!;
const handoffMeta = handoffMetaByBranch.get(row.branchName);
const taskMeta = taskMetaByBranch.get(row.branchName);
const pr = prByBranch.get(row.branchName);
return {
branchName: row.branchName,
@@ -1070,9 +1070,9 @@
diffStat: row.diffStat ?? null,
hasUnpushed: Boolean(row.hasUnpushed),
conflictsWithMain: Boolean(row.conflictsWithMain),
handoffId: handoffMeta?.handoffId ?? null,
handoffTitle: handoffMeta?.title ?? null,
handoffStatus: handoffMeta?.status ?? null,
taskId: taskMeta?.taskId ?? null,
taskTitle: taskMeta?.title ?? null,
taskStatus: taskMeta?.status ?? null,
prNumber: pr?.prNumber ?? null,
prState: pr?.prState ?? null,
prUrl: pr?.prUrl ?? null,
@@ -1081,7 +1081,7 @@
reviewer: pr?.reviewer ?? null,
firstSeenAt: row.firstSeenAt ?? null,
lastSeenAt: row.lastSeenAt ?? null,
updatedAt: Math.max(row.updatedAt, handoffMeta?.updatedAt ?? 0),
updatedAt: Math.max(row.updatedAt, taskMeta?.updatedAt ?? 0),
};
});
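The renamed `projectActions` wrappers above all follow one pattern: enqueue a named command, wait for the workflow loop to `complete` it, and unwrap the reply with `expectQueueResponse`. A minimal synchronous simulation of that request/response contract (the `MiniQueue` class and message shapes are illustrative assumptions, not the rivetkit API):

```typescript
// Simplified, synchronous model of the queue round-trip used by projectActions.
// Real code sends through rivetkit queues with { wait: true, timeout }; here the
// "queue" is an in-memory handler map that completes commands inline.
type Handler = (body: unknown) => unknown;

class MiniQueue {
  private handlers = new Map<string, Handler>();
  on(name: string, handler: Handler): void {
    this.handlers.set(name, handler);
  }
  send(name: string, body: unknown): unknown {
    const handler = this.handlers.get(name);
    if (!handler) throw new Error(`no handler for ${name}`);
    return handler(body); // stands in for the workflow loop calling msg.complete(...)
  }
}

// Mirrors the helper in the diff: reject an empty reply, pass through the payload.
function expectQueueResponse<T>(value: unknown): T {
  if (value === undefined) throw new Error("queue returned no response");
  return value as T;
}

const q = new MiniQueue();
q.on("project.command.createTask", () => ({ taskId: "t1", status: "created" }));

const created = expectQueueResponse<{ taskId: string; status: string }>(
  q.send("project.command.createTask", { task: "demo task" }),
);
console.log(created.taskId, created.status); // t1 created
```

Funneling every mutation through the queue keeps the workflow loop as the single writer, which is why the actions layer never touches the project database directly.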

@@ -0,0 +1,5 @@
import { db } from "rivetkit/db/drizzle";
import * as schema from "./schema.js";
import migrations from "./migrations.js";
export const projectDb = db({ schema, migrations });

@@ -1,5 +1,5 @@
CREATE TABLE `handoff_index` (
`handoff_id` text PRIMARY KEY NOT NULL,
CREATE TABLE `task_index` (
`task_id` text PRIMARY KEY NOT NULL,
`branch_name` text,
`created_at` integer NOT NULL,
`updated_at` integer NOT NULL

@@ -77,11 +77,11 @@
"uniqueConstraints": {},
"checkConstraints": {}
},
"handoff_index": {
"name": "handoff_index",
"task_index": {
"name": "task_index",
"columns": {
"handoff_id": {
"name": "handoff_id",
"task_id": {
"name": "task_id",
"type": "text",
"primaryKey": true,
"notNull": true,

@@ -69,8 +69,8 @@ CREATE TABLE \`pr_cache\` (
);
--> statement-breakpoint
ALTER TABLE \`branches\` DROP COLUMN \`worktree_path\`;`,
m0002: `CREATE TABLE \`handoff_index\` (
\`handoff_id\` text PRIMARY KEY NOT NULL,
m0002: `CREATE TABLE \`task_index\` (
\`task_id\` text PRIMARY KEY NOT NULL,
\`branch_name\` text,
\`created_at\` integer NOT NULL,
\`updated_at\` integer NOT NULL

@@ -36,8 +36,8 @@ export const prCache = sqliteTable("pr_cache", {
updatedAt: integer("updated_at").notNull(),
});
export const handoffIndex = sqliteTable("handoff_index", {
handoffId: text("handoff_id").notNull().primaryKey(),
export const taskIndex = sqliteTable("task_index", {
taskId: text("task_id").notNull().primaryKey(),
branchName: text("branch_name"),
createdAt: integer("created_at").notNull(),
updatedAt: integer("updated_at").notNull(),

@@ -21,7 +21,7 @@ export const project = actor({
remoteUrl: input.remoteUrl,
localPath: null as string | null,
syncActorsStarted: false,
handoffIndexHydrated: false,
taskIndexHydrated: false,
}),
actions: projectActions,
run: workflow(runProjectWorkflow),

@@ -0,0 +1,5 @@
import { db } from "rivetkit/db/drizzle";
import * as schema from "./schema.js";
import migrations from "./migrations.js";
export const sandboxInstanceDb = db({ schema, migrations });

@@ -1,4 +1,4 @@
import { integer, sqliteTable, text } from "drizzle-orm/sqlite-core";
import { integer, sqliteTable, text } from "rivetkit/db/drizzle";
// SQLite is per sandbox-instance actor instance.
export const sandboxInstance = sqliteTable("sandbox_instance", {

@@ -2,7 +2,7 @@ import { setTimeout as delay } from "node:timers/promises";
import { eq } from "drizzle-orm";
import { actor, queue } from "rivetkit";
import { Loop, workflow } from "rivetkit/workflow";
import type { ProviderId } from "@openhandoff/shared";
import type { ProviderId } from "@sandbox-agent/foundry-shared";
import type {
ProcessCreateRequest,
ProcessInfo,
@@ -482,28 +482,19 @@ export const sandboxInstance = actor({
return await client.listProcesses();
},
async getProcessLogs(
c: any,
request: { processId: string; query?: ProcessLogFollowQuery }
): Promise<ProcessLogsResponse> {
async getProcessLogs(c: any, request: { processId: string; query?: ProcessLogFollowQuery }): Promise<ProcessLogsResponse> {
const client = await getSandboxAgentClient(c);
return await client.getProcessLogs(request.processId, request.query);
},
async stopProcess(
c: any,
request: { processId: string; query?: ProcessSignalQuery }
): Promise<ProcessInfo> {
async stopProcess(c: any, request: { processId: string; query?: ProcessSignalQuery }): Promise<ProcessInfo> {
const client = await getSandboxAgentClient(c);
const stopped = await client.stopProcess(request.processId, request.query);
broadcastProcessesUpdated(c);
return stopped;
},
async killProcess(
c: any,
request: { processId: string; query?: ProcessSignalQuery }
): Promise<ProcessInfo> {
async killProcess(c: any, request: { processId: string; query?: ProcessSignalQuery }): Promise<ProcessInfo> {
const client = await getSandboxAgentClient(c);
const killed = await client.killProcess(request.processId, request.query);
broadcastProcessesUpdated(c);

View file

@@ -1,14 +1,14 @@
 import { actor, queue } from "rivetkit";
 import { workflow } from "rivetkit/workflow";
-import type { ProviderId } from "@openhandoff/shared";
-import { getHandoff, getSandboxInstance, selfHandoffStatusSync } from "../handles.js";
+import type { ProviderId } from "@sandbox-agent/foundry-shared";
+import { getTask, getSandboxInstance, selfTaskStatusSync } from "../handles.js";
 import { logActorWarning, resolveErrorMessage, resolveErrorStack } from "../logging.js";
 import { type PollingControlState, runWorkflowPollingLoop } from "../polling.js";
-export interface HandoffStatusSyncInput {
+export interface TaskStatusSyncInput {
 workspaceId: string;
 repoId: string;
-handoffId: string;
+taskId: string;
 providerId: ProviderId;
 sandboxId: string;
 sessionId: string;
@@ -19,27 +19,27 @@ interface SetIntervalCommand {
 intervalMs: number;
 }
-interface HandoffStatusSyncState extends PollingControlState {
+interface TaskStatusSyncState extends PollingControlState {
 workspaceId: string;
 repoId: string;
-handoffId: string;
+taskId: string;
 providerId: ProviderId;
 sandboxId: string;
 sessionId: string;
 }
 const CONTROL = {
-start: "handoff.status_sync.control.start",
-stop: "handoff.status_sync.control.stop",
-setInterval: "handoff.status_sync.control.set_interval",
-force: "handoff.status_sync.control.force",
+start: "task.status_sync.control.start",
+stop: "task.status_sync.control.stop",
+setInterval: "task.status_sync.control.set_interval",
+force: "task.status_sync.control.force",
 } as const;
-async function pollSessionStatus(c: { state: HandoffStatusSyncState }): Promise<void> {
+async function pollSessionStatus(c: { state: TaskStatusSyncState }): Promise<void> {
 const sandboxInstance = getSandboxInstance(c, c.state.workspaceId, c.state.providerId, c.state.sandboxId);
 const status = await sandboxInstance.sessionStatus({ sessionId: c.state.sessionId });
-const parent = getHandoff(c, c.state.workspaceId, c.state.repoId, c.state.handoffId);
+const parent = getTask(c, c.state.workspaceId, c.state.repoId, c.state.taskId);
 await parent.syncWorkbenchSessionStatus({
 sessionId: c.state.sessionId,
 status: status.status,
@@ -47,7 +47,7 @@ async function pollSessionStatus(c: { state: HandoffStatusSyncState }): Promise<
 });
 }
-export const handoffStatusSync = actor({
+export const taskStatusSync = actor({
 queues: {
 [CONTROL.start]: queue(),
 [CONTROL.stop]: queue(),
@@ -58,10 +58,10 @@ export const handoffStatusSync = actor({
 // Polling actors rely on timer-based wakeups; sleeping would pause the timer and stop polling.
 noSleep: true,
 },
-createState: (_c, input: HandoffStatusSyncInput): HandoffStatusSyncState => ({
+createState: (_c, input: TaskStatusSyncInput): TaskStatusSyncState => ({
 workspaceId: input.workspaceId,
 repoId: input.repoId,
-handoffId: input.handoffId,
+taskId: input.taskId,
 providerId: input.providerId,
 sandboxId: input.sandboxId,
 sessionId: input.sessionId,
@@ -70,34 +70,34 @@ export const handoffStatusSync = actor({
 }),
 actions: {
 async start(c): Promise<void> {
-const self = selfHandoffStatusSync(c);
+const self = selfTaskStatusSync(c);
 await self.send(CONTROL.start, {}, { wait: true, timeout: 15_000 });
 },
 async stop(c): Promise<void> {
-const self = selfHandoffStatusSync(c);
+const self = selfTaskStatusSync(c);
 await self.send(CONTROL.stop, {}, { wait: true, timeout: 15_000 });
 },
 async setIntervalMs(c, payload: SetIntervalCommand): Promise<void> {
-const self = selfHandoffStatusSync(c);
+const self = selfTaskStatusSync(c);
 await self.send(CONTROL.setInterval, payload, { wait: true, timeout: 15_000 });
 },
 async force(c): Promise<void> {
-const self = selfHandoffStatusSync(c);
+const self = selfTaskStatusSync(c);
 await self.send(CONTROL.force, {}, { wait: true, timeout: 5 * 60_000 });
 },
 },
 run: workflow(async (ctx) => {
-await runWorkflowPollingLoop<HandoffStatusSyncState>(ctx, {
-loopName: "handoff-status-sync-loop",
+await runWorkflowPollingLoop<TaskStatusSyncState>(ctx, {
+loopName: "task-status-sync-loop",
 control: CONTROL,
 onPoll: async (loopCtx) => {
 try {
 await pollSessionStatus(loopCtx);
 } catch (error) {
-logActorWarning("handoff-status-sync", "poll failed", {
+logActorWarning("task-status-sync", "poll failed", {
 error: resolveErrorMessage(error),
 stack: resolveErrorStack(error),
 });
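Aside (not part of the diff): the renamed control queues above follow a simple pattern, where each `task.status_sync.control.*` message adjusts the polling loop's state. A minimal plain-TypeScript sketch of that mapping follows; rivetkit's queue and workflow semantics are assumed, and `applyControl` is a hypothetical helper, not project code.

```typescript
// Illustrative model of the control-queue pattern used by taskStatusSync.
type ControlState = { running: boolean; intervalMs: number; forcedPolls: number };

const CONTROL = {
  start: "task.status_sync.control.start",
  stop: "task.status_sync.control.stop",
  setInterval: "task.status_sync.control.set_interval",
  force: "task.status_sync.control.force",
} as const;

type ControlMessage =
  | { name: typeof CONTROL.start }
  | { name: typeof CONTROL.stop }
  | { name: typeof CONTROL.setInterval; intervalMs: number }
  | { name: typeof CONTROL.force };

// Apply one control message to the loop state, mirroring how the workflow
// polling loop reacts to each queue.
function applyControl(state: ControlState, msg: ControlMessage): ControlState {
  switch (msg.name) {
    case CONTROL.start:
      return { ...state, running: true };
    case CONTROL.stop:
      return { ...state, running: false };
    case CONTROL.setInterval:
      return { ...state, intervalMs: msg.intervalMs };
    case CONTROL.force:
      // A forced poll runs immediately regardless of the timer.
      return { ...state, forcedPolls: state.forcedPolls + 1 };
  }
}

let state: ControlState = { running: false, intervalMs: 5_000, forcedPolls: 0 };
state = applyControl(state, { name: CONTROL.start });
state = applyControl(state, { name: CONTROL.setInterval, intervalMs: 1_000 });
state = applyControl(state, { name: CONTROL.force });
console.log(state); // { running: true, intervalMs: 1000, forcedPolls: 1 }
```

Modeling the transitions as a pure function makes the rename mechanical: only the queue-name strings change, never the state shape.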

View file

@@ -0,0 +1,5 @@
+import { db } from "rivetkit/db/drizzle";
+import * as schema from "./schema.js";
+import migrations from "./migrations.js";
+export const taskDb = db({ schema, migrations });

View file

@@ -0,0 +1,6 @@
+import { defineConfig } from "rivetkit/db/drizzle";
+export default defineConfig({
+out: "./src/actors/task/db/drizzle",
+schema: "./src/actors/task/db/schema.ts",
+});

View file

@ -1,4 +1,4 @@
CREATE TABLE `handoff` (
CREATE TABLE `task` (
`id` integer PRIMARY KEY NOT NULL,
`branch_name` text NOT NULL,
`title` text NOT NULL,
@@ -14,7 +14,7 @@ CREATE TABLE `handoff` (
 `updated_at` integer NOT NULL
 );
 --> statement-breakpoint
-CREATE TABLE `handoff_runtime` (
+CREATE TABLE `task_runtime` (
 `id` integer PRIMARY KEY NOT NULL,
 `sandbox_id` text,
 `session_id` text,

View file

@@ -0,0 +1,3 @@
+ALTER TABLE `task` DROP COLUMN `auto_committed`;--> statement-breakpoint
+ALTER TABLE `task` DROP COLUMN `pushed`;--> statement-breakpoint
+ALTER TABLE `task` DROP COLUMN `needs_push`;

View file

@@ -1,7 +1,7 @@
-ALTER TABLE `handoff_runtime` RENAME COLUMN "sandbox_id" TO "active_sandbox_id";--> statement-breakpoint
-ALTER TABLE `handoff_runtime` RENAME COLUMN "session_id" TO "active_session_id";--> statement-breakpoint
-ALTER TABLE `handoff_runtime` RENAME COLUMN "switch_target" TO "active_switch_target";--> statement-breakpoint
-CREATE TABLE `handoff_sandboxes` (
+ALTER TABLE `task_runtime` RENAME COLUMN "sandbox_id" TO "active_sandbox_id";--> statement-breakpoint
+ALTER TABLE `task_runtime` RENAME COLUMN "session_id" TO "active_session_id";--> statement-breakpoint
+ALTER TABLE `task_runtime` RENAME COLUMN "switch_target" TO "active_switch_target";--> statement-breakpoint
+CREATE TABLE `task_sandboxes` (
 `sandbox_id` text PRIMARY KEY NOT NULL,
 `provider_id` text NOT NULL,
 `switch_target` text NOT NULL,
@@ -11,9 +11,9 @@ CREATE TABLE `handoff_sandboxes` (
 `updated_at` integer NOT NULL
 );
 --> statement-breakpoint
-ALTER TABLE `handoff_runtime` ADD `active_cwd` text;
+ALTER TABLE `task_runtime` ADD `active_cwd` text;
 --> statement-breakpoint
-INSERT INTO `handoff_sandboxes` (
+INSERT INTO `task_sandboxes` (
 `sandbox_id`,
 `provider_id`,
 `switch_target`,
@@ -24,13 +24,13 @@ INSERT INTO `handoff_sandboxes` (
 )
 SELECT
 r.`active_sandbox_id`,
-(SELECT h.`provider_id` FROM `handoff` h WHERE h.`id` = 1),
+(SELECT h.`provider_id` FROM `task` h WHERE h.`id` = 1),
 r.`active_switch_target`,
 r.`active_cwd`,
 r.`status_message`,
-COALESCE((SELECT h.`created_at` FROM `handoff` h WHERE h.`id` = 1), r.`updated_at`),
+COALESCE((SELECT h.`created_at` FROM `task` h WHERE h.`id` = 1), r.`updated_at`),
 r.`updated_at`
-FROM `handoff_runtime` r
+FROM `task_runtime` r
 WHERE
 r.`id` = 1
 AND r.`active_sandbox_id` IS NOT NULL

View file

@ -1,9 +1,9 @@
-- Allow handoffs to exist before their branch/title are determined.
-- Allow tasks to exist before their branch/title are determined.
-- Drizzle doesn't support altering column nullability in SQLite directly, so rebuild the table.
PRAGMA foreign_keys=off;
CREATE TABLE `handoff__new` (
CREATE TABLE `task__new` (
`id` integer PRIMARY KEY NOT NULL,
`branch_name` text,
`title` text,
@@ -16,7 +16,7 @@ CREATE TABLE `handoff__new` (
 `updated_at` integer NOT NULL
 );
-INSERT INTO `handoff__new` (
+INSERT INTO `task__new` (
 `id`,
 `branch_name`,
 `title`,
@@ -39,10 +39,10 @@ SELECT
 `pr_submitted`,
 `created_at`,
 `updated_at`
-FROM `handoff`;
+FROM `task`;
-DROP TABLE `handoff`;
-ALTER TABLE `handoff__new` RENAME TO `handoff`;
+DROP TABLE `task`;
+ALTER TABLE `task__new` RENAME TO `task`;
 PRAGMA foreign_keys=on;
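The migration above follows SQLite's standard table-rebuild recipe, since SQLite cannot alter a column's nullability in place: disable foreign keys, create the replacement table, copy the rows, drop the original, rename, re-enable foreign keys. A hedged sketch of that recipe as data, using a hypothetical `rebuildTableStatements` helper (names mirror the migration; this is illustrative, not project code):

```typescript
// Generate the ordered SQL statements for a SQLite table rebuild.
function rebuildTableStatements(table: string, createNewSql: string, columns: string[]): string[] {
  const cols = columns.map((c) => `\`${c}\``).join(", ");
  return [
    "PRAGMA foreign_keys=off;",
    createNewSql, // CREATE TABLE `task__new` (...) with the relaxed columns
    `INSERT INTO \`${table}__new\` (${cols}) SELECT ${cols} FROM \`${table}\`;`,
    `DROP TABLE \`${table}\`;`,
    `ALTER TABLE \`${table}__new\` RENAME TO \`${table}\`;`,
    "PRAGMA foreign_keys=on;",
  ];
}

const stmts = rebuildTableStatements(
  "task",
  "CREATE TABLE `task__new` (`id` integer PRIMARY KEY NOT NULL, `branch_name` text, `title` text);",
  ["id", "branch_name", "title"],
);
console.log(stmts.length); // 6
```

The ordering matters: the copy must happen before the drop, and the rename must come last so readers never observe a missing `task` table mid-migration.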

View file

@@ -5,10 +5,10 @@
 PRAGMA foreign_keys=off;
 --> statement-breakpoint
-DROP TABLE IF EXISTS `handoff__new`;
+DROP TABLE IF EXISTS `task__new`;
 --> statement-breakpoint
-CREATE TABLE `handoff__new` (
+CREATE TABLE `task__new` (
 `id` integer PRIMARY KEY NOT NULL,
 `branch_name` text,
 `title` text,
@@ -22,7 +22,7 @@ CREATE TABLE `handoff__new` (
 );
 --> statement-breakpoint
-INSERT INTO `handoff__new` (
+INSERT INTO `task__new` (
 `id`,
 `branch_name`,
 `title`,
@@ -45,13 +45,13 @@ SELECT
 `pr_submitted`,
 `created_at`,
 `updated_at`
-FROM `handoff`;
+FROM `task`;
 --> statement-breakpoint
-DROP TABLE `handoff`;
+DROP TABLE `task`;
 --> statement-breakpoint
-ALTER TABLE `handoff__new` RENAME TO `handoff`;
+ALTER TABLE `task__new` RENAME TO `task`;
 --> statement-breakpoint
 PRAGMA foreign_keys=on;

View file

@@ -0,0 +1 @@
+ALTER TABLE `task_sandboxes` ADD `sandbox_actor_id` text;

View file

@@ -1,4 +1,4 @@
-CREATE TABLE `handoff_workbench_sessions` (
+CREATE TABLE `task_workbench_sessions` (
 `session_id` text PRIMARY KEY NOT NULL,
 `session_name` text NOT NULL,
 `model` text NOT NULL,

Some files were not shown because too many files have changed in this diff.