Mirror of https://github.com/harivansh-afk/sandbox-agent.git
Synced 2026-04-15 07:04:48 +00:00

wip (#256)

This commit is contained in:
parent 99abb9d42e
commit 57a07f6a0a

11 changed files with 206 additions and 113 deletions
@@ -16,6 +16,47 @@ OrganizationActor
 └─ SandboxInstanceActor(sandboxProviderId, sandboxId) × N
 ```
+
+## Coordinator Pattern
+
+Actors follow a coordinator pattern where each coordinator is responsible for:
+
+1. **Index tables** — keeping a local SQLite index/summary of its child actors' data
+2. **Create/destroy** — handling the lifecycle of child actors
+3. **Routing** — resolving lookups to the correct child actor
+
+Children push updates **up** to their direct coordinator only. Coordinators broadcast changes to connected clients. This keeps the read path local (no fan-out to children).
+
+### Coordinator hierarchy and index tables
+
+```text
+OrganizationActor (coordinator for repos + auth users)
+│
+│ Index tables:
+│ ├─ repos → RepositoryActor index (repo catalog)
+│ ├─ taskLookup → TaskActor index (taskId → repoId routing)
+│ ├─ taskSummaries → TaskActor index (materialized sidebar projection)
+│ ├─ authSessionIndex → AuthUserActor index (session token → userId)
+│ ├─ authEmailIndex → AuthUserActor index (email → userId)
+│ └─ authAccountIndex → AuthUserActor index (OAuth account → userId)
+│
+├─ RepositoryActor (coordinator for tasks)
+│  │
+│  │ Index tables:
+│  │ └─ taskIndex → TaskActor index (taskId → branchName)
+│  │
+│  └─ TaskActor (coordinator for sessions + sandboxes)
+│     │
+│     │ Index tables:
+│     │ ├─ taskWorkbenchSessions → Session index (session metadata, transcript, draft)
+│     │ └─ taskSandboxes → SandboxInstanceActor index (sandbox history)
+│     │
+│     └─ SandboxInstanceActor (leaf)
+│
+├─ HistoryActor (organization-scoped audit log, not a coordinator)
+└─ GithubDataActor (GitHub API cache, not a coordinator)
+```
+
+When adding a new index table, annotate it in the schema file with a doc comment identifying it as a coordinator index and which child actor it indexes (see existing examples).
+
 ## Ownership Rules
 
 - `OrganizationActor` is the organization coordinator and lookup/index owner.
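The coordinator pattern described in the hunk above (a local index maintained by child pushes, reads served without fan-out, routing via the index) can be sketched in a few lines of TypeScript. All names here (`TaskSummary`, `OrganizationCoordinator`, `pushTaskSummary`) are illustrative stand-ins, not the repo's actual actor API:

```typescript
interface TaskSummary {
  taskId: string;
  repoId: string;
  title: string;
}

class OrganizationCoordinator {
  // Local index table: taskId → summary, maintained by child pushes.
  private taskSummaries = new Map<string, TaskSummary>();

  // Write path: a child TaskActor pushes its summary up to the coordinator.
  pushTaskSummary(summary: TaskSummary): void {
    this.taskSummaries.set(summary.taskId, summary);
  }

  // Read path: served entirely from the local index, no child calls.
  listTaskSummaries(): TaskSummary[] {
    return [...this.taskSummaries.values()];
  }

  // Routing: resolve taskId → repoId from the index.
  routeTask(taskId: string): string | undefined {
    return this.taskSummaries.get(taskId)?.repoId;
  }
}

const org = new OrganizationCoordinator();
org.pushTaskSummary({ taskId: "t1", repoId: "r1", title: "Fix login" });
org.pushTaskSummary({ taskId: "t2", repoId: "r2", title: "Add sidebar" });
```

The key property is that `listTaskSummaries` and `routeTask` never touch a child: staleness is bounded by how promptly children push, which is the trade the doc text makes explicit.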
@@ -29,8 +70,24 @@ OrganizationActor
 - `SandboxInstanceActor` stays separate from `TaskActor`; tasks/sessions reference it by identity.
 - The backend stores no local git state. No clones, no refs, no working trees, and no git-spice. Repository metadata comes from GitHub API data and webhook events. Any working-tree git operation runs inside a sandbox via `executeInSandbox()`.
 - When a backend request path must aggregate multiple independent actor calls or reads, prefer bounded parallelism over sequential fan-out when correctness permits. Do not serialize independent work by default.
+- Only a coordinator creates/destroys its children. Do not create child actors from outside the coordinator.
+- Children push state changes up to their direct coordinator only — never skip levels (e.g., task pushes to repo, not directly to org, unless org is the direct coordinator for that index).
+- Read paths must use the coordinator's local index tables. Do not fan out to child actors on the hot read path.
+- Never build "enriched" read actions that chain through multiple actors (e.g., coordinator → child actor → sibling actor). If data from multiple actors is needed for a read, it should already be materialized in the coordinator's index tables via push updates. If it's not there, fix the write path to push it — do not add a fan-out read path.
+
+## Multiplayer Correctness
+
+Per-user UI state must live on the user actor, not on shared task/session actors. This is critical for multiplayer — multiple users may view the same task simultaneously with different active sessions, unread states, and in-progress drafts.
+
+**Per-user state (user actor):** active session tab, unread counts, draft text, draft attachments. Keyed by `(userId, taskId, sessionId)`.
+
+**Task-global state (task actor):** session transcript, session model, session runtime status, sandbox identity, task status, branch name, PR state. These are shared across all users viewing the task — that is correct behavior.
+
+Do not store per-user preferences, selections, or ephemeral UI state on shared actors. If a field's value should differ between two users looking at the same task, it belongs on the user actor.
+
 ## Maintenance
 
 - Keep this file up to date whenever actor ownership, hierarchy, or lifecycle responsibilities change.
 - If the real actor tree diverges from this document, update this document in the same change.
+- When adding, removing, or renaming coordinator index tables, update the hierarchy diagram above in the same change.
+- When adding a new coordinator index table in a schema file, add a doc comment identifying which child actor it indexes (pattern: `/** Coordinator index of {ChildActor} instances. ... */`).
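The multiplayer rule added above comes down to a keying scheme: per-user UI state is addressed by `(userId, taskId, sessionId)`, so two users viewing the same task never collide. A hypothetical in-memory stand-in for the user-actor store (not the real actor code):

```typescript
interface PerUserSessionState {
  activeSessionTab: string | null;
  unreadCount: number;
  draftText: string;
}

const defaultState: PerUserSessionState = { activeSessionTab: null, unreadCount: 0, draftText: "" };

class UserStateStore {
  private state = new Map<string, PerUserSessionState>();

  // Per-user state is keyed by (userId, taskId, sessionId), per the doc text.
  private key(userId: string, taskId: string, sessionId: string): string {
    return `${userId}:${taskId}:${sessionId}`;
  }

  get(userId: string, taskId: string, sessionId: string): PerUserSessionState {
    return this.state.get(this.key(userId, taskId, sessionId)) ?? { ...defaultState };
  }

  set(userId: string, taskId: string, sessionId: string, next: PerUserSessionState): void {
    this.state.set(this.key(userId, taskId, sessionId), next);
  }
}

// Two users viewing the same task keep independent drafts and unread counts:
const store = new UserStateStore();
store.set("alice", "task-1", "s-1", { activeSessionTab: "s-1", unreadCount: 0, draftText: "wip fix" });
store.set("bob", "task-1", "s-1", { activeSessionTab: "s-1", unreadCount: 3, draftText: "" });
```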
@@ -1,6 +1,7 @@
 // @ts-nocheck
 import { eq } from "drizzle-orm";
-import { actor } from "rivetkit";
+import { actor, queue } from "rivetkit";
+import { workflow, Loop } from "rivetkit/workflow";
 import type { FoundryOrganization } from "@sandbox-agent/foundry-shared";
 import { getActorRuntimeContext } from "../context.js";
 import { getOrCreateOrganization, getTask } from "../handles.js";
@@ -536,8 +537,69 @@ async function runFullSync(c: any, input: FullSyncInput = {}) {
   };
 }
 
+const GITHUB_DATA_QUEUE_NAMES = ["githubData.command.syncRepos"] as const;
+
+async function runGithubDataWorkflow(ctx: any): Promise<void> {
+  // Initial sync: if this actor was just created and has never synced,
+  // kick off the first full sync automatically.
+  await ctx.step({
+    name: "github-data-initial-sync",
+    timeout: 5 * 60_000,
+    run: async () => {
+      const meta = await readMeta(ctx);
+      if (meta.syncStatus !== "pending") {
+        return; // Already synced or syncing — skip initial sync
+      }
+      try {
+        await runFullSync(ctx, { label: "Importing repository catalog..." });
+      } catch (error) {
+        // Best-effort initial sync. Write the error to meta so the client
+        // sees the failure and can trigger a manual retry.
+        const currentMeta = await readMeta(ctx);
+        const organization = await getOrCreateOrganization(ctx, ctx.state.organizationId);
+        await organization.markOrganizationSyncFailed({
+          message: error instanceof Error ? error.message : "GitHub import failed",
+          installationStatus: currentMeta.installationStatus,
+        });
+      }
+    },
+  });
+
+  // Command loop for explicit sync requests (reload, re-import, etc.)
+  await ctx.loop("github-data-command-loop", async (loopCtx: any) => {
+    const msg = await loopCtx.queue.next("next-github-data-command", {
+      names: [...GITHUB_DATA_QUEUE_NAMES],
+      completable: true,
+    });
+    if (!msg) {
+      return Loop.continue(undefined);
+    }
+
+    try {
+      if (msg.name === "githubData.command.syncRepos") {
+        await loopCtx.step({
+          name: "github-data-sync-repos",
+          timeout: 5 * 60_000,
+          run: async () => {
+            const body = msg.body as FullSyncInput;
+            await runFullSync(loopCtx, body);
+          },
+        });
+        await msg.complete({ ok: true });
+        return Loop.continue(undefined);
+      }
+    } catch (error) {
+      const message = error instanceof Error ? error.message : String(error);
+      await msg.complete({ error: message }).catch(() => {});
+    }
+
+    return Loop.continue(undefined);
+  });
+}
+
 export const githubData = actor({
   db: githubDataDb,
+  queues: Object.fromEntries(GITHUB_DATA_QUEUE_NAMES.map((name) => [name, queue()])),
   options: {
     name: "GitHub Data",
     icon: "github",
@@ -546,6 +608,7 @@ export const githubData = actor({
   createState: (_c, input: GithubDataInput) => ({
     organizationId: input.organizationId,
   }),
+  run: workflow(runGithubDataWorkflow),
   actions: {
     async getSummary(c) {
       const repositories = await c.db.select().from(githubRepositories).all();
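The workflow added in the two hunks above has a recognizable shape: pull a named message off a completable queue, run the handler, and explicitly complete the message with an ok/error result so the sender is never left hanging. A toy in-memory stand-in for that loop shape, assuming nothing about the real rivetkit queue API:

```typescript
interface QueueMessage {
  name: string;
  body: string;
  complete: (result: { ok?: boolean; error?: string }) => void;
}

async function runCommandLoop(
  messages: QueueMessage[],
  handlers: Record<string, (body: string) => Promise<void>>,
): Promise<void> {
  for (const msg of messages) {
    try {
      const handler = handlers[msg.name];
      if (!handler) continue; // Unknown command names are skipped.
      await handler(msg.body);
      msg.complete({ ok: true });
    } catch (error) {
      // Complete with the error so a waiting sender sees the failure.
      msg.complete({ error: error instanceof Error ? error.message : String(error) });
    }
  }
}

const results: Array<{ ok?: boolean; error?: string }> = [];
const synced: string[] = [];
await runCommandLoop(
  [
    { name: "githubData.command.syncRepos", body: "org-1", complete: (r) => results.push(r) },
    { name: "githubData.command.syncRepos", body: "boom", complete: (r) => results.push(r) },
  ],
  {
    "githubData.command.syncRepos": async (body) => {
      if (body === "boom") throw new Error("sync failed");
      synced.push(body);
    },
  },
);
```

Completing with an error (rather than letting the loop crash) mirrors the diff's `msg.complete({ error: message }).catch(() => {})`: one bad command must not take down the command loop.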
@@ -61,11 +61,7 @@ interface RepoOverviewInput {
   repoId: string;
 }
 
-const ORGANIZATION_QUEUE_NAMES = [
-  "organization.command.createTask",
-  "organization.command.syncGithubOrganizationRepos",
-  "organization.command.syncGithubSession",
-] as const;
+const ORGANIZATION_QUEUE_NAMES = ["organization.command.createTask", "organization.command.syncGithubSession"] as const;
 
 const SANDBOX_AGENT_REPO = "rivet-dev/sandbox-agent";
 
 type OrganizationQueueName = (typeof ORGANIZATION_QUEUE_NAMES)[number];
@@ -384,19 +380,6 @@ export async function runOrganizationWorkflow(ctx: any): Promise<void> {
       await msg.complete({ ok: true });
       return Loop.continue(undefined);
     }
 
-    if (msg.name === "organization.command.syncGithubOrganizationRepos") {
-      await loopCtx.step({
-        name: "organization-sync-github-organization-repos",
-        timeout: 60_000,
-        run: async () => {
-          const { syncGithubOrganizationRepos } = await import("./app-shell.js");
-          await syncGithubOrganizationRepos(loopCtx, msg.body as { sessionId: string; organizationId: string });
-        },
-      });
-      await msg.complete({ ok: true });
-      return Loop.continue(undefined);
-    }
   } catch (error) {
     const message = resolveErrorMessage(error);
     logActorWarning("organization", "organization workflow command failed", {
@@ -596,49 +596,6 @@ async function syncGithubOrganizationsInternal(c: any, input: { sessionId: strin
   });
 }
 
-export async function syncGithubOrganizationRepos(c: any, input: { sessionId: string; organizationId: string }): Promise<void> {
-  assertAppOrganization(c);
-  const session = await requireSignedInSession(c, input.sessionId);
-  requireEligibleOrganization(session, input.organizationId);
-
-  const organizationHandle = await getOrCreateOrganization(c, input.organizationId);
-  const organizationState = await getOrganizationState(organizationHandle);
-  const githubData = await getOrCreateGithubData(c, input.organizationId);
-
-  try {
-    await githubData.fullSync({
-      accessToken: session.githubAccessToken,
-      connectedAccount: organizationState.snapshot.github.connectedAccount,
-      installationId: organizationState.githubInstallationId,
-      installationStatus: organizationState.snapshot.github.installationStatus,
-      githubLogin: organizationState.githubLogin,
-      kind: organizationState.snapshot.kind,
-      label: "Importing repository catalog...",
-    });
-
-    // Broadcast updated app snapshot so connected clients see the new repos
-    c.broadcast("appUpdated", {
-      type: "appUpdated",
-      snapshot: await buildAppSnapshot(c, input.sessionId),
-    });
-  } catch (error) {
-    const installationStatus =
-      error instanceof GitHubAppError && (error.status === 403 || error.status === 404)
-        ? "reconnect_required"
-        : organizationState.snapshot.github.installationStatus;
-    await organizationHandle.markOrganizationSyncFailed({
-      message: error instanceof Error ? error.message : "GitHub import failed",
-      installationStatus,
-    });
-
-    // Broadcast sync failure so the client updates status
-    c.broadcast("appUpdated", {
-      type: "appUpdated",
-      snapshot: await buildAppSnapshot(c, input.sessionId),
-    });
-  }
-}
-
 async function readOrganizationProfileRow(c: any) {
   assertOrganizationShell(c);
   return await c.db.select().from(organizationProfile).where(eq(organizationProfile.id, PROFILE_ROW_ID)).get();
@@ -1113,26 +1070,11 @@ export const organizationAppActions = {
     requireEligibleOrganization(session, input.organizationId);
     await getBetterAuthService().setActiveOrganization(input.sessionId, input.organizationId);
 
-    const organizationHandle = await getOrCreateOrganization(c, input.organizationId);
-    const organizationState = await getOrganizationState(organizationHandle);
-    if (organizationState.snapshot.github.syncStatus !== "synced") {
-      if (organizationState.snapshot.github.syncStatus !== "syncing") {
-        await organizationHandle.markOrganizationSyncStarted({
-          label: "Importing repository catalog...",
-        });
-
-        const self = selfOrganization(c);
-        await self.send(
-          "organization.command.syncGithubOrganizationRepos",
-          { sessionId: input.sessionId, organizationId: input.organizationId },
-          {
-            wait: false,
-          },
-        );
-      }
-
-      return await buildAppSnapshot(c, input.sessionId);
-    }
+    // Ensure the GitHub data actor exists. If it's newly created, its own
+    // workflow will detect the pending sync status and run the initial
+    // full sync automatically — no orchestration needed here.
+    await getOrCreateGithubData(c, input.organizationId);
 
     return await buildAppSnapshot(c, input.sessionId);
   },
@@ -1157,24 +1099,20 @@ export const organizationAppActions = {
     const session = await requireSignedInSession(c, input.sessionId);
     requireEligibleOrganization(session, input.organizationId);
 
-    const organizationHandle = await getOrCreateOrganization(c, input.organizationId);
-    const organizationState = await getOrganizationState(organizationHandle);
-    if (organizationState.snapshot.github.syncStatus === "syncing") {
+    const githubData = await getOrCreateGithubData(c, input.organizationId);
+    const summary = await githubData.getSummary({});
+    if (summary.syncStatus === "syncing") {
       return await buildAppSnapshot(c, input.sessionId);
     }
 
+    // Mark sync started on the organization, then send directly to the
+    // GitHub data actor's own workflow queue.
+    const organizationHandle = await getOrCreateOrganization(c, input.organizationId);
     await organizationHandle.markOrganizationSyncStarted({
       label: "Importing repository catalog...",
     });
 
-    const self = selfOrganization(c);
-    await self.send(
-      "organization.command.syncGithubOrganizationRepos",
-      { sessionId: input.sessionId, organizationId: input.organizationId },
-      {
-        wait: false,
-      },
-    );
+    await githubData.send("githubData.command.syncRepos", { label: "Importing repository catalog..." }, { wait: false });
 
     return await buildAppSnapshot(c, input.sessionId);
   },
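The reworked action above follows a small idempotency pattern: if a sync is already running, return the current snapshot instead of enqueueing a second sync; otherwise mark the sync started and fire-and-forget one command. A sketch of that guard logic with illustrative names (not the repo's real handles):

```typescript
type SyncStatus = "pending" | "syncing" | "synced" | "error";

interface SyncPort {
  getStatus(): SyncStatus;
  markStarted(): void;
  enqueueSync(): void;
}

function requestSync(port: SyncPort): "already-syncing" | "enqueued" {
  if (port.getStatus() === "syncing") {
    return "already-syncing"; // Don't double-trigger a running sync.
  }
  port.markStarted(); // Flip status first so concurrent callers see "syncing".
  port.enqueueSync(); // Fire-and-forget: the actor's own workflow runs it.
  return "enqueued";
}

let enqueued = 0;
let status: SyncStatus = "synced";
const port: SyncPort = {
  getStatus: () => status,
  markStarted: () => {
    status = "syncing";
  },
  enqueueSync: () => {
    enqueued += 1;
  },
};
const first = requestSync(port); // enqueues and flips status to "syncing"
const second = requestSync(port); // sees "syncing", does nothing
```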
@@ -2,6 +2,11 @@ import { integer, sqliteTable, text } from "rivetkit/db/drizzle";
 
 // SQLite is per organization actor instance, so no organizationId column needed.
 
+/**
+ * Coordinator index of RepositoryActor instances.
+ * The organization actor is the coordinator for repositories.
+ * Rows are created/removed when repos are added/removed from the organization.
+ */
 export const repos = sqliteTable("repos", {
   repoId: text("repo_id").notNull().primaryKey(),
   remoteUrl: text("remote_url").notNull(),
@@ -9,15 +14,21 @@ export const repos = sqliteTable("repos", {
   updatedAt: integer("updated_at").notNull(),
 });
 
+/**
+ * Coordinator index of TaskActor instances.
+ * Fast taskId → repoId lookup so the organization can route requests
+ * to the correct RepositoryActor without scanning all repos.
+ */
 export const taskLookup = sqliteTable("task_lookup", {
   taskId: text("task_id").notNull().primaryKey(),
   repoId: text("repo_id").notNull(),
 });
 
 /**
- * Materialized sidebar projection maintained by task actors.
- * The source of truth still lives on each task actor; this table exists so
- * organization reads can stay local and avoid fan-out across child actors.
+ * Coordinator index of TaskActor instances — materialized sidebar projection.
+ * Task actors push summary updates to the organization actor via
+ * applyTaskSummaryUpdate(). Source of truth lives on each TaskActor;
+ * this table exists so organization reads stay local without fan-out.
  */
 export const taskSummaries = sqliteTable("task_summaries", {
   taskId: text("task_id").notNull().primaryKey(),
@@ -87,6 +98,11 @@ export const invoices = sqliteTable("invoices", {
   createdAt: integer("created_at").notNull(),
 });
 
+/**
+ * Coordinator index of AuthUserActor instances — routes session token → userId.
+ * Better Auth adapter uses this to resolve which user actor to query
+ * before the user identity is known.
+ */
 export const authSessionIndex = sqliteTable("auth_session_index", {
   sessionId: text("session_id").notNull().primaryKey(),
   sessionToken: text("session_token").notNull(),
@@ -95,12 +111,20 @@ export const authSessionIndex = sqliteTable("auth_session_index", {
   updatedAt: integer("updated_at").notNull(),
 });
 
+/**
+ * Coordinator index of AuthUserActor instances — routes email → userId.
+ * Better Auth adapter uses this to resolve which user actor to query.
+ */
 export const authEmailIndex = sqliteTable("auth_email_index", {
   email: text("email").notNull().primaryKey(),
   userId: text("user_id").notNull(),
   updatedAt: integer("updated_at").notNull(),
 });
 
+/**
+ * Coordinator index of AuthUserActor instances — routes OAuth account → userId.
+ * Better Auth adapter uses this to resolve which user actor to query.
+ */
 export const authAccountIndex = sqliteTable("auth_account_index", {
   id: text("id").notNull().primaryKey(),
   providerId: text("provider_id").notNull(),
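The auth index tables documented above exist so the coordinator can resolve a session token or email to a userId from local data before touching any user actor. A toy sketch of that routing step, using plain Maps in place of the SQLite tables (names mirror the schema, but this is not the Better Auth adapter):

```typescript
const authSessionIndex = new Map<string, string>(); // sessionToken → userId
const authEmailIndex = new Map<string, string>(); // email → userId

// Write path: indexes are maintained when a session is created.
function recordSession(sessionToken: string, userId: string, email: string): void {
  authSessionIndex.set(sessionToken, userId);
  authEmailIndex.set(email, userId);
}

// Read path: resolve which user actor to query before identity is known.
function resolveUserId(lookup: { sessionToken?: string; email?: string }): string | undefined {
  if (lookup.sessionToken) return authSessionIndex.get(lookup.sessionToken);
  if (lookup.email) return authEmailIndex.get(lookup.email);
  return undefined;
}

recordSession("tok-abc", "user-1", "a@example.com");
```

This is the same local-read discipline as the task indexes: the lookup never fans out across user actors to find a match.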
@@ -8,6 +8,13 @@ export const repoMeta = sqliteTable("repo_meta", {
   updatedAt: integer("updated_at").notNull(),
 });
 
+/**
+ * Coordinator index of TaskActor instances.
+ * The repository actor is the coordinator for tasks. Each row maps a
+ * taskId to its branch name. Used for branch conflict checking and
+ * task-by-branch lookups. Rows are inserted at task creation and
+ * updated on branch rename.
+ */
 export const taskIndex = sqliteTable("task_index", {
   taskId: text("task_id").notNull().primaryKey(),
   branchName: text("branch_name"),
@@ -37,6 +37,11 @@ export const taskRuntime = sqliteTable(
   (table) => [check("task_runtime_singleton_id_check", sql`${table.id} = 1`)],
 );
 
+/**
+ * Coordinator index of SandboxInstanceActor instances.
+ * Tracks all sandbox instances provisioned for this task. Only one
+ * is active at a time (referenced by taskRuntime.activeSandboxId).
+ */
 export const taskSandboxes = sqliteTable("task_sandboxes", {
   sandboxId: text("sandbox_id").notNull().primaryKey(),
   sandboxProviderId: text("sandbox_provider_id").notNull(),
@@ -48,6 +53,12 @@ export const taskSandboxes = sqliteTable("task_sandboxes", {
   updatedAt: integer("updated_at").notNull(),
 });
 
+/**
+ * Coordinator index of workbench sessions within this task.
+ * The task actor is the coordinator for sessions. Each row holds session
+ * metadata, model, status, transcript, and draft state. Sessions are
+ * sub-entities of the task — no separate session actor in the DB.
+ */
 export const taskWorkbenchSessions = sqliteTable("task_workbench_sessions", {
   sessionId: text("session_id").notNull().primaryKey(),
   sandboxSessionId: text("sandbox_session_id"),
@@ -386,11 +386,24 @@ async function getTaskSandboxRuntime(
   };
 }
 
-async function ensureSandboxRepo(c: any, sandbox: any, record: any): Promise<void> {
+/**
+ * Track whether the sandbox repo has been fully prepared (cloned + fetched + checked out)
+ * for the current actor lifecycle. Subsequent calls can skip the expensive `git fetch`
+ * when `skipFetch` is true (used by sendWorkbenchMessage to avoid blocking on every prompt).
+ */
+let sandboxRepoPrepared = false;
+
+async function ensureSandboxRepo(c: any, sandbox: any, record: any, opts?: { skipFetchIfPrepared?: boolean }): Promise<void> {
   if (!record.branchName) {
     throw new Error("cannot prepare a sandbox repo before the task branch exists");
   }
 
+  // If the repo was already prepared and the caller allows skipping fetch, just return.
+  // The clone, fetch, and checkout already happened on a prior call.
+  if (opts?.skipFetchIfPrepared && sandboxRepoPrepared) {
+    return;
+  }
+
   const auth = await resolveOrganizationGithubAuth(c, c.state.organizationId);
   const repository = await getOrCreateRepository(c, c.state.organizationId, c.state.repoId, c.state.repoRemote);
   const metadata = await repository.getRepositoryMetadata({});
@@ -426,6 +439,8 @@ async function ensureSandboxRepo(c: any, sandbox: any, record: any): Promise<voi
   if ((result.exitCode ?? 0) !== 0) {
     throw new Error(`sandbox repo preparation failed (${result.exitCode ?? 1}): ${[result.stdout, result.stderr].filter(Boolean).join("")}`);
   }
 
+  sandboxRepoPrepared = true;
 }
 
 async function executeInSandbox(
@@ -1191,7 +1206,9 @@ export async function sendWorkbenchMessage(c: any, sessionId: string, text: stri
   const meta = requireSendableSessionMeta(await readSessionMeta(c, sessionId), sessionId);
   const record = await ensureWorkbenchSeeded(c);
   const runtime = await getTaskSandboxRuntime(c, record);
-  await ensureSandboxRepo(c, runtime.sandbox, record);
+  // Skip git fetch on subsequent messages — the repo was already prepared during session
+  // creation. This avoids a 5-30s network round-trip to GitHub on every prompt.
+  await ensureSandboxRepo(c, runtime.sandbox, record, { skipFetchIfPrepared: true });
   const prompt = [text.trim(), ...attachments.map((attachment: any) => `@ ${attachment.filePath}:${attachment.lineNumber}\n${attachment.lineContent}`)].filter(
     Boolean,
   );
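The `skipFetchIfPrepared` change in the hunks above is a prepare-once flag: the first call pays for clone/fetch/checkout and records success; later callers that opt in skip the work, while callers that omit the option still get a fresh preparation. A standalone sketch where a counter stands in for the expensive git work:

```typescript
// Module-level flag, as in the diff: valid for the current actor lifecycle only.
let prepared = false;
let fetchCount = 0;

async function ensureRepoPrepared(opts?: { skipFetchIfPrepared?: boolean }): Promise<void> {
  if (opts?.skipFetchIfPrepared && prepared) {
    return; // Clone/fetch/checkout already happened on a prior call.
  }
  fetchCount += 1; // Stands in for the expensive clone + fetch + checkout.
  prepared = true; // Only set after preparation succeeds.
}

await ensureRepoPrepared(); // does the work
await ensureRepoPrepared({ skipFetchIfPrepared: true }); // skipped
await ensureRepoPrepared(); // caller did not opt in, so it fetches again
```

Setting the flag only after the preparation succeeds matters: a throw before that point leaves `prepared` false, so the next call retries instead of skipping a half-done setup.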
@@ -671,7 +671,7 @@ export const Sidebar = memo(function Sidebar({
   const isRunning = task.sessions.some((s) => s.status === "running");
   const isProvisioning =
     !isPullRequestItem &&
-    (String(task.status).startsWith("init_") ||
+    ((String(task.status).startsWith("init_") && task.status !== "init_complete") ||
       task.status === "new" ||
       task.sessions.some((s) => s.status === "pending_provision" || s.status === "pending_session_create"));
   const hasUnread = task.sessions.some((s) => s.unread);
@@ -810,11 +810,7 @@ export const Sidebar = memo(function Sidebar({
 
   if (item.type === "task-drop-zone") {
     const { repository, taskCount } = item;
-    const isDropTarget =
-      drag?.type === "task" &&
-      drag.repositoryId === repository.id &&
-      drag.overIdx === taskCount &&
-      drag.fromIdx !== taskCount;
+    const isDropTarget = drag?.type === "task" && drag.repositoryId === repository.id && drag.overIdx === taskCount && drag.fromIdx !== taskCount;
     return (
       <div
         key={item.key}
@@ -851,8 +847,7 @@ export const Sidebar = memo(function Sidebar({
   }
 
   if (item.type === "repository-drop-zone") {
-    const isDropTarget =
-      drag?.type === "repository" && drag.overIdx === item.repositoryCount && drag.fromIdx !== item.repositoryCount;
+    const isDropTarget = drag?.type === "repository" && drag.overIdx === item.repositoryCount && drag.fromIdx !== item.repositoryCount;
     return (
       <div
         key={item.key}
@@ -34,10 +34,13 @@ describe("describeTaskState", () => {
 });
 
 describe("isProvisioningTaskStatus", () => {
-  it("treats all init states as provisioning", () => {
+  it("treats in-progress init states as provisioning", () => {
     expect(isProvisioningTaskStatus("init_bootstrap_db")).toBe(true);
     expect(isProvisioningTaskStatus("init_ensure_name")).toBe(true);
-    expect(isProvisioningTaskStatus("init_complete")).toBe(true);
+  });
+
+  it("does not treat init_complete as provisioning (task is ready)", () => {
+    expect(isProvisioningTaskStatus("init_complete")).toBe(false);
   });
 
   it("does not treat steady-state or terminal states as provisioning", () => {
@@ -10,12 +10,7 @@ export interface TaskStateDescriptor {
 
 export function isProvisioningTaskStatus(status: TaskDisplayStatus | null | undefined): boolean {
   return (
-    status === "new" ||
-    status === "init_bootstrap_db" ||
-    status === "init_enqueue_provision" ||
-    status === "init_ensure_name" ||
-    status === "init_assert_name" ||
-    status === "init_complete"
+    status === "new" || status === "init_bootstrap_db" || status === "init_enqueue_provision" || status === "init_ensure_name" || status === "init_assert_name"
   );
 }
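The rewritten predicate above drops `init_complete` from the provisioning set, matching the sidebar change earlier in the diff. A self-contained re-derivation of that function, using a reduced stand-in for the app's real `TaskDisplayStatus` union so the behavior change is easy to check:

```typescript
// Reduced stand-in for the real union; only the statuses exercised here.
type TaskDisplayStatus =
  | "new"
  | "init_bootstrap_db"
  | "init_enqueue_provision"
  | "init_ensure_name"
  | "init_assert_name"
  | "init_complete"
  | "running"
  | "done";

function isProvisioningTaskStatus(status: TaskDisplayStatus | null | undefined): boolean {
  // init_complete is deliberately excluded: the task is ready, not provisioning.
  return (
    status === "new" || status === "init_bootstrap_db" || status === "init_enqueue_provision" || status === "init_ensure_name" || status === "init_assert_name"
  );
}
```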