update docs

Harivansh Rathi 2026-04-01 16:43:25 +00:00
parent c5be520772
commit 5bc24fa99d
11 changed files with 591 additions and 632 deletions

This file is the canonical contract for the repository.
If the planning docs, scaffold code, or future tasks disagree, this file and
[`packages/contracts`](../packages/contracts) win.
## Product default

betterNAS is self-hosted first.

For the current product shape, the user should be able to run the whole stack on
their NAS machine:

- `node-service` serves the real files over WebDAV
- `control-server` owns auth, nodes, exports, grants, and mount profiles
- `web control plane` is the browser UI over the control-server
- the local device mounts an issued WebDAV URL in Finder

Optional hosted deployments can come later. Optional Nextcloud integration can
come later.

## The core system

```text
betterNAS canonical contract

self-hosted on user's NAS
+--------------------------------------+
| [2] control-server |
| system of record |
| auth / nodes / exports / grants |
| mount sessions / audit |
+------------------+-------------------+
|
v
+--------------------------------------+
| [1] node-service |
| WebDAV export runtime |
| real file bytes |
+------------------+-------------------+
^
|
+------------------+-------------------+
| [3] web control plane |
| onboarding / management / mount UX |
+------------------+-------------------+
^
|
user browser
user local device
|
+-----------------------------------------------> Finder mount
via issued WebDAV URL
[4] optional cloud adapter
|
+--> secondary browser/mobile/share layer
not part of the core mount path
```
## Non-negotiable rules
1. `control-server` is the system of record.
2. `node-service` serves the bytes.
3. `web control plane` is a UI over `control-server`, not a second policy
backend.
4. The main data path should be `local device <-> node-service` whenever
possible.
5. `control-server` should issue access, grants, and mount profiles. It should
not become the default file proxy.
6. The self-hosted stack should work without Nextcloud.
7. Nextcloud, if used, is an optional adapter and secondary surface.
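Rules 4 and 5 together mean the control-server hands out access material that points the device straight at `node-service`. A minimal sketch, with every name and field invented for illustration (nothing here is the real contracts package):

```typescript
// Sketch of rules 4-5: the control-server issues access material pointing the
// device directly at node-service, so file bytes never transit the
// control-server. All names and fields are illustrative placeholders.

function issueMountAccess(nodeWebdavBase: string, exportPath: string, token: string) {
  return {
    // direct node-service URL; the control-server is not in the data path
    mountUrl: `${nodeWebdavBase}/${exportPath}`,
    credentialToken: token,
    expiresAt: new Date(Date.now() + 3600_000).toISOString(),
  };
}

const access = issueMountAccess("https://nas.local:8443/dav", "media", "short-lived-token");
// access.mountUrl === "https://nas.local:8443/dav/media"
```

The design point is that the issued URL lives on the node, so proxying through the control-server is an explicit fallback, never the default.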
## Canonical sources of truth
Use these in this order:

1. this file
   for boundaries, ownership, and delivery rules
2. [`packages/contracts`](../packages/contracts)
for machine-readable types, schemas, and route constants
3. the part docs:
- [`docs/01-nas-node.md`](./01-nas-node.md)
- [`docs/02-control-plane.md`](./02-control-plane.md)
- [`docs/03-local-device.md`](./03-local-device.md)
The monorepo is split into these primary implementation lanes:
- [`apps/nextcloud-app`](../apps/nextcloud-app)
- [`packages/contracts`](../packages/contracts)
The first three are core. `apps/nextcloud-app` is optional and should not drive
the main architecture.
## The contract surface we need first
The first shared contract set should cover only the seams needed for the
self-hosted mount flow.
### Node-service -> control-server
- node registration
- node heartbeat
- export inventory
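These three seams could be carried by payloads along the following lines; every type and field name below is an assumption until `packages/contracts` pins them down:

```typescript
// Hypothetical payloads for the node-service -> control-server seam.
// Field names are placeholders, not the real contracts.

interface RegisterNodeRequest {
  nodeName: string;
  webdavBaseUrl: string; // where this node serves WebDAV
  version: string;
}

interface ExportInventoryItem {
  path: string;          // export path relative to the node's WebDAV root
  readOnly: boolean;
}

interface HeartbeatRequest {
  nodeId: string;
  at: string;            // ISO timestamp
  exports: ExportInventoryItem[]; // current export inventory rides along
}

function buildHeartbeat(nodeId: string, items: ExportInventoryItem[]): HeartbeatRequest {
  return { nodeId, at: new Date().toISOString(), exports: items };
}

const hb = buildHeartbeat("node-1", [{ path: "media", readOnly: false }]);
```

The node would POST these on a schedule; route paths and retry policy are left to the control-server docs.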
### Web control plane -> control-server

- auth/session bootstrapping
- list nodes and exports
- issue mount profile
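One way to picture this seam is as a small set of request descriptors the UI builds against the control-server. The route paths below are invented for illustration; they stand in for whatever `packages/contracts` eventually exports as route constants:

```typescript
// Illustrative request descriptors for the web control plane.
// None of these paths are defined anywhere yet.

type ApiRequest = { method: "GET" | "POST"; path: string };

const controlApi = {
  bootstrapSession: (): ApiRequest => ({ method: "POST", path: "/api/session" }),
  listNodes: (): ApiRequest => ({ method: "GET", path: "/api/nodes" }),
  listExports: (): ApiRequest => ({ method: "GET", path: "/api/exports" }),
  issueMountProfile: (exportId: string): ApiRequest => ({
    method: "POST",
    path: `/api/exports/${exportId}/mount-profile`,
  }),
};

const req = controlApi.issueMountProfile("media");
```

Keeping the UI on descriptors like these (rather than hand-built URLs) is what keeps it a UI over the control-server instead of a second policy backend.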
### Local device -> control-server

- fetch mount instructions
- receive issued WebDAV URL and credentials or token material
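On the receiving end, a Mac could turn the issued material into a mount. Finder's Connect to Server (Cmd-K) accepts the URL directly; the `mount_webdav` helper shown here is one scripted alternative, and the field names are assumptions:

```typescript
// Sketch: turning issued WebDAV material into a macOS mount command.
// mount_webdav is macOS's WebDAV mount helper; Finder's "Connect to
// Server" takes the same URL interactively.

interface IssuedAccess {
  mountUrl: string;    // WebDAV URL issued by the control-server
  volumeName: string;  // desired mount point name under /Volumes
}

function mountCommand(access: IssuedAccess): string {
  return `mount_webdav ${access.mountUrl} /Volumes/${access.volumeName}`;
}

const cmd = mountCommand({ mountUrl: "https://nas.local:8443/dav/media", volumeName: "media" });
```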
### Control-server internal

- health
- version
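These internal endpoints can stay trivial; a sketch of one possible response shape (assumed, not specified anywhere):

```typescript
// Assumed response shape for the internal health/version surface.

function healthResponse(version: string) {
  return { status: "ok" as const, version };
}

const h = healthResponse("0.1.0");
```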
Shared domain shapes include:
- `StorageExport`
- `AccessGrant`
- `MountProfile`
- `AuditEvent`
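The shapes named above might render into TypeScript roughly like this. Only the names come from this doc; every property is a placeholder until `packages/contracts` defines the real fields:

```typescript
// Placeholder renderings of the shared shapes. Every field is an assumption.

interface StorageExport {
  id: string;
  nodeId: string;     // which node-service serves it
  path: string;
  readOnly: boolean;
}

interface AccessGrant {
  id: string;
  exportId: string;
  userId: string;
  permissions: ("read" | "write")[];
}

interface MountProfile {
  grantId: string;
  webdavUrl: string;  // direct node-service URL
  expiresAt: string;
}

interface AuditEvent {
  at: string;
  actor: string;
  action: string;     // e.g. "grant.issued", "mount.opened"
  subjectId: string;
}

const exampleExport: StorageExport = { id: "exp-1", nodeId: "node-1", path: "media", readOnly: false };
```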
## Parallel work boundaries
Each area gets an owner and a narrow write surface.
| Part | Owns | May read | Must not own |
| ----------------- | ------------------------------------------------ | ------------------------------ | ------------------------------ |
| node-service | NAS runtime, WebDAV serving, export reporting | contracts, control-server docs | product policy |
| control-server | domain model, grants, profile issuance, registry | everything | direct file serving by default |
| web control plane | onboarding, node/export management, mount UX | contracts, control-server docs | source of truth |
| optional adapter | Nextcloud mapping and cloud surfaces | contracts, control-server docs | core mount path |
The shared write surface across parts should stay narrow:
- [`packages/contracts`](../packages/contracts)
- this file when architecture changes
## Verification loop
This is the main loop every near-term task should support.
```text
[node-service]
serves a WebDAV export
|
v
[control-server]
registers the node and export
issues a mount profile
|
v
[web control plane]
shows the export and mount action
|
v
[local device]
mounts the issued WebDAV URL in Finder
```
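The same loop can be wired with in-memory stubs; every function and name below is invented, and only the direction of each step mirrors the diagram:

```typescript
// In-memory sketch of the verification loop. Nothing here is real API
// surface; it only mirrors the arrows above.

const exportRegistry = new Map<string, { nodeId: string; path: string }>();

// [node-service] -> [control-server]: register the node's export
function registerExport(id: string, nodeId: string, path: string) {
  exportRegistry.set(id, { nodeId, path });
}

// [control-server]: issue a mount URL pointing straight at the node
function issueMountUrl(id: string, nodeWebdavBase: string): string {
  const rec = exportRegistry.get(id);
  if (!rec) throw new Error(`unknown export: ${id}`);
  return `${nodeWebdavBase}/${rec.path}`;
}

// [web control plane] surfaces the action; [local device] mounts the URL
registerExport("media", "node-1", "media");
const mountUrl = issueMountUrl("media", "https://nas.local:8443/dav");
// mountUrl is what Finder would mount
```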
If a task does not make one of those steps more real, it is probably too early.
## Definition of done for the current foundation

The current foundation is in good shape when:

- the self-hosted stack boots locally
- the control-server can represent nodes, exports, grants, and mount profiles
- the node-service serves a real WebDAV export
- the web control plane can expose the mount flow
- a local Mac can mount the export in Finder
## Rules for future tasks and agents
1. No part may invent private request or response shapes for shared flows.
2. Contract changes must update [`packages/contracts`](../packages/contracts)
first.
3. Architecture changes must update this file in the same change.
4. Additive contract changes are preferred over breaking ones.
5. Prioritize the self-hosted mount loop before optional cloud/mobile work.