update docs

This commit is contained in:
Harivansh Rathi 2026-04-01 16:43:25 +00:00
parent c5be520772
commit 5bc24fa99d
11 changed files with 591 additions and 632 deletions

README.md
View file

@@ -1,41 +1,83 @@
# betterNAS
betterNAS is a self-hostable WebDAV stack for mounting NAS exports in Finder.
- control-plane owns policy and identity (decides)
- node-agent owns file serving (serves)
- web owns UX (consumer facing)
- nextcloud-app is an optional adapter, only for cloud storage such as S3
## Monorepo
The default product shape is:
- `apps/web`: Next.js control-plane UI
- `apps/control-plane`: Go control-plane service
- `apps/node-agent`: Go NAS runtime / WebDAV node
- `apps/nextcloud-app`: optional Nextcloud adapter
- `packages/contracts`: canonical shared contracts
- `packages/ui`: shared React UI
- `infra/docker`: local Docker runtime
Runtime roles:
- `node-service` serves the real files from the NAS over WebDAV
- `control-server` owns auth, nodes, exports, grants, and mount profile issuance
- `web control plane` lets the user manage the NAS and get mount instructions
- `macOS client` starts as native Finder WebDAV mounting, with a thin helper later
The root planning and delegation guide lives in [skeleton.md](./skeleton.md).
For now, the whole stack should be able to run on the user's NAS device.
## Current repo shape
- `apps/node-agent`
- NAS-side Go runtime and WebDAV server
- `apps/control-plane`
- Go backend for auth, registry, and mount profile issuance
- `apps/web`
- Next.js web control plane
- `apps/nextcloud-app`
- optional Nextcloud adapter, not the product center
- `packages/contracts`
- canonical shared contracts
- `infra/docker`
- self-hosted local stack
The main planning docs are:
- [docs/architecture.md](./docs/architecture.md)
- [skeleton.md](./skeleton.md)
- [docs/05-build-plan.md](./docs/05-build-plan.md)
## Default runtime model
```text
self-hosted betterNAS on the user's NAS
+------------------------------+
| web control plane |
| Next.js UI |
+--------------+---------------+
|
v
+------------------------------+
| control-server |
| auth / nodes / exports |
| grants / mount profiles |
+--------------+---------------+
|
v
+------------------------------+
| node-service |
| WebDAV + export runtime |
| real NAS bytes |
+------------------------------+
user Mac
|
+--> browser -> web control plane
|
+--> Finder -> WebDAV mount URL from control-server
```
## Verify
Static verification:
```bash
pnpm verify
```
## Runtime loop
Bootstrap clone-local runtime settings:
```bash
pnpm agent:bootstrap
```
If `.env.agent` is missing, bootstrap writes clone-local defaults for this checkout.
Bring the self-hosted stack up, verify it, and tear it down:
```bash
pnpm stack:up
@@ -43,16 +85,30 @@ pnpm stack:verify
pnpm stack:down --volumes
```
## Agent loop
Run the full loop:
```bash
pnpm agent:verify
```
Create or refresh the sibling agent clones with:
```bash
pnpm clones:setup
```
## Current end-to-end slice
The first proven slice is:
1. boot the stack with `pnpm stack:up`
2. verify it with `pnpm stack:verify`
3. get the WebDAV mount URL
4. mount it in Finder
If the stack is running on a remote machine, tunnel the WebDAV port first, then
use Finder `Connect to Server` with the tunneled URL.
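As a sketch of that tunneling step, assuming the node-service listens on port 8080 and the NAS is reachable as `admin@nas.local` over SSH (all three values are placeholders for this checkout):
```shell
# Placeholder values for this sketch: adjust user, host, and port to
# match your deployment.
DAV_PORT=8080
NAS_SSH="admin@nas.local"

# Command to run on the Mac to forward the WebDAV port over SSH:
echo "ssh -N -L ${DAV_PORT}:localhost:${DAV_PORT} ${NAS_SSH}"

# URL to enter in Finder's Connect to Server dialog once the tunnel is up:
echo "http://localhost:${DAV_PORT}/"
```
The tunnel keeps file bytes flowing directly between the Mac and the node-service, which matches the rule that the control-server stays out of the data path.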
## Product boundary
The default betterNAS product is self-hosted and WebDAV-first.
Nextcloud remains optional and secondary:
- useful later for browser/mobile/share surfaces
- not required for the core mount flow
- not the system of record

TODO.md
View file

@@ -5,7 +5,11 @@
- [x] Add root formatting, verification, and Go formatting rails.
- [x] Add hard boundary checks so apps and packages cannot drift across lanes with private imports.
- [x] Make the first contract-backed mount loop real: node registration, export inventory, mount profile issuance, and a Finder-mountable WebDAV export.
- [x] Prove the first manual remote-host WebDAV mount from a Mac over SSH tunnel.
- [ ] Surface exports and issued mount URLs in the web control plane.
- [ ] Add durable control-server storage for nodes, exports, grants, and mount profiles.
- [ ] Define the self-hosted deployment shape for the full stack on a NAS device.
- [ ] Define the Nix/module shape for installing the node-service on a NAS host.
- [ ] Decide whether the node-service should self-register or stay bootstrap-registered.
- [ ] Decide whether browser file viewing belongs in V1 web control plane or later.
- [ ] Define if and when the optional Nextcloud adapter comes back into scope.

View file

@@ -1,51 +1,51 @@
# Control
This repo is the coordination and implementation ground for betterNAS.
Use it for:
- shared contracts
- repo guardrails
- architecture and coordination
- stack verification
- implementation of the self-hosted stack
## Current product focus
The default betterNAS product is:
- self-hosted on the user's NAS
- WebDAV-first
- Finder-mountable
- managed through a web control plane
Planned clone layout:
```text
/home/rathi/Documents/GitHub/betterNAS/
betterNAS
betterNAS-runtime
betterNAS-control
betterNAS-node
```
Clone roles:
- `betterNAS`
  - main coordination repo
  - owns contracts, scripts, and shared verification rules
- `betterNAS-runtime`
  - owns Docker Compose, stack env, readiness checks, and end-to-end runtime verification
- `betterNAS-control`
  - owns the Go control plane and contract-backed API behavior
- `betterNAS-node`
  - owns the node agent, WebDAV serving, and NAS-side registration/export behavior
The main parts are:
- `node-service`
  - `apps/node-agent`
- `control-server`
  - `apps/control-plane`
- `web control plane`
  - `apps/web`
- `optional cloud adapter`
  - `apps/nextcloud-app`
## Rules
- shared interface changes land in `packages/contracts` first
- runtime verification must stay green in the main repo
- feature agents should stay inside their assigned clone unless a contract change is required
- `docs/architecture.md` is the canonical architecture contract
- the self-hosted mount flow is the critical path
- optional Nextcloud work must not drive the main architecture
## Command surface
- `pnpm verify`
  - static verification
- `pnpm stack:up`
  - boot the self-hosted stack
- `pnpm stack:verify`
  - verify the working stack
- `pnpm stack:down --volumes`
  - tear the stack down cleanly
- `pnpm agent:verify`
  - bootstrap, verify, boot, and stack-verify in one loop
- main repo creates or refreshes sibling clones with `pnpm clones:setup`
- each clone bootstraps itself with `pnpm agent:bootstrap`
Agent prompts live in:
- `docs/agents/runtime-agent.md`
- `docs/agents/control-plane-agent.md`
- `docs/agents/node-agent.md`

View file

@@ -1,6 +1,7 @@
# betterNAS Part 1: NAS Node
This document describes the software that runs on the actual NAS machine, VM,
or workstation that owns the files.
## What it is
@@ -8,10 +9,11 @@ The NAS node is the machine that actually has the storage.
It should run:
- the `node-service`
- a WebDAV server surface
- export configuration
- optional enrollment or heartbeat back to `control-server`
- later, a reproducible install path such as Docker or Nix
It should expose one or more storage exports such as:
@@ -24,47 +26,38 @@ It should expose one or more storage exports such as:
- serves the real file bytes
- exposes chosen directories over WebDAV
- registers itself with the control plane
- reports identity, health, and exports to `control-server`
- stays simple enough to self-host on a single NAS box
## What it should not do
- own product policy
- decide user access rules by itself
- become the system of record for users, grants, or shares
## Diagram
```text
self-hosted betterNAS stack

web control plane ---> control-server ---> [THIS DOC] node-service
        ^                                         |
        |                                         |
        +--------- user browser                   |
                                                  |
local Mac ----------------- Finder mount ---------+
```
## Core decisions
- The NAS node should be where WebDAV is served from whenever possible.
- The control plane should configure access, but file bytes should flow from the node to the user device as directly as possible.
- The node should be installable as one boring runtime on the user's machine.
- The node should expose exports, not product semantics.
## TODO
- Define the self-hosted install shape: Docker first, Nix second, or both.
- Define the node identity and enrollment model.
- Define the storage export model: path, label, tags, permissions, capacity.
- Define when the node self-registers vs when bootstrap tooling registers it.
- Define direct-access vs relay-access behavior for remote use.

View file

@@ -1,10 +1,11 @@
# betterNAS Part 2: Control Server
This document describes the main backend that owns product semantics and
coordinates the rest of the system.
## What it is
`control-server` is the source of truth for betterNAS.
It should own:
@@ -14,8 +15,7 @@ It should own:
- storage exports
- access grants
- mount profiles
- later, share flows and audit events
## What it does
@@ -23,35 +23,32 @@ It should own:
- tracks which NAS nodes exist
- decides who can access which export
- issues mount instructions to local devices
- drives the web control plane
- stores the operational model of the product
## What it should not do
- proxy file bytes by default
- become the only data path between the Mac and the NAS
- depend on Nextcloud as its source of truth
## Diagram
```text
self-hosted betterNAS stack

node-service <--------> [THIS DOC] control-server <--------> web control plane
     ^                               |
     |                               |
     +--------- Finder mount flow ---+
```
## Core decisions
- `control-server` is the product brain.
- It owns policy and registry, not storage bytes.
- It should stay deployable on the user's NAS in the default product shape.
- The web UI should remain a consumer of this service, not a second backend.
## Suggested first entities
@@ -61,13 +58,12 @@ It should own:
- `StorageExport`
- `AccessGrant`
- `MountProfile`
- `AuditEvent`
## TODO
- Define the first durable database schema.
- Define auth between user browser, user device, NAS node, and control-server.
- Define the API for node registration, export inventory, and mount issuance.
- Define how mount tokens or credentials are issued and rotated.
- Define what optional cloud/share integration looks like later.

View file

@@ -1,19 +1,21 @@
# betterNAS Part 3: Local Device
This document describes the software and user experience on the user's Mac or
other local device.
## What it is
The local device layer is how a user actually mounts and uses their NAS.
It should start simple:
- browser opens the web control plane
- user gets a WebDAV mount URL
- Finder mounts the export
It can later grow into:
- a small helper app
- one-click mount flows
- auto-mount at login
- status and reconnect behavior
@@ -21,52 +23,50 @@ It can later grow into:
## What it does
- authenticates the user to betterNAS
- fetches allowed mount profiles from `control-server`
- mounts approved storage exports locally
- gives the user a native-feeling way to browse files
## What it should not do
- invent its own permissions model
- hardcode node endpoints outside the control-server
- depend on the optional cloud adapter for the core mount flow
## Diagram
```text
self-hosted betterNAS stack

node-service <--------> control-server <--------> web control plane
     ^                                                  ^
     |                                                  |
     +-------------- [THIS DOC] local device -----------+
                        browser + Finder
```
## Core decisions
- V1 relies on native Finder WebDAV mounting.
- The web UI should be enough to get the user to a mountable URL.
- A lightweight helper app is likely enough before a full native client.
## User modes
### Mount mode
- user mounts a NAS export in Finder
- files are browsed as a mounted remote disk
### Browser mode
- user manages the NAS and exports in the web control plane
- optional later: browse files in the browser
## TODO
- Define the mount profile format returned by `control-server`.
- Decide whether the first UX is manual Finder flow, helper app, or both.
- Define credential handling and Keychain behavior.
- Define reconnect and auto-mount expectations.
- Define what later native client work is actually worth doing.

View file

@@ -1,69 +1,71 @@
# betterNAS Part 4: Web Control Plane and Optional Cloud Layer
This document describes the browser UI that users interact with, plus the
optional cloud adapter layer that may exist later.
## What it is
The web control plane is part of the core product.
It should provide:
- onboarding
- node and export management
- mount instructions
- sharing and browser file access later
An optional cloud adapter may later provide:
- Nextcloud-backed browser file UI
- mobile-friendly access
- share and link workflows
## What it does
- gives users a browser-based entry point into betterNAS
- talks only to `control-server`
- exposes the mount flow cleanly
- optionally layers on cloud/mobile/share behavior later
## What it should not do
- own product state separately from `control-server`
- become the only way users access their storage
- make the optional cloud adapter part of the core mount path
## Diagram
```text
self-hosted betterNAS stack

node-service <--------> control-server <--------> [THIS DOC] web control plane
     ^                         |
     |                         |
     +---- Finder mount flow --+

optional later:
  Nextcloud adapter / cloud/mobile/share surface
```
## Core decisions
- The web control plane is part of the core product now.
- Nextcloud is optional and secondary.
- The first user value is managing exports and getting a mount URL, not a full
browser file manager.
## Likely near-term role of the web control plane
- sign in
- see available NAS nodes
- see available exports
- request mount instructions
- copy or launch the WebDAV mount flow
## TODO
- Define the first user-facing screens for nodes, exports, and mount actions.
- Define how auth/session works in the web UI.
- Decide whether browser file viewing is part of V1 or follows later.
- Decide whether Nextcloud remains an internal adapter or becomes user-facing.
- Define what sharing means before adding any cloud/mobile layer.

View file

@@ -12,240 +12,151 @@ It answers four questions:
## The full system
```text
self-hosted betterNAS

[3] web control plane
+--------------------------------+
| onboarding / management / UX   |
+---------------+----------------+
                |
                v
[2] control-server
+--------------------------------+
| auth / nodes / exports         |
| grants / mount profiles        |
+---------------+----------------+
                |
                v
[1] node-service
+--------------------------------+
| WebDAV + export runtime        |
| real storage                   |
+---------------+----------------+
                ^
                |
[4] local device
+--------------------------------+
| browser + Finder mount         |
+--------------------------------+

optional later:
- Nextcloud adapter
- hosted control plane
- hosted web UI
```
## The core rule
`control-server` owns product semantics.
The other three parts are execution surfaces:
- `node-service` serves storage
- `web control plane` exposes management and mount UX
- `local device` consumes the issued mount flow
## What we steal vs write
| Part | Steal first | Write ourselves |
| ----------------- | ----------------------------------------------------------------- | ------------------------------------------------------------------- |
| node-service | Go WebDAV primitives, Docker packaging, later Nix module patterns | node runtime, export model, node enrollment |
| control-server | Go stdlib routing, pgx/sqlc, Redis helpers, OpenAPI codegen | product domain model, policy engine, mount APIs, registry |
| web control plane | Next.js app conventions, shared UI primitives | product UI, onboarding, node/export flows, mount UX |
| local device | Finder WebDAV mount flow, macOS Keychain later | helper app or mount launcher later |
| optional adapter | Nextcloud server and app template | betterNAS mapping layer if we decide to keep a cloud/mobile surface |
## Where each part should start
## 1. node-service
Start from:
- one Go binary
- one export root
- one WebDAV surface
- one deployable self-hosted runtime
Do not start by writing:
- a custom storage protocol
- a custom sync engine
- a complex relay stack
The NAS node should be boring and reproducible.
## 2. control-server
Start from:
- one API
- one durable data model
- node registration and mount profile issuance
Do not start by writing:
- microservices
- file proxying by default
- hosted-only assumptions
This is the first real thing we should build.
## 3. web control plane
Start from:
- sign in
- list nodes and exports
- show mount URL and mount instructions
Do not start by writing:
- a large browser file manager
- a second backend hidden inside Next.js
## 4. local device
Start from:
- Finder `Connect to Server`
- WebDAV mount URL issued by `control-server`
Then later add:
- one-click helper
- Keychain integration
- auto-mount at login
Do not start by writing:
- a full custom desktop sync client
- a Finder extension
- a new filesystem driver
## Recommended build order
### Phase A: make the self-hosted mount path real
1. node-service exposes a directory over WebDAV
2. control-server registers the node and its exports
3. web control plane shows the export and mount action
4. local device mounts the export in Finder
This is the shortest path to a real product loop.
### Phase B: make the product model real
1. add users, devices, NAS nodes, exports, grants, mount profiles
2. add auth and policy
3. add a simple standalone web UI for admin/control use
### Phase C: make deployment real
This is where betterNAS becomes its own product.
1. define Docker self-hosting shape
2. define Nix-based NAS host install shape
3. define remote access story for non-local usage
### Phase D: add optional adapter surfaces
1. add Nextcloud only if browser/share/mobile value justifies it
2. keep it out of the critical mount path
## Build goal for V1
V1 should prove one clean loop:
```text
user opens betterNAS web UI
-> sees a registered export
-> requests mount instructions
-> Finder mounts the WebDAV export
-> user sees and uses files from the NAS
```
If that loop works, the architecture is sound.
## TODO
- Choose the exact WebDAV server for the NAS node.
- Decide the first Nix module layout for node installation.
- Define the first database-backed control-plane entities.
- Decide whether the local device starts as documentation-only or a helper app.
- Decide when the Nextcloud cloud/web layer becomes user-facing in v1.

View file

@@ -3,61 +3,76 @@
This file is the canonical contract for the repository.
If the planning docs, scaffold code, or future tasks disagree, this file and
[`packages/contracts`](../packages/contracts) win.
## Product default
betterNAS is self-hosted first.
For the current product shape, the user should be able to run the whole stack on
their NAS machine:
- `node-service` serves the real files over WebDAV
- `control-server` owns auth, nodes, exports, grants, and mount profiles
- `web control plane` is the browser UI over the control-server
- the local device mounts an issued WebDAV URL in Finder
Optional hosted deployments can come later. Optional Nextcloud integration can
come later.
## The core system
```text
self-hosted on user's NAS

+--------------------------------------+
| [2] control-server                   |
| system of record                     |
| auth / nodes / exports / grants      |
| mount sessions / audit               |
+------------------+-------------------+
                   |
                   v
+--------------------------------------+
| [1] node-service                     |
| WebDAV export runtime                |
| real file bytes                      |
+------------------+-------------------+
                   ^
                   |
+------------------+-------------------+
| [3] web control plane                |
| onboarding / management / mount UX   |
+------------------+-------------------+
                   ^
                   |
              user browser

user local device
    |
    +---------------------------------------------> Finder mount
                                                    via issued WebDAV URL

[4] optional cloud adapter
    |
    +--> secondary browser/mobile/share layer
         not part of the core mount path
```
## Non-negotiable rules
1. `control-server` is the system of record.
2. `node-service` serves the bytes.
3. `web control plane` is a UI over `control-server`, not a second policy
backend.
4. The main data path should be `local device <-> node-service` whenever
possible.
5. `control-server` should issue access, grants, and mount profiles. It should
not become the default file proxy.
6. The self-hosted stack should work without Nextcloud.
7. Nextcloud, if used, is an optional adapter and secondary surface.
## Canonical sources of truth
Use these in this order:

1. this file
   for boundaries, ownership, and delivery rules
2. [`packages/contracts`](../packages/contracts)
for machine-readable types, schemas, and route constants
3. the part docs:
- [`docs/01-nas-node.md`](./01-nas-node.md)
- [`docs/02-control-plane.md`](./02-control-plane.md)
- [`docs/03-local-device.md`](./03-local-device.md)
The monorepo is split into these primary implementation lanes:

- [`apps/node-agent`](../apps/node-agent)
- [`apps/control-plane`](../apps/control-plane)
- [`apps/web`](../apps/web)
- [`apps/nextcloud-app`](../apps/nextcloud-app)
- [`packages/contracts`](../packages/contracts)
The first three are core. `apps/nextcloud-app` is optional and should not drive
the main architecture.
## The contract surface we need first
The first shared contract set should cover only the seams needed for the
self-hosted mount flow.
### Node-service -> control-server
- node registration
- node heartbeat
- export inventory
### Web control plane -> control-server
- auth/session bootstrapping
- list nodes and exports
- issue mount profile
- issue share or cloud profile later
### Local device -> control-server
- fetch mount instructions
- receive issued WebDAV URL and credentials or token material
### Control-server internal
- health
- version
### First shared shapes
- `StorageExport`
- `AccessGrant`
- `MountProfile`
- `CloudProfile`
- `AuditEvent`
## Parallel work boundaries
Each area gets an owner and a narrow write surface.
| Part | Owns | May read | Must not own |
| ----------------- | ------------------------------------------------ | ------------------------------ | ------------------------------ |
| node-service | NAS runtime, WebDAV serving, export reporting | contracts, control-server docs | product policy |
| control-server | domain model, grants, profile issuance, registry | everything | direct file serving by default |
| web control plane | onboarding, node/export management, mount UX | contracts, control-server docs | source of truth |
| optional adapter | Nextcloud mapping and cloud surfaces | contracts, control-server docs | core mount path |
The shared write surface across parts should stay narrow:
- [`packages/contracts`](../packages/contracts)
- this file when architecture changes
## Verification loop
This is the main loop every near-term task should support.
```text
[node-service]
  serves a WebDAV export
      |
      v
[control-server]
  registers the node and export
  issues a mount profile
      |
      v
[web control plane]
  shows the export and mount action
      |
      v
[local device]
  mounts the issued WebDAV URL in Finder
```
If a task does not make one of those steps more real, it is probably too early.
## Definition of done for the current foundation
The current foundation is in good shape when:
- the self-hosted stack boots locally
- the control-server can represent nodes, exports, grants, and mount profiles
- the node-service serves a real WebDAV export
- the web control plane can expose the mount flow
- a local Mac can mount the export in Finder
## Rules for future tasks and agents
1. No part may invent private request or response shapes for shared flows.
2. Contract changes must update [`packages/contracts`](../packages/contracts)
first.
3. Architecture changes must update this file in the same change.
4. Additive contract changes are preferred over breaking ones.
5. Prioritize the self-hosted mount loop before optional cloud/mobile work.
# betterNAS References
This file tracks the upstream repos, tools, and docs we are likely to reuse,
reference, fork from, or borrow ideas from as betterNAS evolves.
The goal is simple: do not lose the external pieces that give us leverage.
The ordering matters:
1. self-hosted WebDAV stack first
2. control-server and web control plane second
3. optional cloud adapter later
## Primary now: self-hosted DAV stack
### Node-service and WebDAV
- Go WebDAV package
- docs: https://pkg.go.dev/golang.org/x/net/webdav
- why: embeddable WebDAV implementation for the NAS runtime
- `hacdias/webdav`
- repo: https://github.com/hacdias/webdav
- why: small standalone WebDAV reference
- Apache `mod_dav`
- docs: https://httpd.apache.org/docs/current/mod/mod_dav.html
- why: standard WebDAV implementation if we want conventional infra
- `rclone serve webdav`
- repo: https://github.com/rclone/rclone
- why: useful reference for standing up WebDAV over existing storage
### Self-hosting and NAS configuration
- NixOS manual
- docs: https://nixos.org/manual/nixos/stable/
- why: host module design and declarative machine setup
- Nixpkgs
- repo: https://github.com/NixOS/nixpkgs
- why: service module and packaging reference
- Docker Compose docs
- docs: https://docs.docker.com/compose/
- why: current self-hosted runtime packaging baseline
## Primary now: control-server
### Backend and infra references
- Go routing enhancements
- docs: https://go.dev/blog/routing-enhancements
- why: low-dependency baseline for the API
- `chi`
- repo: https://github.com/go-chi/chi
- why: thin router if stdlib becomes too bare
- PostgreSQL
- docs: https://www.postgresql.org/docs/
- why: durable storage for the control-server domain model
- `pgx`
- repo: https://github.com/jackc/pgx
- why: Postgres-first Go driver
- `sqlc`
- repo: https://github.com/sqlc-dev/sqlc
- Redis
- docs: https://redis.io/docs/latest/
- why: cache, jobs, and ephemeral coordination
- `go-redis`
- repo: https://github.com/redis/go-redis
- why: primary Redis client
- `asynq`
- repo: https://github.com/hibiken/asynq
- why: practical Redis-backed background jobs
- `koanf`
- repo: https://github.com/knadh/koanf
- why: layered config if the control plane grows beyond env-only config
- `envconfig`
- repo: https://github.com/kelseyhightower/envconfig
- why: small env-only config loader
- `log/slog`
- docs: https://pkg.go.dev/log/slog
- why: structured logging without extra dependencies
- `oapi-codegen`
- repo: https://github.com/oapi-codegen/oapi-codegen
- why: generate Go and TS surfaces from OpenAPI with less drift
## Primary now: web control plane and local device
### Web control plane
- Next.js
- repo: https://github.com/vercel/next.js
- why: control-plane web UI
- Turborepo
- docs: https://turborepo.dev/repo/docs/crafting-your-repository/structuring-a-repository
- why: monorepo boundaries and task graph rules
### macOS mount UX
- Apple Finder `Connect to Server`
- docs: https://support.apple.com/en-lamr/guide/mac-help/mchlp3015/mac
- why: baseline native mount UX on macOS
- Apple Finder WebDAV mounting
- docs: https://support.apple.com/is-is/guide/mac-help/mchlp1546/mac
- why: direct WebDAV mount behavior in Finder
### macOS integration references
- Apple developer docs
- docs: https://developer.apple.com/documentation/
- why: Keychain, helper apps, launch agents, and later native integration
- Keychain data protection
- docs: https://support.apple.com/guide/security/keychain-data-protection-secb0694df1a/web
- why: baseline secret-storage model for device credentials
- Finder Sync extensions
- docs: https://developer.apple.com/library/archive/documentation/General/Conceptual/ExtensibilityPG/Finder.html
- why: future helper-app integration pattern if Finder UX grows
- WebDAV RFC 4918
- docs: https://www.rfc-editor.org/rfc/rfc4918
- why: protocol semantics and caveats
## Optional later: cloud adapter
### Nextcloud server and app references
- Nextcloud server
- repo: https://github.com/nextcloud/server
- why: optional browser/share/mobile substrate
- Nextcloud app template
- repo: https://github.com/nextcloud/app_template
- why: official starting point for the thin adapter app
- Nextcloud AppAPI / ExApps
- docs: https://docs.nextcloud.com/server/latest/admin_manual/exapps_management/AppAPIAndExternalApps.html
- why: external app integration model
### Nextcloud client references
- Nextcloud desktop
- repo: https://github.com/nextcloud/desktop
- why: Finder/cloud-drive style reference behavior
- Nextcloud iOS
- repo: https://github.com/nextcloud/ios
- why: mobile reference implementation
### Nextcloud storage and protocol references
- Nextcloud WebDAV access
- docs: https://docs.nextcloud.com/server/latest/user_manual/en/files/access_webdav.html
- why: protocol and client behavior reference
- Nextcloud external storage
- docs: https://docs.nextcloud.com/server/latest/user_manual/en/external_storage/external_storage.html
- why: storage aggregation reference
## Working rule
Use these references in this order:
1. steal primitives that solve the self-hosted DAV problem first
2. adapt them at the control-server boundary
3. only pull in optional cloud layers when the core mount product is solid
# betterNAS skeleton

This file is the root planning and delegation guide. Its job is simple:
- lock the repo shape
- lock the language per runtime
- lock the first shared contract surface
- keep the self-hosted stack clear
- make later scoped execution runs easier
## Repo shape
```text
betterNAS/
├── apps/
│   ├── web/                 # Next.js web control plane
│   ├── control-plane/       # Go control-server
│   ├── node-agent/          # Go node-service and WebDAV runtime
│   └── nextcloud-app/       # optional Nextcloud adapter
├── packages/
│   ├── contracts/           # canonical OpenAPI, schemas, TS types
│   ├── ui/                  # shared React UI
│   ├── eslint-config/       # shared lint config
│   └── typescript-config/   # shared TS config
├── infra/
│   └── docker/              # self-hosted stack for local proof
├── docs/                    # architecture and build docs
├── scripts/                 # bootstrap, verify, and stack helpers
├── go.work                  # Go workspace
├── turbo.json               # Turborepo task graph
└── skeleton.md              # this file
```
## Runtime and language choices
| Part | Language | Why |
| -------------------- | ---------------------------------- | ------------------------------------------------------------------- |
| `apps/web` | TypeScript + Next.js | fastest way to build the control-plane UI |
| `apps/control-plane` | Go | strong backend baseline, static binaries, simple self-hosting |
| `apps/node-agent` | Go | best fit for NAS runtime, WebDAV serving, and future Nix deployment |
| `apps/nextcloud-app` | PHP | native language for an optional Nextcloud adapter |
| `packages/contracts` | OpenAPI + JSON Schema + TypeScript | language-neutral source of truth with practical frontend ergonomics |
## Default deployment model
The default product story is self-hosted:
```text
self-hosted betterNAS stack on user's NAS

+--------------------------------------------+
|              web control plane             |
|         user opens this in browser         |
+-------------------+------------------------+
                    |
                    v
+--------------------------------------------+
|               control-server               |
|       auth / nodes / exports / grants      |
|           mount profile issuance           |
+-------------------+------------------------+
                    |
                    v
+--------------------------------------------+
|                node-service                |
|            WebDAV export runtime           |
|               real NAS files               |
+--------------------------------------------+

user Mac
  |
  +--> browser -> web control plane
  |
  +--> Finder -> issued WebDAV mount URL
```
Optional later shape:
- hosted control-server
- hosted web control plane
- optional Nextcloud adapter for cloud/mobile/share surfaces
Those are not required for the core betterNAS product loop.
## Canonical contract rule
The source of truth for shared interfaces is:
3. [`packages/contracts/schemas`](./packages/contracts/schemas)
4. [`packages/contracts/src`](./packages/contracts/src)
Agents must not invent shared request or response shapes outside those
locations.
## Implementation lanes
```text
           shared write surface

+----------------+--------------------------+
|           packages/contracts/             |
+----------------+--------------------------+
                 |
 +---------------+----------------+
 |               |                |
 v               v                v
node-service   control-server   web control plane

optional later:
  nextcloud adapter
```
Allowed ownership:
- node-service lane
- `apps/node-agent`
- future `infra/nix` host module work
- control-server lane
- `apps/control-plane`
- DB and queue integration code later
- web control plane lane
- `apps/web`
- optional adapter lane
- `apps/nextcloud-app`
- Nextcloud mapping logic
- shared contract lane
- `packages/contracts`
- `docs/architecture.md`
## The first verification loop
```text
[node-service]
  serves WebDAV export
      |
      v
[control-server]
  registers node + export
  issues mount profile
      |
      v
[web control plane]
  shows export and mount action
      |
      v
[local device]
  mounts in Finder
```
This is the main product loop.
## Upstream references to steal from
- Next.js backend-for-frontend guide
- https://nextjs.org/docs/app/guides/backend-for-frontend
- why: keep Next.js as UI and orchestration surface, not the source-of-truth backend
### Go control-server
- Go routing enhancements
- https://go.dev/blog/routing-enhancements
- why: stdlib-first routing baseline
- `chi`
- https://github.com/go-chi/chi
- why: minimal router if stdlib patterns become too bare
- `pgx`
- https://github.com/jackc/pgx
- why: Postgres-first Go driver
- `asynq`
- https://github.com/hibiken/asynq
- why: practical Redis-backed job system
- `log/slog`
- https://pkg.go.dev/log/slog
- why: structured logging without adding a logging framework first
- `oapi-codegen`
- https://github.com/oapi-codegen/oapi-codegen
- why: generate surfaces from OpenAPI with less drift
### Node-service and WebDAV
- Go WebDAV package
- https://pkg.go.dev/golang.org/x/net/webdav
- NixOS manual
- https://nixos.org/manual/nixos/stable/
- why: declarative host setup and service wiring
### Local mount UX
- Finder `Connect to Server`
- https://support.apple.com/en-lamr/guide/mac-help/mchlp3015/mac
- Keychain data protection
- https://support.apple.com/guide/security/keychain-data-protection-secb0694df1a/web
- why: local credential storage model
- WebDAV RFC 4918
- https://www.rfc-editor.org/rfc/rfc4918
- why: protocol semantics and edge cases
### Optional cloud adapter
- Nextcloud app template
- https://github.com/nextcloud/app_template
- why: thin adapter app reference
- AppAPI / External Apps
- https://docs.nextcloud.com/server/latest/admin_manual/exapps_management/AppAPIAndExternalApps.html
- why: official external-app integration path
- Nextcloud WebDAV docs
- https://docs.nextcloud.com/server/latest/user_manual/en/files/access_webdav.html
- why: protocol/client behavior reference
- Nextcloud external storage
- https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/external_storage_configuration_gui.html
- why: storage aggregation behavior
- Nextcloud file sharing config
- https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/file_sharing_configuration.html
- why: share semantics reference
## What we steal vs what we own
### Steal
- Go stdlib and proven Go infra libraries
- Go WebDAV implementation
- Finder native WebDAV mount UX
- optional Nextcloud adapter primitives later
### Own
- the betterNAS domain model
- the control-server API
- the node registration and export model
- the mount profile model
- the self-hosted stack wiring
- the repo contract and shared schemas
- the root `pnpm verify` loop
## The next implementation slices
1. make `apps/web` expose the real mount flow to a user
2. add durable control-server storage for nodes, exports, and grants
3. define the self-hosted NAS install shape for `apps/node-agent`
4. keep the optional cloud adapter out of the critical path