Merge pull request #4 from harivansh-afk/doc-scaaffold

doc scaaffold
This commit is contained in:
Hari 2026-03-31 22:38:13 -04:00 committed by GitHub
commit c3b5332477
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
12 changed files with 278 additions and 263 deletions

docs/01-nas-node.md Normal file

@@ -0,0 +1,68 @@
# betterNAS Part 1: NAS Node
This document describes the software that runs on the actual NAS machine, VM, or workstation that owns the files.
## What it is
The NAS node is the machine that actually has the storage.
It should run:
- a WebDAV server
- a small betterNAS node agent
- declarative config via Nix
- optional tunnel or relay connection if the machine is not directly reachable
It should expose one or more storage exports such as:
- `/data`
- `/media`
- `/backups`
- `/vm-images`
## What it does
- serves the real file bytes
- exposes chosen directories over WebDAV
- registers itself with the control plane
- reports health, identity, and available exports
- optionally keeps an outbound connection alive for remote access
## What it should not do
- own user-facing product logic
- decide permissions by itself
- become the system of record for shares, devices, or policies
## Diagram
```text
betterNAS system
local device <-------> control plane <-------> cloud/web layer
| | |
| | |
+-------------------------+--------------------------+
|
v
+---------------------------+
| [THIS DOC] NAS node |
|---------------------------|
| WebDAV server |
| node agent |
| exported directories |
| optional tunnel/relay |
+---------------------------+
```
## Core decisions
- The NAS node should be where WebDAV is served from whenever possible.
- The control plane should configure access, but file bytes should flow from the node to the user device as directly as possible.
- The node should be installable with a Nix module or flake so setup is reproducible.
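A minimal sketch of what the Nix module interface might look like once the module exists — all option and attribute names here are placeholders, pending the decisions above:

```nix
# Hypothetical betterNAS node module; every option name is a placeholder.
{
  services.betternas-node = {
    enable = true;
    exports = {
      data  = { path = "/data"; };
      media = { path = "/media"; tags = [ "video" ]; };
    };
    webdav.port = 8080;
    controlPlane.url = "https://control.example.com";
  };
}
```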
## TODO
- Choose the WebDAV server we will standardize on for the node.
- Define the node agent responsibilities and API back to the control plane.
- Define the storage export model: path, label, capacity, tags, protocol support.
- Define direct-access vs relayed-access behavior.
- Define how the node connects to the cloud/web layer for optional Nextcloud integration.

docs/02-control-plane.md Normal file

@@ -0,0 +1,72 @@
# betterNAS Part 2: Control Plane
This document describes the main backend that owns product semantics and coordinates the rest of the system.
## What it is
The control plane is the source of truth for betterNAS.
It should own:
- users
- devices
- NAS nodes
- storage exports
- access grants
- mount profiles
- cloud access profiles
- audit events
## What it does
- authenticates users and devices
- tracks which NAS nodes exist
- decides who can access which export
- issues mount instructions to local devices
- coordinates optional cloud/web access
- stores the operational model of the whole product
## What it should not do
- proxy file bytes unless absolutely necessary
- become a bottleneck in the data path
- depend on Nextcloud as its system of record
## Diagram
```text
betterNAS system
NAS node <---------> [THIS DOC] control plane <---------> local device
| | |
| | |
+---------------------------+-----------------------+-----------+
|
v
cloud/web layer
```
## Core decisions
- The control plane is the product brain.
- It should own policy and registry, not storage bytes.
- It should stay standalone even if it integrates with Nextcloud.
- It should issue access decisions, not act like a file server.
## Suggested first entities
- `User`
- `Device`
- `NasNode`
- `StorageExport`
- `AccessGrant`
- `MountProfile`
- `CloudProfile`
- `AuditEvent`
## TODO
- Define the first real domain model and database schema.
- Define auth between user device, NAS node, and control plane.
- Define the API for mount profiles and access grants.
- Define how the control plane tells the cloud/web layer what to expose.
- Define direct-access vs relay behavior for unreachable NAS nodes.

docs/03-local-device.md Normal file

@@ -0,0 +1,70 @@
# betterNAS Part 3: Local Device
This document describes the software and user experience on the user's Mac or other local device.
## What it is
The local device layer is how a user actually mounts and uses their NAS.
It can start simple:
- Finder + WebDAV mount
- manual `Connect to Server`
It can later grow into:
- a small desktop helper
- one-click mount flows
- auto-mount at login
- status and reconnect behavior
## What it does
- authenticates the user to betterNAS
- fetches allowed mount profiles from the control plane
- mounts approved storage exports locally
- gives the user a native-feeling way to browse files
## What it should not do
- invent its own permissions model
- hardcode NAS endpoints outside the control plane
- become tightly coupled to Nextcloud
## Diagram
```text
betterNAS system
NAS node <---------> control plane <---------> [THIS DOC] local device
| | |
| | |
+---------------------------+-----------------------+-----------+
|
v
cloud/web layer
```
## Core decisions
- V1 can rely on native Finder WebDAV mounting.
- A lightweight helper app is likely enough before a full custom client.
- The local device should consume mount profiles, not raw infrastructure details.
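One possible shape for the mount profile payload the device would consume — every field name is a placeholder until the format is defined in the TODO below:

```json
{
  "profile_id": "mp-001",
  "export": "media",
  "protocol": "webdav",
  "url": "https://nas-01.example.com/dav/media",
  "mode": "ro",
  "auto_mount": true
}
```

The key property is that the device sees only this profile, never raw NAS endpoints or credentials it has to assemble itself.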
## User modes
### Mount mode
- user mounts a NAS export into Finder
- files are browsed as a mounted remote disk
### Cloud mode
- user accesses the same storage through browser/mobile/cloud surfaces
- this is not the same as a mounted filesystem
## TODO
- Define the mount profile format the control plane returns.
- Decide what the first local UX is: manual Finder flow, helper app, or both.
- Define credential storage and Keychain behavior.
- Define auto-mount, reconnect, and offline expectations.
- Define how the local device hands off to the cloud/web layer when mount mode is not enough.

@@ -0,0 +1,68 @@
# betterNAS Part 4: Cloud / Web Layer
This document describes the optional browser, mobile, and cloud-drive style access layer.
## What it is
The cloud/web layer is the part of betterNAS that makes storage accessible beyond local mounts.
This is where we can reuse Nextcloud heavily for:
- browser file UI
- uploads and downloads
- sharing links
- WebDAV-based cloud access
- mobile reference behavior
## What it does
- gives users a browser-based file experience
- supports sharing and link-based access
- gives us a cloud mode in addition to mount mode
- can act as a reference surface while the main betterNAS product grows
## What it should not do
- own the product system of record
- become the only way users access storage
- swallow control-plane logic that should stay in betterNAS
## Diagram
```text
betterNAS system
NAS node <---------> control plane <---------> local device
| | |
| | |
+---------------------------+-----------------------+-----------+
|
v
+----------------------+
| [THIS DOC] cloud/web |
|----------------------|
| Nextcloud adapter |
| browser UI |
| sharing / mobile |
+----------------------+
```
## Core decisions
- The cloud/web layer is optional but very high leverage.
- Nextcloud is a strong fit here because it already gives us file UI and sharing primitives.
- It should sit beside mount mode, not replace it.
## Likely role of Nextcloud
- browser-based file UI
- share and link management
- optional mobile and cloud-drive style access
- adapter over the same storage exports the control plane knows about
## TODO
- Decide whether Nextcloud is directly user-facing in v1 or mostly an adapter behind betterNAS.
- Define how storage exports from the NAS node appear in the cloud/web layer.
- Define how shares in this layer map back to control-plane access grants.
- Define what mobile access looks like in v1.
- Define branding and how much of the cloud/web layer stays stock vs customized.

@@ -1,2 +0,0 @@
schema: spec-driven
created: 2026-03-31

@@ -1,120 +0,0 @@
## Context
betterNAS is starting as a greenfield project with a clear product boundary: we want our own storage control plane, product UX, and business logic while using Nextcloud as an upstream backend for file storage, sync primitives, sharing primitives, and existing client compatibility. The repository is effectively empty today, so this change needs to establish both the architectural stance and the initial developer scaffold.
The main constraint is maintenance ownership. Forking `nextcloud/server` would move security patches, upstream upgrade churn, and internal compatibility risk onto betterNAS too early. At the same time, pushing all product logic into a traditional Nextcloud app would make our business rules hard to evolve and tightly couple the product to the PHP monolith. The design therefore needs to leave us with a thin in-Nextcloud surface and a separate betterNAS-owned service layer.
## Goals / Non-Goals
**Goals:**
- Create a repository scaffold that supports local development with vanilla Nextcloud and betterNAS-owned services.
- Define a thin Nextcloud shell app that handles navigation, branded entry points, and backend integration hooks.
- Define a betterNAS control-plane service boundary for business logic, policy, and orchestration.
- Keep interfaces typed and explicit so future web, desktop, and iOS clients can target betterNAS services rather than Nextcloud internals.
- Make the initial architecture easy to extend without forcing a Nextcloud core fork.
**Non-Goals:**
- Implement end-user storage features such as mounts, sync semantics, or sharing workflows in this change.
- Build custom desktop or iOS clients in this change.
- Replace Nextcloud's file storage, sync engine, or existing client stack.
- Finalize long-term production deployment topology or multi-node scaling.
## Decisions
### 1. Use vanilla Nextcloud as an upstream backend, not a fork
betterNAS will run a stock Nextcloud instance in local development and future environments. We will extend it through a dedicated betterNAS app and service integrations instead of modifying core server code.
Rationale:
- Keeps upstream upgrades and security patches tractable.
- Lets us reuse mature file storage and client compatibility immediately.
- Preserves an exit ramp if we later replace parts of the backend.
Alternatives considered:
- Fork `nextcloud/server`: rejected due to long-term maintenance cost.
- Build a custom storage platform first: rejected because it delays product iteration on higher-value workflows.
### 2. Keep the Nextcloud app thin and treat it as an adapter shell
The generated Nextcloud app will own betterNAS-specific UI entry points inside Nextcloud, settings pages, and integration hooks, but SHALL NOT become the home of core business logic. It will call betterNAS-owned APIs/services for control-plane decisions.
Rationale:
- Keeps PHP app code small and replaceable.
- Makes future non-Nextcloud clients first-class instead of afterthoughts.
- Allows us to rewrite business logic without continually reshaping the shell app.
Alternatives considered:
- Put most logic directly in the app: rejected because it couples product evolution to the monolith.
### 3. Scaffold a betterNAS control-plane service from the start
The repo will include a control-plane service that exposes internal HTTP APIs, owns domain models, and encapsulates policy and orchestration logic. In the first scaffold, this service may be packaged in an ExApp-compatible container, but the code structure SHALL keep Nextcloud-specific integration at the boundary rather than in domain logic.
Rationale:
- Matches the product direction of betterNAS owning the control plane.
- Gives one place for RBAC, storage policy, and orchestration logic to live.
- Supports future desktop, iOS, and standalone web surfaces without coupling them to Nextcloud-rendered pages.
Alternatives considered:
- Delay service creation and start with only a Nextcloud app: rejected because it encourages logic to accumulate in the wrong place.
- Build multiple services immediately: rejected because one control-plane service is enough to establish the boundary.
### 4. Use a monorepo with explicit top-level boundaries
The initial scaffold will create clear top-level directories for infrastructure, app code, service code, shared contracts, docs, and scripts. The exact framework choices inside those directories can evolve, but the boundary layout should exist from day one.
Initial structure:
- `docker/`: local orchestration and container assets
- `apps/ainas-controlplane/`: generated Nextcloud shell app
- `exapps/control-plane/`: betterNAS control-plane service, packaged for Nextcloud-compatible dev flows
- `packages/contracts/`: shared schemas and API contracts
- `docs/`: architecture and product model notes
- `scripts/`: repeatable developer entry points
Rationale:
- Makes ownership and coupling visible in the filesystem.
- Supports gradual expansion into more services or clients without a repo rewrite.
- Keeps the local developer story coherent.
Alternatives considered:
- Single-app repo only: rejected because it hides important boundaries.
- Many services on day one: rejected because it adds overhead before we know the cut lines.
### 5. Standardize on a Docker-based local platform first
The first scaffold will target a Docker Compose development environment that starts Nextcloud, its required backing services, and the betterNAS control-plane service. This gives a repeatable local runtime before we decide on production deployment.
Rationale:
- Aligns with Nextcloud's easiest local development path.
- Lowers friction for bootstrapping the first app and service.
- Keeps infrastructure complexity proportional to the stage of the project.
Alternatives considered:
- Nix-only local orchestration: rejected for now because the project needs a portable first runtime.
- Production-like Kubernetes dev environment: rejected as premature.
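An illustrative Compose sketch of the stack this decision implies — image tags, service names, and the choice of Postgres as the backing database are all placeholders:

```yaml
# Illustrative only; real service names and versions are undecided.
services:
  nextcloud:
    image: nextcloud:latest
    ports: ["8080:80"]
    depends_on: [db]
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: dev-only
  control-plane:
    build: ./exapps/control-plane
    ports: ["9000:9000"]
```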
## Risks / Trade-offs
- [Nextcloud coupling leaks into betterNAS service design] → Keep all Nextcloud-specific API calls and payload translation in adapter modules at the edge of the control-plane service.
- [The shell app grows into a second control plane] → Enforce a rule that product decisions and persistent domain logic live in the control-plane service, not the Nextcloud app.
- [ExApp packaging constrains future independence] → Structure the service so container packaging is a deployment concern rather than the application architecture.
- [Initial repo layout may be wrong in details] → Optimize for a small number of strong boundaries now; revisit internal package names later without collapsing ownership boundaries.
- [Docker dev environment differs from production NAS setups] → Treat the first environment as a development harness and keep storage/network assumptions explicit in docs.
## Migration Plan
1. Add the proposal artifacts that establish the architecture and scaffold requirements.
2. Create the top-level repository layout and a Docker Compose development environment.
3. Generate the Nextcloud shell app into `apps/ainas-controlplane/`.
4. Scaffold the control-plane service and shared contracts package.
5. Verify local startup, service discovery, and basic health paths before implementing product features.
Rollback strategy:
- Because this is a greenfield scaffold, rollback is simply removing the generated directories and Compose wiring if the architectural choice changes early.
## Open Questions
- Should the first control-plane service be implemented in Go, Python, or Node/TypeScript?
- What authentication boundary should exist between the Nextcloud shell app and the control-plane service in local development?
- Which parts of future sharing and RBAC behavior should remain delegated to Nextcloud, and which should be modeled natively in betterNAS?
- Do we want the first web product surface to live inside Nextcloud pages, outside Nextcloud as a separate frontend, or both?

@@ -1,28 +0,0 @@
## Why
betterNAS needs an initial architecture and repository scaffold that lets us build our own storage control plane without inheriting the maintenance cost of a Nextcloud core fork. We want to move quickly on product-specific business logic, but still stand on top of a mature backend for files, sync, sharing primitives, and existing clients.
## What Changes
- Create an initial betterNAS platform scaffold centered on vanilla Nextcloud running in Docker for local development.
- Define a thin Nextcloud app shell that owns betterNAS-specific integration points, branded surfaces, and adapters into the Nextcloud backend.
- Define a control-plane service boundary where betterNAS business logic, policy, and future orchestration will live outside the Nextcloud monolith.
- Establish a repository layout for Docker infrastructure, Nextcloud app code, ExApp/service code, and shared API contracts.
- Document the decision to treat Nextcloud as an upstream backend dependency rather than a forked application baseline.
## Capabilities
### New Capabilities
- `workspace-scaffold`: Repository structure and local development platform for running betterNAS with Nextcloud, service containers, and shared packages.
- `nextcloud-shell-app`: Thin betterNAS app inside Nextcloud for navigation, settings, branded entry points, and backend integration hooks.
- `control-plane-service`: External betterNAS service layer that owns business logic and exposes internal APIs used by the Nextcloud shell and future clients.
### Modified Capabilities
- None.
## Impact
- Affected code: new repository layout under `docker/`, `apps/`, `exapps/`, `packages/`, `docs/`, and `scripts/`
- Affected systems: local developer workflow, Docker-based service orchestration, Nextcloud runtime, AppAPI/ExApp integration path
- Dependencies: Nextcloud, Docker Compose, AppAPI/ExApps, shared contract definitions
- APIs: new internal control-plane APIs and service boundaries for future desktop, iOS, and web clients

@@ -1,22 +0,0 @@
## ADDED Requirements
### Requirement: Dedicated control-plane service
The system SHALL provide a betterNAS-owned control-plane service that is separate from the Nextcloud shell app and owns product domain logic.
#### Scenario: betterNAS adds a new control-plane rule
- **WHEN** a new business rule for storage policy, RBAC, orchestration, or future client behavior is introduced
- **THEN** the rule MUST be implemented in the control-plane service rather than as primary logic inside the Nextcloud app
### Requirement: Client-agnostic internal API
The control-plane service SHALL expose internal APIs that can be consumed by the Nextcloud shell app and future betterNAS clients without requiring direct coupling to Nextcloud internals.
#### Scenario: New betterNAS client consumes control-plane behavior
- **WHEN** betterNAS adds a web, desktop, or iOS surface outside Nextcloud
- **THEN** that surface MUST be able to consume control-plane behavior through documented betterNAS service interfaces
### Requirement: Nextcloud backend adapter boundary
The control-plane service SHALL isolate Nextcloud-specific integration at its boundary so that storage and sharing backends remain replaceable over time.
#### Scenario: Service calls the Nextcloud backend
- **WHEN** the control-plane service needs to interact with file or sharing primitives provided by Nextcloud
- **THEN** the interaction MUST pass through a dedicated adapter boundary instead of spreading Nextcloud-specific calls across unrelated domain code

@@ -1,22 +0,0 @@
## ADDED Requirements
### Requirement: betterNAS shell app inside Nextcloud
The system SHALL provide a dedicated betterNAS shell app inside Nextcloud that establishes branded entry points for betterNAS-owned product surfaces.
#### Scenario: betterNAS surface is visible in Nextcloud
- **WHEN** the betterNAS app is installed in a local development environment
- **THEN** Nextcloud MUST expose a betterNAS-branded application surface that can be used as the integration shell for future product flows
### Requirement: Thin adapter responsibility
The betterNAS shell app SHALL act as an adapter layer and MUST keep core business logic outside the Nextcloud monolith.
#### Scenario: Product decision requires domain logic
- **WHEN** the shell app needs information about policy, orchestration, or future product rules
- **THEN** it MUST obtain that information through betterNAS-owned service boundaries instead of embedding the decision logic directly in the app
### Requirement: Nextcloud integration hooks
The betterNAS shell app SHALL provide the minimal integration hooks required to connect betterNAS-owned services to Nextcloud runtime surfaces such as navigation, settings, and backend access points.
#### Scenario: betterNAS needs a Nextcloud-native entry point
- **WHEN** betterNAS introduces a new product flow that starts from a Nextcloud-rendered page
- **THEN** the shell app MUST provide a supported hook or page boundary where the flow can enter betterNAS-controlled logic

@@ -1,22 +0,0 @@
## ADDED Requirements
### Requirement: Repository boundary scaffold
The repository SHALL provide a top-level scaffold that separates infrastructure, Nextcloud app code, betterNAS-owned service code, shared contracts, documentation, and automation scripts.
#### Scenario: Fresh clone exposes expected boundaries
- **WHEN** a developer inspects the repository after applying this change
- **THEN** the repository MUST include dedicated locations for Docker runtime assets, the Nextcloud shell app, the control-plane service, shared contracts, documentation, and scripts
### Requirement: Local development platform
The repository SHALL provide a local development runtime that starts a vanilla Nextcloud instance together with its required backing services and the betterNAS control-plane service.
#### Scenario: Developer boots the local stack
- **WHEN** a developer runs the documented local startup flow
- **THEN** the system MUST start Nextcloud and the betterNAS service dependencies without requiring a forked Nextcloud build
### Requirement: Shared contract package
The repository SHALL include a shared contract location for schemas and service interfaces used between the Nextcloud shell app and betterNAS-owned services.
#### Scenario: Interface changes are modeled centrally
- **WHEN** betterNAS defines an internal API or payload exchanged between the shell app and the control-plane service
- **THEN** the schema MUST be represented in the shared contracts location rather than duplicated ad hoc across codebases

@@ -1,27 +0,0 @@
## 1. Repository and local platform scaffold
- [x] 1.1 Create the top-level repository structure for `docker/`, `apps/`, `exapps/`, `packages/`, `docs/`, and `scripts/`
- [x] 1.2 Add a Docker Compose development stack for vanilla Nextcloud and its required backing services
- [x] 1.3 Add the betterNAS control-plane service container to the local development stack
- [x] 1.4 Add repeatable developer scripts and documentation for booting and stopping the local stack
## 2. Nextcloud shell app scaffold
- [x] 2.1 Generate the betterNAS Nextcloud app scaffold into `apps/ainas-controlplane/`
- [x] 2.2 Configure the shell app with betterNAS branding, navigation entry points, and basic settings surface
- [x] 2.3 Add an adapter layer in the shell app for calling betterNAS-owned service endpoints
- [x] 2.4 Verify the shell app installs and loads in the local Nextcloud runtime
## 3. Control-plane service scaffold
- [x] 3.1 Scaffold the betterNAS control-plane service in `exapps/control-plane/`
- [x] 3.2 Add a minimal internal HTTP API surface with health and version endpoints
- [x] 3.3 Create a dedicated Nextcloud adapter boundary inside the service for backend integrations
- [x] 3.4 Wire local service configuration so the shell app can discover and call the control-plane service
## 4. Shared contracts and verification
- [x] 4.1 Create the shared contracts package for internal API schemas and payload definitions
- [x] 4.2 Define the initial contracts used between the shell app and the control-plane service
- [x] 4.3 Document the architectural boundary that keeps business logic out of the Nextcloud app
- [x] 4.4 Verify end-to-end local startup with Nextcloud, the shell app, and the control-plane service all reachable

@@ -1,20 +0,0 @@
schema: spec-driven
# Project context (optional)
# This is shown to AI when creating artifacts.
# Add your tech stack, conventions, style guides, domain knowledge, etc.
# Example:
# context: |
# Tech stack: TypeScript, React, Node.js
# We use conventional commits
# Domain: e-commerce platform
# Per-artifact rules (optional)
# Add custom rules for specific artifacts.
# Example:
# rules:
# proposal:
# - Keep proposals under 500 words
# - Always include a "Non-goals" section
# tasks:
# - Break tasks into chunks of max 2 hours