mirror of https://github.com/harivansh-afk/betterNAS.git
synced 2026-04-15 20:03:08 +00:00

init specification 1

This commit is contained in:
parent 678ca148d5
commit cceabd1e91

7 changed files with 243 additions and 0 deletions
@@ -0,0 +1,2 @@
schema: spec-driven
created: 2026-03-31
120 openspec/changes/scaffold-nextcloud-control-plane/design.md Normal file
@@ -0,0 +1,120 @@

## Context

aiNAS is starting as a greenfield project with a clear product boundary: we want our own storage control plane, product UX, and business logic while using Nextcloud as an upstream backend for file storage, sync primitives, sharing primitives, and existing client compatibility. The repository is effectively empty today, so this change needs to establish both the architectural stance and the initial developer scaffold.

The main constraint is maintenance ownership. Forking `nextcloud/server` would move security patches, upstream upgrade churn, and internal compatibility risk onto aiNAS too early. At the same time, pushing all product logic into a traditional Nextcloud app would make our business rules hard to evolve and tightly couple the product to the PHP monolith. The design therefore needs to leave us with a thin in-Nextcloud surface and a separate aiNAS-owned service layer.

## Goals / Non-Goals

**Goals:**

- Create a repository scaffold that supports local development with vanilla Nextcloud and aiNAS-owned services.
- Define a thin Nextcloud shell app that handles navigation, branded entry points, and backend integration hooks.
- Define an aiNAS control-plane service boundary for business logic, policy, and orchestration.
- Keep interfaces typed and explicit so future web, desktop, and iOS clients can target aiNAS services rather than Nextcloud internals.
- Make the initial architecture easy to extend without forcing a Nextcloud core fork.

**Non-Goals:**

- Implement end-user storage features such as mounts, sync semantics, or sharing workflows in this change.
- Build custom desktop or iOS clients in this change.
- Replace Nextcloud's file storage, sync engine, or existing client stack.
- Finalize long-term production deployment topology or multi-node scaling.

## Decisions

### 1. Use vanilla Nextcloud as an upstream backend, not a fork

aiNAS will run a stock Nextcloud instance in local development and future environments. We will extend it through a dedicated aiNAS app and service integrations instead of modifying core server code.

Rationale:

- Keeps upstream upgrades and security patches tractable.
- Lets us reuse mature file storage and client compatibility immediately.
- Preserves an exit ramp if we later replace parts of the backend.

Alternatives considered:

- Fork `nextcloud/server`: rejected due to long-term maintenance cost.
- Build a custom storage platform first: rejected because it delays product iteration on higher-value workflows.

### 2. Keep the Nextcloud app thin and treat it as an adapter shell

The generated Nextcloud app will own aiNAS-specific UI entry points inside Nextcloud, settings pages, and integration hooks, but SHALL NOT become the home of core business logic. It will call aiNAS-owned APIs/services for control-plane decisions.

Rationale:

- Keeps PHP app code small and replaceable.
- Makes future non-Nextcloud clients first-class instead of afterthoughts.
- Allows us to rewrite business logic without continually reshaping the shell app.

Alternatives considered:

- Put most logic directly in the app: rejected because it couples product evolution to the monolith.

### 3. Scaffold an aiNAS control-plane service from the start

The repo will include a control-plane service that exposes internal HTTP APIs, owns domain models, and encapsulates policy and orchestration logic. In the first scaffold, this service may be packaged in an ExApp-compatible container, but the code structure SHALL keep Nextcloud-specific integration at the boundary rather than in domain logic.

Rationale:

- Matches the product direction of aiNAS owning the control plane.
- Gives one place for RBAC, storage policy, and orchestration logic to live.
- Supports future desktop, iOS, and standalone web surfaces without coupling them to Nextcloud-rendered pages.

Alternatives considered:

- Delay service creation and start with only a Nextcloud app: rejected because it encourages logic to accumulate in the wrong place.
- Build multiple services immediately: rejected because one control-plane service is enough to establish the boundary.
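
To make the boundary concrete, here is a minimal sketch of the internal API surface. The service language is still an open question, so TypeScript, the `/internal/health` path, and all names here are illustrative assumptions rather than decided choices; the point is that HTTP wiring stays at the edge while the handler logic stays pure:

```typescript
import { createServer, IncomingMessage, ServerResponse } from "node:http";

// Illustrative payload shape; the real contract would live in packages/contracts/.
export interface HealthStatus {
  status: "ok" | "degraded";
  version: string;
}

// Pure domain-side logic: no HTTP details, no Nextcloud specifics.
export function healthStatus(version: string): HealthStatus {
  return { status: "ok", version };
}

// Thin HTTP edge: routing and serialization only.
function handle(req: IncomingMessage, res: ServerResponse): void {
  if (req.url === "/internal/health") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(healthStatus("0.1.0")));
  } else {
    res.writeHead(404);
    res.end();
  }
}

// createServer(handle).listen(8080); // port assigned by the Compose stack
```

Keeping `healthStatus` separate from `handle` is the same separation the design asks for at larger scale: packaging (ExApp container, HTTP framework) stays a deployment concern.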

### 4. Use a monorepo with explicit top-level boundaries

The initial scaffold will create clear top-level directories for infrastructure, app code, service code, shared contracts, docs, and scripts. The exact framework choices inside those directories can evolve, but the boundary layout should exist from day one.

Initial structure:

- `docker/`: local orchestration and container assets
- `apps/ainas-controlplane/`: generated Nextcloud shell app
- `exapps/control-plane/`: aiNAS control-plane service, packaged for Nextcloud-compatible dev flows
- `packages/contracts/`: shared schemas and API contracts
- `docs/`: architecture and product model notes
- `scripts/`: repeatable developer entry points

Rationale:

- Makes ownership and coupling visible in the filesystem.
- Supports gradual expansion into more services or clients without a repo rewrite.
- Keeps the local developer story coherent.

Alternatives considered:

- Single-app repo only: rejected because it hides important boundaries.
- Many services on day one: rejected because it adds overhead before we know the cut lines.


### 5. Standardize on a Docker-based local platform first

The first scaffold will target a Docker Compose development environment that starts Nextcloud, its required backing services, and the aiNAS control-plane service. This gives a repeatable local runtime before we decide on production deployment.

Rationale:

- Aligns with Nextcloud's easiest local development path.
- Lowers friction for bootstrapping the first app and service.
- Keeps infrastructure complexity proportional to the stage of the project.

Alternatives considered:

- Nix-only local orchestration: rejected for now because the project needs a portable first runtime.
- Production-like Kubernetes dev environment: rejected as premature.

## Risks / Trade-offs

- [Nextcloud coupling leaks into aiNAS service design] → Keep all Nextcloud-specific API calls and payload translation in adapter modules at the edge of the control-plane service.
- [The shell app grows into a second control plane] → Enforce a rule that product decisions and persistent domain logic live in the control-plane service, not the Nextcloud app.
- [ExApp packaging constrains future independence] → Structure the service so container packaging is a deployment concern rather than the application architecture.
- [Initial repo layout may be wrong in details] → Optimize for a small number of strong boundaries now; revisit internal package names later without collapsing ownership boundaries.
- [Docker dev environment differs from production NAS setups] → Treat the first environment as a development harness and keep storage/network assumptions explicit in docs.

## Migration Plan

1. Add the proposal artifacts that establish the architecture and scaffold requirements.
2. Create the top-level repository layout and a Docker Compose development environment.
3. Generate the Nextcloud shell app into `apps/ainas-controlplane/`.
4. Scaffold the control-plane service and shared contracts package.
5. Verify local startup, service discovery, and basic health paths before implementing product features.
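
The startup verification in step 5 could be scripted along these lines. The ports and health path below are assumptions that depend on the eventual Compose wiring (Nextcloud's `/status.php` is its standard status endpoint; the control-plane path is hypothetical):

```typescript
// Poll each expected service once and report reachability.
export async function checkAll(urls: string[]): Promise<Map<string, boolean>> {
  const results = new Map<string, boolean>();
  for (const url of urls) {
    try {
      const res = await fetch(url);
      results.set(url, res.ok);
    } catch {
      results.set(url, false); // unreachable counts as unhealthy
    }
  }
  return results;
}

// Example invocation with placeholder ports:
// await checkAll([
//   "http://localhost:8080/status.php",      // Nextcloud status endpoint
//   "http://localhost:9000/internal/health", // hypothetical control-plane health
// ]);
```

A script like this would live under `scripts/` so the same check runs for every developer and in CI.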

Rollback strategy:

- Because this is a greenfield scaffold, rollback is simply removing the generated directories and Compose wiring if the architectural choice changes early.

## Open Questions

- Should the first control-plane service be implemented in Go, Python, or Node/TypeScript?
- What authentication boundary should exist between the Nextcloud shell app and the control-plane service in local development?
- Which parts of future sharing and RBAC behavior should remain delegated to Nextcloud, and which should be modeled natively in aiNAS?
- Do we want the first web product surface to live inside Nextcloud pages, outside Nextcloud as a separate frontend, or both?

@@ -0,0 +1,28 @@

## Why

aiNAS needs an initial architecture and repository scaffold that lets us build our own storage control plane without inheriting the maintenance cost of a Nextcloud core fork. We want to move quickly on product-specific business logic, but still stand on top of a mature backend for files, sync, sharing primitives, and existing clients.

## What Changes

- Create an initial aiNAS platform scaffold centered on vanilla Nextcloud running in Docker for local development.
- Define a thin Nextcloud app shell that owns aiNAS-specific integration points, branded surfaces, and adapters into the Nextcloud backend.
- Define a control-plane service boundary where aiNAS business logic, policy, and future orchestration will live outside the Nextcloud monolith.
- Establish a repository layout for Docker infrastructure, Nextcloud app code, ExApp/service code, and shared API contracts.
- Document the decision to treat Nextcloud as an upstream backend dependency rather than a forked application baseline.

## Capabilities

### New Capabilities

- `workspace-scaffold`: Repository structure and local development platform for running aiNAS with Nextcloud, service containers, and shared packages.
- `nextcloud-shell-app`: Thin aiNAS app inside Nextcloud for navigation, settings, branded entry points, and backend integration hooks.
- `control-plane-service`: External aiNAS service layer that owns business logic and exposes internal APIs used by the Nextcloud shell and future clients.

### Modified Capabilities

- None.

## Impact

- Affected code: new repository layout under `docker/`, `apps/`, `exapps/`, `packages/`, `docs/`, and `scripts/`
- Affected systems: local developer workflow, Docker-based service orchestration, Nextcloud runtime, AppAPI/ExApp integration path
- Dependencies: Nextcloud, Docker Compose, AppAPI/ExApps, shared contract definitions
- APIs: new internal control-plane APIs and service boundaries for future desktop, iOS, and web clients

@@ -0,0 +1,22 @@

## ADDED Requirements

### Requirement: Dedicated control-plane service

The system SHALL provide an aiNAS-owned control-plane service that is separate from the Nextcloud shell app and owns product domain logic.

#### Scenario: aiNAS adds a new control-plane rule

- **WHEN** a new business rule for storage policy, RBAC, orchestration, or future client behavior is introduced
- **THEN** the rule MUST be implemented in the control-plane service rather than as primary logic inside the Nextcloud app

### Requirement: Client-agnostic internal API

The control-plane service SHALL expose internal APIs that can be consumed by the Nextcloud shell app and future aiNAS clients without requiring direct coupling to Nextcloud internals.

#### Scenario: New aiNAS client consumes control-plane behavior

- **WHEN** aiNAS adds a web, desktop, or iOS surface outside Nextcloud
- **THEN** that surface MUST be able to consume control-plane behavior through documented aiNAS service interfaces

### Requirement: Nextcloud backend adapter boundary

The control-plane service SHALL isolate Nextcloud-specific integration at its boundary so that storage and sharing backends remain replaceable over time.

#### Scenario: Service calls the Nextcloud backend

- **WHEN** the control-plane service needs to interact with file or sharing primitives provided by Nextcloud
- **THEN** the interaction MUST pass through a dedicated adapter boundary instead of spreading Nextcloud-specific calls across unrelated domain code
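
A sketch of what this adapter boundary could look like in code (TypeScript here, though the service language is undecided; the OCS endpoint follows Nextcloud's public Share API, but treat the exact payload fields as assumptions): domain code depends on a port interface, and only the adapter knows Nextcloud's wire format.

```typescript
// Port owned by the domain layer: what the control plane needs from a backend.
export interface FileSharingPort {
  createShare(path: string, recipient: string): Promise<string>;
}

// Edge adapter: the only module allowed to know Nextcloud's OCS API shape.
export class NextcloudSharingAdapter implements FileSharingPort {
  constructor(private baseUrl: string, private basicAuth: string) {}

  async createShare(path: string, recipient: string): Promise<string> {
    const res = await fetch(
      `${this.baseUrl}/ocs/v2.php/apps/files_sharing/api/v1/shares`,
      {
        method: "POST",
        headers: {
          "OCS-APIRequest": "true",
          "Authorization": `Basic ${this.basicAuth}`,
          "Content-Type": "application/json",
        },
        // shareType 0 = share with a user in Nextcloud's Share API
        body: JSON.stringify({ path, shareType: 0, shareWith: recipient }),
      },
    );
    if (!res.ok) throw new Error(`share failed: ${res.status}`);
    const body = (await res.json()) as { ocs: { data: { id: number } } };
    return String(body.ocs.data.id);
  }
}

// Domain logic sees only the port, so the backend stays replaceable.
export async function shareWithTeam(
  port: FileSharingPort,
  path: string,
  users: string[],
): Promise<string[]> {
  return Promise.all(users.map((u) => port.createShare(path, u)));
}
```

Because `shareWithTeam` never names Nextcloud, a future backend swap only replaces the adapter class, which is exactly the scenario above.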

@@ -0,0 +1,22 @@

## ADDED Requirements

### Requirement: aiNAS shell app inside Nextcloud

The system SHALL provide a dedicated aiNAS shell app inside Nextcloud that establishes branded entry points for aiNAS-owned product surfaces.

#### Scenario: aiNAS surface is visible in Nextcloud

- **WHEN** the aiNAS app is installed in a local development environment
- **THEN** Nextcloud MUST expose an aiNAS-branded application surface that can be used as the integration shell for future product flows

### Requirement: Thin adapter responsibility

The aiNAS shell app SHALL act as an adapter layer and MUST keep core business logic outside the Nextcloud monolith.

#### Scenario: Product decision requires domain logic

- **WHEN** the shell app needs information about policy, orchestration, or future product rules
- **THEN** it MUST obtain that information through aiNAS-owned service boundaries instead of embedding the decision logic directly in the app
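
One way to picture this rule in code — a hedged sketch, assuming a TypeScript frontend inside the shell app and a hypothetical `/internal/policy/check` endpoint on the control-plane service — is that the app only builds a request and relays the answer; the decision itself never lives in the app:

```typescript
// Hypothetical contract type that would live in packages/contracts/.
export interface PolicyDecision {
  allowed: boolean;
  reason?: string;
}

// Pure request builder: trivially testable, and contains no decision logic.
export function buildPolicyRequest(action: string, subject: string) {
  return {
    path: "/internal/policy/check",
    body: JSON.stringify({ action, subject }),
  };
}

// The shell app asks the control plane and only renders the result.
export async function checkPolicy(
  baseUrl: string,
  action: string,
  subject: string,
): Promise<PolicyDecision> {
  const req = buildPolicyRequest(action, subject);
  const res = await fetch(`${baseUrl}${req.path}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: req.body,
  });
  if (!res.ok) throw new Error(`policy check failed: ${res.status}`);
  return (await res.json()) as PolicyDecision;
}
```

If a reviewer ever finds an `if (user.role === ...)` branch in the shell app instead of a call like this, the thin-adapter rule has been violated.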

### Requirement: Nextcloud integration hooks

The aiNAS shell app SHALL provide the minimal integration hooks required to connect aiNAS-owned services to Nextcloud runtime surfaces such as navigation, settings, and backend access points.

#### Scenario: aiNAS needs a Nextcloud-native entry point

- **WHEN** aiNAS introduces a new product flow that starts from a Nextcloud-rendered page
- **THEN** the shell app MUST provide a supported hook or page boundary where the flow can enter aiNAS-controlled logic

@@ -0,0 +1,22 @@

## ADDED Requirements

### Requirement: Repository boundary scaffold

The repository SHALL provide a top-level scaffold that separates infrastructure, Nextcloud app code, aiNAS-owned service code, shared contracts, documentation, and automation scripts.

#### Scenario: Fresh clone exposes expected boundaries

- **WHEN** a developer inspects the repository after applying this change
- **THEN** the repository MUST include dedicated locations for Docker runtime assets, the Nextcloud shell app, the control-plane service, shared contracts, documentation, and scripts

### Requirement: Local development platform

The repository SHALL provide a local development runtime that starts a vanilla Nextcloud instance together with its required backing services and the aiNAS control-plane service.

#### Scenario: Developer boots the local stack

- **WHEN** a developer runs the documented local startup flow
- **THEN** the system MUST start Nextcloud and the aiNAS service dependencies without requiring a forked Nextcloud build

### Requirement: Shared contract package

The repository SHALL include a shared contract location for schemas and service interfaces used between the Nextcloud shell app and aiNAS-owned services.

#### Scenario: Interface changes are modeled centrally

- **WHEN** aiNAS defines an internal API or payload exchanged between the shell app and the control-plane service
- **THEN** the schema MUST be represented in the shared contracts location rather than duplicated ad hoc across codebases
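
As a sketch of what a centrally modeled contract could look like (plain TypeScript with a hand-written guard to stay dependency-free; the field names are assumptions), both sides import the same type and the same runtime check from `packages/contracts/`:

```typescript
// Illustrative contract for a payload exchanged between the shell app
// and the control-plane service; field names are assumptions.
export interface HealthContract {
  status: "ok" | "degraded";
  version: string;
}

// Runtime guard so both producer and consumer validate at the boundary
// instead of re-describing the payload ad hoc in each codebase.
export function isHealthContract(value: unknown): value is HealthContract {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    (v.status === "ok" || v.status === "degraded") &&
    typeof v.version === "string"
  );
}
```

A schema library (for example zod or JSON Schema) could replace the hand-written guard later without moving the contract out of the shared package.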

27 openspec/changes/scaffold-nextcloud-control-plane/tasks.md Normal file

@@ -0,0 +1,27 @@

## 1. Repository and local platform scaffold

- [ ] 1.1 Create the top-level repository structure for `docker/`, `apps/`, `exapps/`, `packages/`, `docs/`, and `scripts/`
- [ ] 1.2 Add a Docker Compose development stack for vanilla Nextcloud and its required backing services
- [ ] 1.3 Add the aiNAS control-plane service container to the local development stack
- [ ] 1.4 Add repeatable developer scripts and documentation for booting and stopping the local stack

## 2. Nextcloud shell app scaffold

- [ ] 2.1 Generate the aiNAS Nextcloud app scaffold into `apps/ainas-controlplane/`
- [ ] 2.2 Configure the shell app with aiNAS branding, navigation entry points, and basic settings surface
- [ ] 2.3 Add an adapter layer in the shell app for calling aiNAS-owned service endpoints
- [ ] 2.4 Verify the shell app installs and loads in the local Nextcloud runtime

## 3. Control-plane service scaffold

- [ ] 3.1 Scaffold the aiNAS control-plane service in `exapps/control-plane/`
- [ ] 3.2 Add a minimal internal HTTP API surface with health and version endpoints
- [ ] 3.3 Create a dedicated Nextcloud adapter boundary inside the service for backend integrations
- [ ] 3.4 Wire local service configuration so the shell app can discover and call the control-plane service

## 4. Shared contracts and verification

- [ ] 4.1 Create the shared contracts package for internal API schemas and payload definitions
- [ ] 4.2 Define the initial contracts used between the shell app and the control-plane service
- [ ] 4.3 Document the architectural boundary that keeps business logic out of the Nextcloud app
- [ ] 4.4 Verify end-to-end local startup with Nextcloud, the shell app, and the control-plane service all reachable