docs(01): create phase plan

Phase 01: Core Infrastructure & Security
- 5 plans in 3 waves
- 3 parallel (Wave 1-2), 1 sequential (Wave 3)
- Ready for execution

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

parent d07a204cd5
commit 262a32673b
6 changed files with 1651 additions and 3 deletions
@@ -35,10 +35,14 @@ Decimal phases appear between their surrounding integers in numeric order.

 4. API endpoints enforce rate limiting and CSRF protection
 5. ISO builds execute in sandboxed containers (systemd-nspawn) with no host access
 6. Build environment produces deterministic ISOs (identical input = identical hash)
-**Plans**: TBD
+**Plans**: 5 plans

 Plans:
-- [ ] 01-01: TBD (during phase planning)
+- [ ] 01-01-PLAN.md — FastAPI project setup with health endpoints
+- [ ] 01-02-PLAN.md — PostgreSQL database with async SQLAlchemy and Alembic
+- [ ] 01-03-PLAN.md — Security middleware (rate limiting, CSRF, headers)
+- [ ] 01-04-PLAN.md — Caddy HTTPS and database backup automation
+- [ ] 01-05-PLAN.md — systemd-nspawn sandbox with deterministic builds

 ### Phase 2: Overlay System Foundation

 **Goal**: Layer-based configuration system with dependency tracking and composition

@@ -185,7 +189,7 @@ Phases execute in numeric order: 1 → 2 → 3 → 4 → 5 → 6 → 7 → 8 →

 | Phase | Plans Complete | Status | Completed |
 |-------|----------------|--------|-----------|
-| 1. Core Infrastructure & Security | 0/TBD | Not started | - |
+| 1. Core Infrastructure & Security | 0/5 | Planned | - |
 | 2. Overlay System Foundation | 0/TBD | Not started | - |
 | 3. Build Queue & Workers | 0/TBD | Not started | - |
 | 4. User Accounts | 0/TBD | Not started | - |
.planning/phases/01-core-infrastructure-security/01-01-PLAN.md (new file, 206 lines)

@@ -0,0 +1,206 @@
---
phase: 01-core-infrastructure-security
plan: 01
type: execute
wave: 1
depends_on: []
files_modified:
  - pyproject.toml
  - backend/app/__init__.py
  - backend/app/main.py
  - backend/app/core/__init__.py
  - backend/app/core/config.py
  - backend/app/api/__init__.py
  - backend/app/api/v1/__init__.py
  - backend/app/api/v1/router.py
  - backend/app/api/v1/endpoints/__init__.py
  - backend/app/api/v1/endpoints/health.py
  - .env.example
autonomous: true

must_haves:
  truths:
    - "FastAPI app starts without errors"
    - "Health endpoint returns 200 OK"
    - "Configuration loads from environment variables"
    - "Project dependencies install via uv"
  artifacts:
    - path: "pyproject.toml"
      provides: "Project configuration and dependencies"
      contains: "fastapi"
    - path: "backend/app/main.py"
      provides: "FastAPI application entry point"
      exports: ["app"]
    - path: "backend/app/core/config.py"
      provides: "Application configuration via pydantic-settings"
      contains: "BaseSettings"
    - path: "backend/app/api/v1/endpoints/health.py"
      provides: "Health check endpoint"
      contains: "@router.get"
  key_links:
    - from: "backend/app/main.py"
      to: "backend/app/api/v1/router.py"
      via: "include_router"
      pattern: "app\\.include_router"
    - from: "backend/app/api/v1/router.py"
      to: "backend/app/api/v1/endpoints/health.py"
      via: "include_router"
      pattern: "router\\.include_router"
---

<objective>
Establish the FastAPI backend project structure with configuration management and a basic health endpoint.

Purpose: Create the foundational Python project that all subsequent infrastructure builds upon.
Output: A runnable FastAPI application with proper project structure, dependency management via uv, and environment-based configuration.
</objective>

<execution_context>
@/home/mikkel/.claude/get-shit-done/workflows/execute-plan.md
@/home/mikkel/.claude/get-shit-done/templates/summary.md
</execution_context>

<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/phases/01-core-infrastructure-security/01-RESEARCH.md (Standard Stack section, Architecture Patterns section)
</context>

<tasks>

<task type="auto">
<name>Task 1: Initialize Python project with uv and dependencies</name>
<files>pyproject.toml, .env.example</files>
<action>
Create pyproject.toml with:
- Project name: debate-backend
- Python version: >=3.12
- Dependencies from the research standard stack:
  - fastapi[all]>=0.128.0
  - uvicorn[standard]>=0.30.0
  - sqlalchemy[asyncio]>=2.0.0
  - asyncpg<0.29.0
  - alembic
  - pydantic>=2.12.0
  - pydantic-settings
  - slowapi
  - fastapi-csrf-protect
  - python-multipart
- Dev dependencies:
  - pytest
  - pytest-asyncio
  - pytest-cov
  - httpx
  - ruff
  - mypy

Configure ruff in pyproject.toml:
- line-length = 88
- target-version = "py312"
- select = ["E", "F", "I", "N", "W", "UP"]

Create .env.example with documented environment variables:
- DATABASE_URL (postgresql+asyncpg://...)
- SECRET_KEY (for JWT/CSRF)
- ENVIRONMENT (development/production)
- DEBUG (true/false)
- ALLOWED_HOSTS (comma-separated)
- ALLOWED_ORIGINS (comma-separated, for CORS)

Initialize the project with uv: `uv venv && uv pip install -e ".[dev]"`
</action>
<verify>
Run: `cd /home/mikkel/repos/debate && uv pip list | grep -E "(fastapi|uvicorn|sqlalchemy|pydantic)"`
Expected: All core dependencies listed with correct versions.
</verify>
<done>
pyproject.toml exists with all specified dependencies, virtual environment created, packages installed.
</done>
</task>

<task type="auto">
<name>Task 2: Create FastAPI application structure with health endpoint</name>
<files>
backend/app/__init__.py
backend/app/main.py
backend/app/core/__init__.py
backend/app/core/config.py
backend/app/api/__init__.py
backend/app/api/v1/__init__.py
backend/app/api/v1/router.py
backend/app/api/v1/endpoints/__init__.py
backend/app/api/v1/endpoints/health.py
</files>
<action>
Create the directory structure following the research architecture:

```
backend/
  app/
    __init__.py
    main.py
    core/
      __init__.py
      config.py
    api/
      __init__.py
      v1/
        __init__.py
        router.py
        endpoints/
          __init__.py
          health.py
```

backend/app/core/config.py:
- Use pydantic-settings BaseSettings
- Load: database_url, secret_key, environment, debug, allowed_hosts, allowed_origins
- Parse allowed_hosts and allowed_origins as lists (comma-separated in env)
- Set sensible defaults for development

backend/app/main.py:
- Create FastAPI app with title="Debate API", version="1.0.0"
- Disable docs in production (docs_url=None if production)
- Include v1 router at /api/v1 prefix
- Add a basic health endpoint at root /health (outside the versioned API)

backend/app/api/v1/router.py:
- Create APIRouter
- Include the health endpoint router with prefix="/health", tags=["health"]

backend/app/api/v1/endpoints/health.py:
- GET /health returns {"status": "healthy"}
- GET /health/ready for readiness check (DB check will be added in the next plan)

All __init__.py files should be empty (or contain only necessary imports).
</action>
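The config.py described above can be sketched without any third-party dependencies. This is a minimal stdlib stand-in (dataclass plus `os.environ`) showing the loading and comma-splitting behaviour the plan asks for; the real file would use pydantic-settings `BaseSettings`, and all defaults below are illustrative, not prescribed by the plan.

```python
import os
from dataclasses import dataclass, field


@dataclass
class Settings:
    """Stdlib stand-in for the pydantic-settings BaseSettings the plan specifies."""
    database_url: str = "postgresql+asyncpg://debate:debate_dev@localhost:5432/debate"
    secret_key: str = "dev-secret"  # development default only; override in production
    environment: str = "development"
    debug: bool = True
    allowed_hosts: list = field(default_factory=lambda: ["localhost"])
    allowed_origins: list = field(default_factory=lambda: ["http://localhost:3000"])

    @classmethod
    def from_env(cls) -> "Settings":
        """Read overrides from the environment; comma-separated vars become lists."""
        def split_csv(name, default):
            raw = os.environ.get(name)
            return [item.strip() for item in raw.split(",")] if raw else default

        base = cls()
        return cls(
            database_url=os.environ.get("DATABASE_URL", base.database_url),
            secret_key=os.environ.get("SECRET_KEY", base.secret_key),
            environment=os.environ.get("ENVIRONMENT", base.environment),
            debug=os.environ.get("DEBUG", "true").lower() == "true",
            allowed_hosts=split_csv("ALLOWED_HOSTS", base.allowed_hosts),
            allowed_origins=split_csv("ALLOWED_ORIGINS", base.allowed_origins),
        )


os.environ["ALLOWED_HOSTS"] = "localhost,api.example.org"
settings = Settings.from_env()
print(settings.allowed_hosts)  # ['localhost', 'api.example.org']
```

pydantic-settings gives the same shape for free (type coercion, `.env` file support), which is why the plan prefers it; the sketch only makes the comma-separated-to-list contract concrete.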
<verify>
Run: `cd /home/mikkel/repos/debate && source .venv/bin/activate && uvicorn backend.app.main:app --host 0.0.0.0 --port 8000 &`
Wait 2 seconds, then: `curl -s http://localhost:8000/health | grep -q healthy && echo "Health check passed"`
Kill the server.
</verify>
<done>
FastAPI application starts, health endpoint returns {"status": "healthy"}.
</done>
</task>

</tasks>

<verification>
1. `uv pip list` shows all dependencies at correct versions
2. `ruff check backend/` passes with no errors
3. `uvicorn backend.app.main:app` starts without errors
4. `curl http://localhost:8000/health` returns 200 with {"status": "healthy"}
5. `curl http://localhost:8000/api/v1/health` returns 200
</verification>

<success_criteria>
- FastAPI backend structure exists following the research architecture
- All dependencies installed via uv
- Health endpoint responds at /health
- Configuration loads from environment (or .env file)
- ruff passes on all code
</success_criteria>

<output>
After completion, create `.planning/phases/01-core-infrastructure-security/01-01-SUMMARY.md`
</output>
.planning/phases/01-core-infrastructure-security/01-02-PLAN.md (new file, 208 lines)

@@ -0,0 +1,208 @@
---
phase: 01-core-infrastructure-security
plan: 02
type: execute
wave: 1
depends_on: []
files_modified:
  - backend/app/db/__init__.py
  - backend/app/db/base.py
  - backend/app/db/session.py
  - backend/app/db/models/__init__.py
  - backend/app/db/models/build.py
  - backend/alembic.ini
  - backend/alembic/env.py
  - backend/alembic/script.py.mako
  - backend/alembic/versions/.gitkeep
  - docker-compose.yml
autonomous: true

must_haves:
  truths:
    - "PostgreSQL container starts and accepts connections"
    - "Alembic migrations run without errors"
    - "Database session factory creates async sessions"
    - "Build model persists to database"
  artifacts:
    - path: "backend/app/db/session.py"
      provides: "Async database session factory"
      contains: "async_sessionmaker"
    - path: "backend/app/db/base.py"
      provides: "SQLAlchemy declarative base"
      contains: "DeclarativeBase"
    - path: "backend/app/db/models/build.py"
      provides: "Build tracking model"
      contains: "class Build"
    - path: "backend/alembic/env.py"
      provides: "Alembic migration environment"
      contains: "run_migrations_online"
    - path: "docker-compose.yml"
      provides: "PostgreSQL container configuration"
      contains: "postgres"
  key_links:
    - from: "backend/app/db/session.py"
      to: "backend/app/core/config.py"
      via: "settings.database_url"
      pattern: "settings\\.database_url"
    - from: "backend/alembic/env.py"
      to: "backend/app/db/base.py"
      via: "target_metadata"
      pattern: "target_metadata.*Base\\.metadata"
---

<objective>
Set up the PostgreSQL database with async SQLAlchemy, Alembic migrations, and an initial build tracking model.

Purpose: Establish the data persistence layer that tracks builds, users, and configurations.
Output: Running PostgreSQL instance, async session factory, and migration infrastructure with the initial Build model.
</objective>

<execution_context>
@/home/mikkel/.claude/get-shit-done/workflows/execute-plan.md
@/home/mikkel/.claude/get-shit-done/templates/summary.md
</execution_context>

<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/phases/01-core-infrastructure-security/01-RESEARCH.md (Pattern 1: Async Database Session Management, Code Examples: Database Migrations with Alembic)
</context>

<tasks>

<task type="auto">
<name>Task 1: Set up PostgreSQL with Docker and async session factory</name>
<files>
docker-compose.yml
backend/app/db/__init__.py
backend/app/db/base.py
backend/app/db/session.py
</files>
<action>
Create docker-compose.yml:
- PostgreSQL 18 service (postgres:18-alpine image if available, otherwise postgres:16-alpine)
- Container name: debate-postgres
- Environment: POSTGRES_USER=debate, POSTGRES_PASSWORD=debate_dev, POSTGRES_DB=debate
- Port: 5432:5432
- Volume: postgres_data for persistence
- Health check on pg_isready

backend/app/db/__init__.py:
- Empty, or re-export key items

backend/app/db/base.py:
- Create SQLAlchemy 2.0 DeclarativeBase
- Import all models (for Alembic autogenerate)
- Pattern: `class Base(DeclarativeBase): pass`

backend/app/db/session.py:
- Import settings from core.config
- Create async engine with connection pooling (from research):
  - pool_size=10
  - max_overflow=20
  - pool_timeout=30
  - pool_recycle=1800
  - pool_pre_ping=True
- Create async_sessionmaker factory
- Create `get_db` async generator dependency for FastAPI

Update .env.example (if not already done):
- DATABASE_URL=postgresql+asyncpg://debate:debate_dev@localhost:5432/debate
</action>
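The `get_db` dependency described above follows a standard shape: an async generator that yields a session and guarantees cleanup after the response. In the real session.py the session would come from `async_sessionmaker(create_async_engine(settings.database_url, pool_size=10, max_overflow=20, pool_timeout=30, pool_recycle=1800, pool_pre_ping=True))`; the sketch below substitutes a dummy session class so the lifecycle is runnable without SQLAlchemy or a database, and `FakeSession` is purely illustrative.

```python
import asyncio


class FakeSession:
    """Stand-in for sqlalchemy.ext.asyncio.AsyncSession, tracking only close state."""
    def __init__(self):
        self.closed = False

    async def close(self):
        self.closed = True

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        await self.close()


def session_factory():
    # Real code: SessionLocal = async_sessionmaker(engine, expire_on_commit=False)
    return FakeSession()


async def get_db():
    """FastAPI dependency: yield a session, guarantee it is closed afterwards."""
    async with session_factory() as session:
        yield session


async def demo():
    gen = get_db()
    session = await gen.__anext__()   # what FastAPI does when resolving Depends(get_db)
    assert not session.closed         # session is open while the request runs
    try:
        await gen.__anext__()         # FastAPI finalizes the dependency after the response
    except StopAsyncIteration:
        pass
    return session.closed


print(asyncio.run(demo()))  # True
```

The `async with` inside the generator is what makes the close unconditional: it runs even if the endpoint raises mid-request.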
<verify>
Run: `cd /home/mikkel/repos/debate && docker compose up -d`
Wait 5 seconds for postgres to start.
Run: `docker compose exec postgres pg_isready -U debate`
Expected: "accepting connections"
</verify>
<done>
PostgreSQL container running, async session factory configured with connection pooling.
</done>
</task>

<task type="auto">
<name>Task 2: Configure Alembic and create Build model</name>
<files>
backend/alembic.ini
backend/alembic/env.py
backend/alembic/script.py.mako
backend/alembic/versions/.gitkeep
backend/app/db/models/__init__.py
backend/app/db/models/build.py
</files>
<action>
Initialize Alembic in the backend directory:

```bash
cd backend && alembic init alembic
```

Modify backend/alembic.ini:
- Set script_location = alembic
- Remove sqlalchemy.url (we set it from config)

Modify backend/alembic/env.py:
- Import asyncio, async_engine_from_config
- Import settings from app.core.config
- Import Base from app.db.base (this imports all models)
- Set sqlalchemy.url from settings.database_url
- Implement run_migrations_online() as an async function (from research)
- Use asyncio.run() for async migrations

Create backend/app/db/models/__init__.py:
- Import all models for Alembic discovery

Create backend/app/db/models/build.py:
- Build model with fields:
  - id: UUID primary key (use uuid.uuid4)
  - config_hash: String(64), unique, indexed (SHA-256 of configuration)
  - status: Enum (pending, building, completed, failed, cached)
  - iso_path: Optional String (path to generated ISO)
  - error_message: Optional Text (for failed builds)
  - build_log: Optional Text (full build output)
  - started_at: DateTime (nullable, set when build starts)
  - completed_at: DateTime (nullable, set when build finishes)
  - created_at: DateTime with server default now()
  - updated_at: DateTime with onupdate
- Add index on status for queue queries
- Add index on config_hash for cache lookups

Update backend/app/db/base.py to import the Build model.

Generate and run the initial migration:

```bash
cd backend && alembic revision --autogenerate -m "Create build table"
cd backend && alembic upgrade head
```
</action>
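Two parts of this task benefit from a concrete sketch: the Build row shape, and how a 64-character config_hash can be derived so that identical configurations hit the cache. This is a plain-dataclass mirror of the columns listed above, not the SQLAlchemy model itself; the canonical-JSON hashing is one reasonable way to meet the "SHA-256 of configuration" requirement, assumed here rather than taken from the research doc.

```python
import hashlib
import json
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional


class BuildStatus(str, Enum):
    PENDING = "pending"
    BUILDING = "building"
    COMPLETED = "completed"
    FAILED = "failed"
    CACHED = "cached"


def config_hash(config: dict) -> str:
    """SHA-256 over canonical JSON, so identical configs map to one cache key."""
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()


@dataclass
class Build:
    """Plain-dataclass mirror of the SQLAlchemy model's columns."""
    config_hash: str
    id: uuid.UUID = field(default_factory=uuid.uuid4)
    status: BuildStatus = BuildStatus.PENDING
    iso_path: Optional[str] = None
    error_message: Optional[str] = None
    build_log: Optional[str] = None
    started_at: Optional[datetime] = None
    completed_at: Optional[datetime] = None
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


# Key ordering must not change the hash, or cache lookups silently miss:
a = config_hash({"packages": ["vim"], "de": "gnome"})
b = config_hash({"de": "gnome", "packages": ["vim"]})
print(a == b, len(a))  # True 64
```

`sort_keys=True` is the load-bearing detail: without canonicalization, two semantically identical configs serialize differently and defeat the `cached` status path.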
<verify>
Run: `cd /home/mikkel/repos/debate/backend && alembic current`
Expected: Shows current migration head.
Run: `docker compose exec postgres psql -U debate -d debate -c "\\dt"`
Expected: Shows "builds" table.
</verify>
<done>
Alembic configured for async, Build model created with migration applied.
</done>
</task>

</tasks>

<verification>
1. `docker compose ps` shows postgres container running and healthy
2. `cd backend && alembic current` shows migration applied
3. `docker compose exec postgres psql -U debate -d debate -c "SELECT * FROM builds LIMIT 1;"` succeeds (empty result OK)
4. `ruff check backend/app/db/` passes
5. Database has builds table with correct columns
</verification>

<success_criteria>
- PostgreSQL 18 running in Docker with health checks
- Async session factory with proper connection pooling
- Alembic configured for async migrations
- Build model exists with config_hash, status, timestamps
- Initial migration applied successfully
</success_criteria>

<output>
After completion, create `.planning/phases/01-core-infrastructure-security/01-02-SUMMARY.md`
</output>
.planning/phases/01-core-infrastructure-security/01-03-PLAN.md (new file, 189 lines)

@@ -0,0 +1,189 @@
---
phase: 01-core-infrastructure-security
plan: 03
type: execute
wave: 2
depends_on: ["01-01", "01-02"]
files_modified:
  - backend/app/main.py
  - backend/app/core/security.py
  - backend/app/api/deps.py
  - backend/app/api/v1/endpoints/health.py
autonomous: true

must_haves:
  truths:
    - "Rate limiting blocks requests exceeding 100/minute"
    - "CSRF tokens are validated on state-changing requests"
    - "Database connectivity checked in health endpoint"
    - "Security headers present in responses"
  artifacts:
    - path: "backend/app/core/security.py"
      provides: "Rate limiting and CSRF configuration"
      contains: "Limiter"
    - path: "backend/app/api/deps.py"
      provides: "FastAPI dependency injection"
      contains: "get_db"
    - path: "backend/app/main.py"
      provides: "Security middleware stack"
      contains: "TrustedHostMiddleware"
  key_links:
    - from: "backend/app/main.py"
      to: "backend/app/core/security.py"
      via: "limiter import"
      pattern: "from app\\.core\\.security import"
    - from: "backend/app/api/v1/endpoints/health.py"
      to: "backend/app/api/deps.py"
      via: "Depends(get_db)"
      pattern: "Depends\\(get_db\\)"
---

<objective>
Implement the security middleware stack with rate limiting, CSRF protection, and security headers.

Purpose: Protect the API from abuse and common web vulnerabilities (INFR-06, INFR-07).
Output: FastAPI application with layered security: rate limiting (100/min), CSRF protection, trusted hosts, CORS, and security headers.
</objective>

<execution_context>
@/home/mikkel/.claude/get-shit-done/workflows/execute-plan.md
@/home/mikkel/.claude/get-shit-done/templates/summary.md
</execution_context>

<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/phases/01-core-infrastructure-security/01-RESEARCH.md (Pattern 3: FastAPI Security Middleware Stack)
@.planning/phases/01-core-infrastructure-security/01-01-SUMMARY.md
@.planning/phases/01-core-infrastructure-security/01-02-SUMMARY.md
</context>

<tasks>

<task type="auto">
<name>Task 1: Configure rate limiting and CSRF protection</name>
<files>
backend/app/core/security.py
backend/app/api/deps.py
</files>
<action>
Create backend/app/core/security.py:
- Import and configure the slowapi Limiter:
  - key_func=get_remote_address
  - default_limits=["100/minute"]
  - storage_uri from settings (default to memory, Redis for production)
- Configure fastapi-csrf-protect CsrfProtect:
  - Create a CsrfSettings Pydantic model with:
    - secret_key from settings
    - cookie_samesite = "lax"
    - cookie_secure = True (HTTPS only)
    - cookie_httponly = True
  - Implement the @CsrfProtect.load_config decorator

Create backend/app/api/deps.py:
- Import get_db from app.db.session
- Re-export for cleaner imports in endpoints
- Create an optional dependency for CSRF validation:

```python
async def validate_csrf(csrf_protect: CsrfProtect = Depends()):
    await csrf_protect.validate_csrf_in_cookies()
```
</action>
<verify>
Run: `cd /home/mikkel/repos/debate && ruff check backend/app/core/security.py backend/app/api/deps.py`
Expected: No errors.
</verify>
<done>
Rate limiter configured at 100/minute, CSRF protection configured with secure cookie settings.
</done>
</task>

<task type="auto">
<name>Task 2: Apply security middleware to FastAPI app and update health endpoint</name>
<files>
backend/app/main.py
backend/app/api/v1/endpoints/health.py
</files>
<action>
Update backend/app/main.py with the middleware stack (order matters: first added = outermost):

1. TrustedHostMiddleware:
   - allowed_hosts from settings.allowed_hosts
   - Block requests with an invalid Host header

2. CORSMiddleware:
   - allow_origins from settings.allowed_origins
   - allow_credentials=True
   - allow_methods=["GET", "POST", "PUT", "DELETE", "OPTIONS"]
   - allow_headers=["*"]
   - max_age=600 (cache preflight for 10 min)

3. Rate limiting:
   - app.state.limiter = limiter
   - Add a RateLimitExceeded exception handler

4. Custom middleware for security headers:
   - Strict-Transport-Security: max-age=31536000; includeSubDomains
   - X-Content-Type-Options: nosniff
   - X-Frame-Options: DENY
   - X-XSS-Protection: 1; mode=block
   - Referrer-Policy: strict-origin-when-cross-origin

Update backend/app/api/v1/endpoints/health.py:
- Keep GET /health as a simple {"status": "healthy"}
- Add GET /health/db that checks database connectivity:
  - Depends on the get_db session
  - Execute a "SELECT 1" query
  - Return {"status": "healthy", "database": "connected"} on success
  - Return {"status": "unhealthy", "database": "error", "detail": str(e)} on failure
- Add the @limiter.exempt decorator to health endpoints (don't rate limit health checks)
</action>
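The security-headers middleware in step 4 can be written as a plain ASGI wrapper that injects headers into every `http.response.start` message. The sketch below is framework-free so it runs standalone; in the project it would wrap the FastAPI app (which is itself an ASGI app), and `plain_app` is a hypothetical stand-in for it.

```python
import asyncio

SECURITY_HEADERS = {
    b"strict-transport-security": b"max-age=31536000; includeSubDomains",
    b"x-content-type-options": b"nosniff",
    b"x-frame-options": b"DENY",
    b"x-xss-protection": b"1; mode=block",
    b"referrer-policy": b"strict-origin-when-cross-origin",
}


class SecurityHeadersMiddleware:
    """Pure-ASGI middleware: add the fixed headers to every HTTP response."""

    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        if scope["type"] != "http":
            await self.app(scope, receive, send)
            return

        async def send_with_headers(message):
            if message["type"] == "http.response.start":
                headers = list(message.get("headers", []))
                headers.extend(SECURITY_HEADERS.items())
                message = {**message, "headers": headers}
            await send(message)

        await self.app(scope, receive, send_with_headers)


async def plain_app(scope, receive, send):
    # Hypothetical inner app standing in for the FastAPI application.
    await send({"type": "http.response.start", "status": 200, "headers": []})
    await send({"type": "http.response.body", "body": b'{"status": "healthy"}'})


async def demo():
    sent = []

    async def send(message):
        sent.append(message)

    app = SecurityHeadersMiddleware(plain_app)
    await app({"type": "http", "method": "GET", "path": "/health"}, None, send)
    return dict(sent[0]["headers"])


headers = asyncio.run(demo())
print(headers[b"x-frame-options"])  # b'DENY'
```

In practice FastAPI's `@app.middleware("http")` decorator or Starlette's `BaseHTTPMiddleware` achieves the same effect with less ceremony; the ASGI form just makes explicit where the headers enter the response.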
<verify>
Start the server and test:

```bash
cd /home/mikkel/repos/debate
source .venv/bin/activate
uvicorn backend.app.main:app --host 0.0.0.0 --port 8000 &
sleep 2

# Test health endpoint
curl -s http://localhost:8000/health

# Test database health
curl -s http://localhost:8000/api/v1/health/db

# Test security headers
curl -sI http://localhost:8000/health | grep -E "(X-Content-Type|X-Frame|Strict-Transport)"

# Test rate limiting (make 110 requests)
for i in {1..110}; do curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000/health; done | sort | uniq -c
```

Expected: The first ~100 requests return 200, the remainder return 429 (rate limited). Note: if /health was exempted from rate limiting in Task 2, run the 110-request loop against a non-exempt endpoint instead.
Kill the server after testing.
</verify>
<done>
Security middleware applied, health endpoints check database, rate limiting blocks excess requests.
</done>
</task>

</tasks>

<verification>
1. `curl -sI http://localhost:8000/health` includes security headers (X-Content-Type-Options, X-Frame-Options)
2. `curl http://localhost:8000/api/v1/health/db` returns database status
3. Rapid requests (>100/min) return 429 Too Many Requests
4. Invalid Host header returns 400 Bad Request
5. `ruff check backend/` passes
</verification>

<success_criteria>
- Rate limiting enforced at 100 requests/minute per IP (INFR-06)
- CSRF protection configured (INFR-07)
- Security headers present in all responses
- Health endpoints verify database connectivity
- All middleware applied in correct order
</success_criteria>

<output>
After completion, create `.planning/phases/01-core-infrastructure-security/01-03-SUMMARY.md`
</output>
.planning/phases/01-core-infrastructure-security/01-04-PLAN.md (new file, 298 lines)

@@ -0,0 +1,298 @@
---
|
||||||
|
phase: 01-core-infrastructure-security
|
||||||
|
plan: 04
|
||||||
|
type: execute
|
||||||
|
wave: 2
|
||||||
|
depends_on: ["01-02"]
|
||||||
|
files_modified:
|
||||||
|
- Caddyfile
|
||||||
|
- docker-compose.yml
|
||||||
|
- scripts/backup-postgres.sh
|
||||||
|
- scripts/cron/postgres-backup
|
||||||
|
autonomous: true
|
||||||
|
|
||||||
|
must_haves:
|
||||||
|
truths:
|
||||||
|
- "HTTPS terminates at Caddy with valid certificate"
|
||||||
|
- "HTTP requests redirect to HTTPS"
|
||||||
|
- "Database backup script runs successfully"
|
||||||
|
- "Backup files are created with timestamps"
|
||||||
|
artifacts:
|
||||||
|
- path: "Caddyfile"
|
||||||
|
provides: "Caddy reverse proxy configuration"
|
||||||
|
contains: "reverse_proxy"
|
||||||
|
- path: "scripts/backup-postgres.sh"
|
||||||
|
provides: "Database backup automation"
|
||||||
|
contains: "pg_dump"
|
||||||
|
- path: "docker-compose.yml"
|
||||||
|
provides: "Caddy container configuration"
|
||||||
|
contains: "caddy"
|
||||||
|
key_links:
|
||||||
|
- from: "Caddyfile"
|
||||||
|
to: "backend/app/main.py"
|
||||||
|
via: "reverse_proxy localhost:8000"
|
||||||
|
pattern: "reverse_proxy.*localhost:8000"
|
||||||
|
- from: "scripts/backup-postgres.sh"
|
||||||
|
to: "docker-compose.yml"
|
||||||
|
via: "debate-postgres container"
|
||||||
|
pattern: "docker.*exec.*postgres"
|
||||||
|
---
|
||||||
|
|
||||||
|
<objective>
|
||||||
|
Configure Caddy for HTTPS termination and set up PostgreSQL daily backup automation.
|
||||||
|
|
||||||
|
Purpose: Ensure all traffic is encrypted (INFR-05) and user data is backed up daily (INFR-04).
|
||||||
|
Output: Caddy reverse proxy with automatic HTTPS, PostgreSQL backup script with 30-day retention.
|
||||||
|
</objective>
|
||||||
|
|
||||||
|
<execution_context>
|
||||||
|
@/home/mikkel/.claude/get-shit-done/workflows/execute-plan.md
|
||||||
|
@/home/mikkel/.claude/get-shit-done/templates/summary.md
|
||||||
|
</execution_context>
|
||||||
|
|
||||||
|
<context>
|
||||||
|
@.planning/PROJECT.md
|
||||||
|
@.planning/ROADMAP.md
|
||||||
|
@.planning/phases/01-core-infrastructure-security/01-RESEARCH.md (Pattern 2: Caddy Automatic HTTPS, Code Examples: PostgreSQL Backup Script)
|
||||||
|
@.planning/phases/01-core-infrastructure-security/01-CONTEXT.md (Backup & Recovery decisions)
|
||||||
|
@.planning/phases/01-core-infrastructure-security/01-02-SUMMARY.md
|
||||||
|
</context>
|
||||||
|
|
||||||
|
<tasks>
|
||||||
|
|
||||||
|
<task type="auto">
|
||||||
|
<name>Task 1: Configure Caddy reverse proxy with HTTPS</name>
|
||||||
|
<files>
|
||||||
|
Caddyfile
|
||||||
|
docker-compose.yml
|
||||||
|
</files>
|
||||||
|
<action>
|
||||||
|
Create Caddyfile in project root:
|
||||||
|
```caddyfile
{
	# Admin API for programmatic route management (future use for ISO downloads)
	admin localhost:2019

	# For local development, use internal CA
	# In production, Caddy auto-obtains Let's Encrypt certs
}

# Development configuration (localhost)
:443 {
	tls internal  # Self-signed for local dev

	# Reverse proxy to FastAPI
	reverse_proxy localhost:8000 {
		health_uri /health
		health_interval 10s
		health_timeout 5s
	}

	# Security headers (supplement FastAPI's headers)
	header {
		Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
		X-Content-Type-Options "nosniff"
		X-Frame-Options "DENY"
	}

	# Access logging
	log {
		output file /var/log/caddy/access.log {
			roll_size 100mb
			roll_keep 10
		}
		format json
	}
}

# HTTP to HTTPS redirect
:80 {
	redir https://{host}{uri} permanent
}
```

Update docker-compose.yml to add Caddy service:

```yaml
services:
  caddy:
    image: caddy:2-alpine
    container_name: debate-caddy
    restart: unless-stopped
    # Host networking lets Caddy reach FastAPI on localhost:8000 and
    # exposes 80/443 directly; port mappings are invalid alongside
    # network_mode: host, so none are declared. The admin API stays on
    # localhost:2019 as configured in the Caddyfile.
    network_mode: host
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
      - caddy_logs:/var/log/caddy

volumes:
  caddy_data:
  caddy_config:
  caddy_logs:
```

Note: For development, Caddy uses self-signed certs (`tls internal`).
For production, replace `:443` with the actual domain and remove `tls internal`.
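The production variant mentioned in the note can be sketched as follows (the domain `debate.example.com` is a placeholder taken from the profile metadata, not a confirmed deployment target; with a real domain, Caddy obtains and renews Let's Encrypt certificates automatically):

```caddyfile
debate.example.com {
	# No `tls internal` here: Caddy provisions public certificates itself
	reverse_proxy localhost:8000 {
		health_uri /health
		health_interval 10s
		health_timeout 5s
	}

	header {
		Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
		X-Content-Type-Options "nosniff"
		X-Frame-Options "DENY"
	}
}
```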
</action>
<verify>
Run:
```bash
cd /home/mikkel/repos/debate
docker compose up -d caddy
sleep 3
# Test HTTPS (allow self-signed cert)
curl -sk https://localhost/health
# Test HTTP redirect
curl -sI http://localhost | grep -i location
```
Expected: HTTPS returns the health response, HTTP redirects to HTTPS.
</verify>
<done>
Caddy running with HTTPS termination, HTTP redirects to HTTPS.
</done>
</task>

<task type="auto">
<name>Task 2: Create PostgreSQL backup script with retention</name>
<files>
scripts/backup-postgres.sh
scripts/cron/postgres-backup
</files>
<action>
Create scripts/backup-postgres.sh:
```bash
#!/bin/bash
# PostgreSQL backup script for Debate platform
# Runs daily, keeps 30 days of backups
# Verifies backup integrity after creation

set -euo pipefail

# Configuration
BACKUP_DIR="${BACKUP_DIR:-/var/backups/debate/postgres}"
RETENTION_DAYS="${RETENTION_DAYS:-30}"
CONTAINER_NAME="${CONTAINER_NAME:-debate-postgres}"
DB_NAME="${DB_NAME:-debate}"
DB_USER="${DB_USER:-debate}"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/${DB_NAME}_${TIMESTAMP}.dump"

# Logging
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1"
}

# Create backup directory
mkdir -p "$BACKUP_DIR"

log "Starting backup of database: $DB_NAME"

# Create backup using pg_dump custom format (-Fc)
# Custom format is compressed and allows selective restore.
# Only stdout goes into the dump file; verbose output (-v) is written
# to stderr so it lands in the cron log, not in the archive.
docker exec "$CONTAINER_NAME" pg_dump \
    -U "$DB_USER" \
    -Fc \
    -b \
    -v \
    "$DB_NAME" > "$BACKUP_FILE"

log "Backup created: $BACKUP_FILE"

# Verify backup integrity
# The dump lives on the host, so feed it to pg_restore via stdin
log "Verifying backup integrity..."
docker exec -i "$CONTAINER_NAME" pg_restore \
    --list < "$BACKUP_FILE" > /dev/null 2>&1 || {
    log "ERROR: Backup verification failed!"
    rm -f "$BACKUP_FILE"
    exit 1
}

# Get backup size
BACKUP_SIZE=$(du -h "$BACKUP_FILE" | cut -f1)
log "Backup size: $BACKUP_SIZE"

# Compress (pg_dump -Fc already compresses, but gzip shaves a bit more)
gzip -f "$BACKUP_FILE"
log "Compressed: ${BACKUP_FILE}.gz"

# Clean up old backups
log "Removing backups older than $RETENTION_DAYS days..."
find "$BACKUP_DIR" -name "${DB_NAME}_*.dump.gz" -mtime +"$RETENTION_DAYS" -delete
REMAINING=$(find "$BACKUP_DIR" -name "${DB_NAME}_*.dump.gz" | wc -l)
log "Remaining backups: $REMAINING"

# Weekly restore test (every Monday)
if [ "$(date +%u)" -eq 1 ]; then
    log "Running weekly restore test..."
    TEST_DB="${DB_NAME}_backup_test"

    # Create test database
    docker exec "$CONTAINER_NAME" createdb -U "$DB_USER" "$TEST_DB" 2>/dev/null || true

    # Restore to test database
    gunzip -c "${BACKUP_FILE}.gz" | docker exec -i "$CONTAINER_NAME" pg_restore \
        -U "$DB_USER" \
        -d "$TEST_DB" \
        --clean \
        --if-exists 2>&1 || true

    # Drop test database
    docker exec "$CONTAINER_NAME" dropdb -U "$DB_USER" "$TEST_DB" 2>/dev/null || true

    log "Weekly restore test completed"
fi

log "Backup completed successfully"
```

Make executable: `chmod +x scripts/backup-postgres.sh`

Create scripts/cron/postgres-backup:
```
# PostgreSQL daily backup at 2 AM
0 2 * * * /home/mikkel/repos/debate/scripts/backup-postgres.sh >> /var/log/debate/postgres-backup.log 2>&1
```

Create a .gitignore entry for backup files (they shouldn't be in the repo).
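A minimal sketch of those ignore entries (illustrative; the script writes outside the repo by default, so this is defensive for local test runs):

```
# PostgreSQL backup dumps (never commit)
*.dump
*.dump.gz
```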
</action>
<verify>
Run:
```bash
cd /home/mikkel/repos/debate
mkdir -p /tmp/debate-backups
BACKUP_DIR=/tmp/debate-backups ./scripts/backup-postgres.sh
ls -la /tmp/debate-backups/
```
Expected: Backup file created with .dump.gz extension.
</verify>
<done>
Backup script creates compressed PostgreSQL dumps, verifies integrity, maintains 30-day retention.
</done>
</task>

</tasks>

<verification>
1. `curl -sk https://localhost/health` returns healthy through Caddy
2. `curl -sI http://localhost | grep -i location` shows HTTPS redirect
3. `./scripts/backup-postgres.sh` creates backup successfully
4. Backup file is compressed and verifiable
5. Old backups (>30 days) would be deleted by retention logic
</verification>

<success_criteria>
- All traffic flows through HTTPS via Caddy (INFR-05)
- HTTP requests redirect to HTTPS
- Caddy health checks FastAPI backend
- Daily backup script exists with 30-day retention (INFR-04)
- Backup integrity verified after creation
- Weekly restore test configured
</success_criteria>

<output>
After completion, create `.planning/phases/01-core-infrastructure-security/01-04-SUMMARY.md`
</output>

743 .planning/phases/01-core-infrastructure-security/01-05-PLAN.md Normal file
@@ -0,0 +1,743 @@

---
phase: 01-core-infrastructure-security
plan: 05
type: execute
wave: 3
depends_on: ["01-01", "01-02"]
files_modified:
  - backend/app/services/__init__.py
  - backend/app/services/sandbox.py
  - backend/app/services/deterministic.py
  - backend/app/services/build.py
  - scripts/setup-sandbox.sh
  - tests/test_deterministic.py
autonomous: true

must_haves:
  truths:
    - "Sandbox creates isolated systemd-nspawn container"
    - "Build commands execute with no network access"
    - "Same configuration produces identical hash"
    - "SOURCE_DATE_EPOCH is set for all builds"
  artifacts:
    - path: "backend/app/services/sandbox.py"
      provides: "systemd-nspawn sandbox management"
      contains: "systemd-nspawn"
    - path: "backend/app/services/deterministic.py"
      provides: "Deterministic build configuration"
      contains: "SOURCE_DATE_EPOCH"
    - path: "backend/app/services/build.py"
      provides: "Build orchestration service"
      contains: "class BuildService"
    - path: "scripts/setup-sandbox.sh"
      provides: "Sandbox environment initialization"
      contains: "pacstrap"
  key_links:
    - from: "backend/app/services/build.py"
      to: "backend/app/services/sandbox.py"
      via: "BuildSandbox import"
      pattern: "from.*sandbox import"
    - from: "backend/app/services/build.py"
      to: "backend/app/services/deterministic.py"
      via: "DeterministicBuildConfig import"
      pattern: "from.*deterministic import"
---

<objective>
Implement systemd-nspawn build sandbox with deterministic configuration for reproducible ISO builds.

Purpose: Ensure ISO builds are isolated from the host (ISO-04) and produce identical output for the same input (determinism for caching).
Output: Sandbox service that creates isolated containers, deterministic build configuration with hash generation.
</objective>

<execution_context>
@/home/mikkel/.claude/get-shit-done/workflows/execute-plan.md
@/home/mikkel/.claude/get-shit-done/templates/summary.md
</execution_context>

<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/phases/01-core-infrastructure-security/01-RESEARCH.md (Pattern 4: systemd-nspawn Build Sandbox, Pattern 5: Deterministic Build Configuration)
@.planning/phases/01-core-infrastructure-security/01-CONTEXT.md (Sandbox Strictness, Determinism Approach decisions)
@.planning/phases/01-core-infrastructure-security/01-01-SUMMARY.md
@.planning/phases/01-core-infrastructure-security/01-02-SUMMARY.md
</context>

<tasks>

<task type="auto">
<name>Task 1: Create sandbox setup script and sandbox service</name>
<files>
scripts/setup-sandbox.sh
backend/app/services/__init__.py
backend/app/services/sandbox.py
</files>
<action>
Create scripts/setup-sandbox.sh:
```bash
#!/bin/bash
# Initialize sandbox environment for ISO builds
# Run once to create base container image

set -euo pipefail

SANDBOX_ROOT="${SANDBOX_ROOT:-/var/lib/debate/sandbox}"
SANDBOX_BASE="${SANDBOX_ROOT}/base"
ALLOWED_MIRRORS=(
    "https://geo.mirror.pkgbuild.com/\$repo/os/\$arch"
    "https://mirror.cachyos.org/repo/\$arch/\$repo"
)

log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1"
}

# Check prerequisites
if ! command -v pacstrap &> /dev/null; then
    log "ERROR: pacstrap not found. Install arch-install-scripts package."
    exit 1
fi

if ! command -v systemd-nspawn &> /dev/null; then
    log "ERROR: systemd-nspawn not found. Install systemd-container package."
    exit 1
fi

# Create sandbox directories
log "Creating sandbox directories..."
mkdir -p "$SANDBOX_ROOT"/{base,builds,cache}

# Bootstrap base Arch environment
if [ ! -d "$SANDBOX_BASE/usr" ]; then
    log "Bootstrapping base Arch Linux environment..."
    pacstrap -c -G -M "$SANDBOX_BASE" base archiso

    # Configure mirrors (whitelist only)
    log "Configuring mirrors..."
    MIRRORLIST="$SANDBOX_BASE/etc/pacman.d/mirrorlist"
    : > "$MIRRORLIST"
    for mirror in "${ALLOWED_MIRRORS[@]}"; do
        echo "Server = $mirror" >> "$MIRRORLIST"
    done

    # Set fixed locale for determinism
    echo "en_US.UTF-8 UTF-8" > "$SANDBOX_BASE/etc/locale.gen"
    systemd-nspawn -D "$SANDBOX_BASE" locale-gen

    log "Base environment created at $SANDBOX_BASE"
else
    log "Base environment already exists at $SANDBOX_BASE"
fi

log "Sandbox setup complete"
```

Create backend/app/services/__init__.py:
- Empty or import key services
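If the re-export option is chosen, a minimal sketch (assumes the three service modules planned below exist; an empty file is equally valid):

```python
# backend/app/services/__init__.py
# Optional re-exports so callers can write `from app.services import BuildService`.
from app.services.sandbox import BuildSandbox, SandboxConfig
from app.services.deterministic import DeterministicBuildConfig
from app.services.build import BuildService

__all__ = ["BuildSandbox", "SandboxConfig", "DeterministicBuildConfig", "BuildService"]
```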

Create backend/app/services/sandbox.py:
```python
"""
systemd-nspawn sandbox for isolated ISO builds.

Security measures:
- --private-network: No network access (packages pre-cached in base)
- --read-only: Immutable root filesystem
- --tmpfs: Writable temp directories only
- --capability: Minimal capabilities for mkarchiso
- Resource limits: 8GB RAM, 4 cores (from CONTEXT.md)
"""

import asyncio
import shutil
from dataclasses import dataclass
from pathlib import Path
from typing import Optional

from app.core.config import settings


@dataclass
class SandboxConfig:
    """Configuration for sandbox execution."""
    memory_limit: str = "8G"
    cpu_quota: str = "400%"  # 4 cores
    timeout_seconds: int = 1200  # 20 minutes (with 15min warning)
    warning_seconds: int = 900  # 15 minutes


class BuildSandbox:
    """Manages systemd-nspawn sandboxed build environments."""

    def __init__(
        self,
        sandbox_root: Optional[Path] = None,
        config: Optional[SandboxConfig] = None
    ):
        self.sandbox_root = sandbox_root or Path(settings.sandbox_root)
        self.base_path = self.sandbox_root / "base"
        self.builds_path = self.sandbox_root / "builds"
        self.config = config or SandboxConfig()

    async def create_build_container(self, build_id: str) -> Path:
        """
        Create isolated container for a specific build.
        Uses overlay filesystem on base for efficiency.
        """
        container_path = self.builds_path / build_id
        if container_path.exists():
            shutil.rmtree(container_path)
        container_path.mkdir(parents=True)

        # Copy base (in production, use overlayfs for efficiency)
        # For now, simple copy is acceptable
        proc = await asyncio.create_subprocess_exec(
            "cp", "-a", str(self.base_path) + "/.", str(container_path),
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE
        )
        await proc.wait()
        if proc.returncode != 0:
            raise RuntimeError(f"Failed to copy sandbox base to {container_path}")

        return container_path

    async def run_build(
        self,
        container_path: Path,
        profile_path: Path,
        output_path: Path,
        source_date_epoch: int
    ) -> tuple[int, str, str]:
        """
        Execute archiso build in sandboxed container.

        Returns:
            Tuple of (return_code, stdout, stderr)
        """
        output_path.mkdir(parents=True, exist_ok=True)

        nspawn_cmd = [
            "systemd-nspawn",
            f"--directory={container_path}",
            "--private-network",  # No network access
            "--read-only",  # Immutable root
            "--tmpfs=/tmp:mode=1777",
            "--tmpfs=/var/tmp:mode=1777",
            f"--bind={profile_path}:/build/profile:ro",
            f"--bind={output_path}:/build/output",
            f"--setenv=SOURCE_DATE_EPOCH={source_date_epoch}",
            "--setenv=LC_ALL=C",
            "--setenv=TZ=UTC",
            "--capability=CAP_SYS_ADMIN",  # Required for mkarchiso
            "--console=pipe",
            "--quiet",
            "--",
            "mkarchiso",
            "-v",
            "-r",  # Remove work directory after build
            "-w", "/tmp/archiso-work",
            "-o", "/build/output",
            "/build/profile"
        ]

        proc = await asyncio.create_subprocess_exec(
            *nspawn_cmd,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE
        )

        try:
            stdout, stderr = await asyncio.wait_for(
                proc.communicate(),
                timeout=self.config.timeout_seconds
            )
            return proc.returncode, stdout.decode(), stderr.decode()
        except asyncio.TimeoutError:
            proc.kill()
            await proc.wait()
            return -1, "", f"Build timed out after {self.config.timeout_seconds} seconds"

    async def cleanup_container(self, container_path: Path):
        """Remove container after build."""
        if container_path.exists():
            shutil.rmtree(container_path)
```
</action>
<verify>
Run:
```bash
cd /home/mikkel/repos/debate
ruff check backend/app/services/sandbox.py
python -c "from backend.app.services.sandbox import BuildSandbox, SandboxConfig; print('Import OK')"
```
Expected: No ruff errors, import succeeds.
</verify>
<done>
Sandbox service creates isolated containers with network isolation, resource limits, and deterministic environment.
</done>
</task>

<task type="auto">
<name>Task 2: Create deterministic build configuration service</name>
<files>
backend/app/services/deterministic.py
tests/test_deterministic.py
</files>
<action>
Create backend/app/services/deterministic.py:
```python
"""
Deterministic build configuration for reproducible ISOs.

Critical: Same configuration must produce identical ISO hash.
This is required for caching to work correctly.

Determinism factors:
- SOURCE_DATE_EPOCH: Fixed timestamps in all generated files
- LC_ALL=C: Fixed locale for sorting
- TZ=UTC: Fixed timezone
- Sorted inputs: Packages, files always in consistent order
- Fixed compression: Consistent squashfs settings
"""

import hashlib
import json
from dataclasses import dataclass
from pathlib import Path
from typing import Any


@dataclass
class OverlayFile:
    """A file to be included in the overlay."""
    path: str  # Absolute path in ISO (e.g., /etc/skel/.bashrc)
    content: str
    mode: str = "0644"


@dataclass
class BuildConfiguration:
    """Normalized build configuration for deterministic hashing."""
    packages: list[str]
    overlays: list[dict[str, Any]]
    locale: str = "en_US.UTF-8"
    timezone: str = "UTC"


class DeterministicBuildConfig:
    """Ensures reproducible ISO builds."""

    @staticmethod
    def compute_config_hash(config: dict[str, Any]) -> str:
        """
        Generate deterministic hash of build configuration.

        Process:
        1. Normalize all inputs (sort lists, normalize paths)
        2. Hash file contents (not file objects)
        3. Use consistent JSON serialization

        Returns:
            SHA-256 hash of normalized configuration
        """
        # Normalize packages (sorted, deduplicated)
        packages = sorted(set(config.get("packages", [])))

        # Normalize overlays
        normalized_overlays = []
        for overlay in sorted(config.get("overlays", []), key=lambda x: x.get("name", "")):
            normalized_files = []
            for f in sorted(overlay.get("files", []), key=lambda x: x.get("path", "")):
                content = f.get("content", "")
                content_hash = hashlib.sha256(content.encode()).hexdigest()
                normalized_files.append({
                    "path": f.get("path", "").strip(),
                    "content_hash": content_hash,
                    "mode": f.get("mode", "0644")
                })
            normalized_overlays.append({
                "name": overlay.get("name", "").strip(),
                "files": normalized_files
            })

        # Build normalized config
        normalized = {
            "packages": packages,
            "overlays": normalized_overlays,
            "locale": config.get("locale", "en_US.UTF-8"),
            "timezone": config.get("timezone", "UTC")
        }

        # JSON with sorted keys for determinism
        config_json = json.dumps(normalized, sort_keys=True, separators=(',', ':'))
        return hashlib.sha256(config_json.encode()).hexdigest()

    @staticmethod
    def get_source_date_epoch(config_hash: str) -> int:
        """
        Generate deterministic timestamp from config hash.

        Using hash-derived timestamp ensures:
        - Same config always gets same timestamp
        - Different configs get different timestamps
        - No dependency on wall clock time

        The timestamp is within a reasonable range (2020-2030).
        """
        # Use first 8 bytes of hash to generate timestamp
        hash_int = int(config_hash[:16], 16)
        # Map to range: Jan 1, 2020 to Dec 31, 2030
        min_epoch = 1577836800  # 2020-01-01
        max_epoch = 1924991999  # 2030-12-31
        return min_epoch + (hash_int % (max_epoch - min_epoch))

    @staticmethod
    def create_archiso_profile(
        config: dict[str, Any],
        profile_path: Path,
        source_date_epoch: int
    ) -> None:
        """
        Generate archiso profile with deterministic settings.

        Creates:
        - packages.x86_64: Sorted package list
        - profiledef.sh: Build configuration
        - pacman.conf: Package manager config
        - airootfs/: Overlay files
        """
        profile_path.mkdir(parents=True, exist_ok=True)

        # packages.x86_64 (sorted for determinism)
        packages = sorted(set(config.get("packages", ["base", "linux"])))
        packages_file = profile_path / "packages.x86_64"
        packages_file.write_text("\n".join(packages) + "\n")

        # profiledef.sh
        profiledef = profile_path / "profiledef.sh"
        iso_date = f"$(date --date=@{source_date_epoch} +%Y%m)"
        iso_version = f"$(date --date=@{source_date_epoch} +%Y.%m.%d)"

        profiledef.write_text(f'''#!/usr/bin/env bash
# Deterministic archiso profile
# Generated for Debate platform

iso_name="debate-custom"
iso_label="DEBATE_{iso_date}"
iso_publisher="Debate Platform <https://debate.example.com>"
iso_application="Debate Custom Linux"
iso_version="{iso_version}"
install_dir="arch"
bootmodes=('bios.syslinux.mbr' 'bios.syslinux.eltorito' 'uefi-x64.systemd-boot.esp' 'uefi-x64.systemd-boot.eltorito')
arch="x86_64"
pacman_conf="pacman.conf"
airootfs_image_type="squashfs"
airootfs_image_tool_options=('-comp' 'xz' '-Xbcj' 'x86' '-b' '1M' '-Xdict-size' '1M')

file_permissions=(
  ["/etc/shadow"]="0:0:0400"
  ["/root"]="0:0:750"
  ["/etc/gshadow"]="0:0:0400"
)
''')

        # pacman.conf
        pacman_conf = profile_path / "pacman.conf"
        pacman_conf.write_text('''[options]
Architecture = auto
CheckSpace
SigLevel = Required DatabaseOptional
LocalFileSigLevel = Optional

[core]
Include = /etc/pacman.d/mirrorlist

[extra]
Include = /etc/pacman.d/mirrorlist
''')

        # airootfs structure with overlay files
        airootfs = profile_path / "airootfs"
        airootfs.mkdir(exist_ok=True)

        for overlay in config.get("overlays", []):
            for file_config in overlay.get("files", []):
                file_path = airootfs / file_config["path"].lstrip("/")
                file_path.parent.mkdir(parents=True, exist_ok=True)
                file_path.write_text(file_config["content"])
                if "mode" in file_config:
                    file_path.chmod(int(file_config["mode"], 8))
```

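The normalization and epoch-mapping ideas above can be exercised standalone. This is a simplified re-implementation for illustration (it normalizes only the package list, not overlays), not the module itself:

```python
import hashlib
import json

def config_hash(config: dict) -> str:
    # Mirror the normalization: sorted, deduplicated packages and
    # canonical JSON (sorted keys, fixed separators).
    normalized = {
        "packages": sorted(set(config.get("packages", []))),
        "locale": config.get("locale", "en_US.UTF-8"),
        "timezone": config.get("timezone", "UTC"),
    }
    blob = json.dumps(normalized, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(blob.encode()).hexdigest()

def source_date_epoch(config_hash_hex: str) -> int:
    # Map the first 8 bytes of the hash into a fixed epoch window.
    min_epoch, max_epoch = 1577836800, 1924991999  # 2020-01-01 .. 2030-12-31
    return min_epoch + (int(config_hash_hex[:16], 16) % (max_epoch - min_epoch))

h1 = config_hash({"packages": ["vim", "git", "base"]})
h2 = config_hash({"packages": ["base", "git", "vim", "git"]})
assert h1 == h2  # order and duplicates do not affect the hash
```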
Create tests/test_deterministic.py:
```python
"""Tests for deterministic build configuration."""

from backend.app.services.deterministic import DeterministicBuildConfig


class TestDeterministicBuildConfig:
    """Test that same inputs produce same outputs."""

    def test_hash_deterministic(self):
        """Same config produces same hash."""
        config = {
            "packages": ["vim", "git", "base"],
            "overlays": [{
                "name": "test",
                "files": [{"path": "/etc/test", "content": "hello"}]
            }]
        }

        hash1 = DeterministicBuildConfig.compute_config_hash(config)
        hash2 = DeterministicBuildConfig.compute_config_hash(config)

        assert hash1 == hash2

    def test_hash_order_independent(self):
        """Package order doesn't affect hash."""
        config1 = {"packages": ["vim", "git", "base"], "overlays": []}
        config2 = {"packages": ["base", "git", "vim"], "overlays": []}

        hash1 = DeterministicBuildConfig.compute_config_hash(config1)
        hash2 = DeterministicBuildConfig.compute_config_hash(config2)

        assert hash1 == hash2

    def test_hash_different_configs(self):
        """Different configs produce different hashes."""
        config1 = {"packages": ["vim"], "overlays": []}
        config2 = {"packages": ["emacs"], "overlays": []}

        hash1 = DeterministicBuildConfig.compute_config_hash(config1)
        hash2 = DeterministicBuildConfig.compute_config_hash(config2)

        assert hash1 != hash2

    def test_source_date_epoch_deterministic(self):
        """Same hash produces same timestamp."""
        config_hash = "abc123def456"

        epoch1 = DeterministicBuildConfig.get_source_date_epoch(config_hash)
        epoch2 = DeterministicBuildConfig.get_source_date_epoch(config_hash)

        assert epoch1 == epoch2

    def test_source_date_epoch_in_range(self):
        """Timestamp is within reasonable range."""
        config_hash = "abc123def456"

        epoch = DeterministicBuildConfig.get_source_date_epoch(config_hash)

        # Should be between 2020 and 2030
        assert 1577836800 <= epoch <= 1924991999
```
</action>
<verify>
Run:
```bash
cd /home/mikkel/repos/debate
ruff check backend/app/services/deterministic.py tests/test_deterministic.py
pytest tests/test_deterministic.py -v
```
Expected: Ruff passes, all tests pass.
</verify>
<done>
Deterministic build config generates consistent hashes, timestamps derived from config hash.
</done>
</task>

<task type="auto">
<name>Task 3: Create build orchestration service</name>
<files>
backend/app/services/build.py
</files>
<action>
Create backend/app/services/build.py:
```python
|
||||||
|
"""
|
||||||
|
Build orchestration service.
|
||||||
|
|
||||||
|
Coordinates:
|
||||||
|
1. Configuration validation
|
||||||
|
2. Hash computation (for caching)
|
||||||
|
3. Sandbox creation
|
||||||
|
4. Build execution
|
||||||
|
5. Result storage
|
||||||
|
"""
|
||||||
|
|
||||||
|
import asyncio
|
||||||
|
from pathlib import Path
|
||||||
|
from typing import Optional
|
||||||
|
from uuid import uuid4
|
||||||
|
from datetime import datetime, UTC
|
||||||
|
|
||||||
|
from sqlalchemy.ext.asyncio import AsyncSession
|
||||||
|
from sqlalchemy import select
|
||||||
|
|
||||||
|
from app.core.config import settings
|
||||||
|
from app.db.models.build import Build, BuildStatus
|
||||||
|
from app.services.sandbox import BuildSandbox
|
||||||
|
from app.services.deterministic import DeterministicBuildConfig
|
||||||
|
|
||||||
|
|
||||||
|
class BuildService:
|
||||||
|
"""Orchestrates ISO build process."""
|
||||||
|
|
||||||
|
    def __init__(self, db: AsyncSession):
        self.db = db
        self.sandbox = BuildSandbox()
        self.output_root = Path(settings.iso_output_root)

    async def get_or_create_build(
        self,
        config: dict
    ) -> tuple[Build, bool]:
        """
        Get existing build from cache or create new one.

        Returns:
            Tuple of (Build, is_cached)
        """
        # Compute deterministic hash
        config_hash = DeterministicBuildConfig.compute_config_hash(config)

        # Check cache
        stmt = select(Build).where(
            Build.config_hash == config_hash,
            Build.status == BuildStatus.completed
        )
        result = await self.db.execute(stmt)
        cached_build = result.scalar_one_or_none()

        if cached_build:
            # Return cached build
            return cached_build, True

        # Create new build
        build = Build(
            id=uuid4(),
            config_hash=config_hash,
            status=BuildStatus.pending
        )
        self.db.add(build)
        await self.db.commit()
        await self.db.refresh(build)

        return build, False

    async def execute_build(
        self,
        build: Build,
        config: dict
    ) -> Build:
        """
        Execute the actual ISO build.

        Process:
        1. Update status to building
        2. Create sandbox container
        3. Generate archiso profile
        4. Run build
        5. Update status with result
        """
        build.status = BuildStatus.building
        build.started_at = datetime.now(UTC)
        await self.db.commit()

        container_path = None
        profile_path = self.output_root / str(build.id) / "profile"
        output_path = self.output_root / str(build.id) / "output"

        try:
            # Create sandbox
            container_path = await self.sandbox.create_build_container(str(build.id))

            # Generate deterministic profile
            source_date_epoch = DeterministicBuildConfig.get_source_date_epoch(
                build.config_hash
            )
            DeterministicBuildConfig.create_archiso_profile(
                config, profile_path, source_date_epoch
            )

            # Run build in sandbox
            return_code, stdout, stderr = await self.sandbox.run_build(
                container_path, profile_path, output_path, source_date_epoch
            )

            if return_code == 0:
                # Find generated ISO
                iso_files = list(output_path.glob("*.iso"))
                if iso_files:
                    build.iso_path = str(iso_files[0])
                    build.status = BuildStatus.completed
                else:
                    build.status = BuildStatus.failed
                    build.error_message = "Build completed but no ISO found"
            else:
                build.status = BuildStatus.failed
                build.error_message = stderr or f"Build failed with code {return_code}"

            build.build_log = stdout + "\n" + stderr

        except Exception as e:
            build.status = BuildStatus.failed
            build.error_message = str(e)

        finally:
            # Cleanup sandbox (runs on success and failure)
            if container_path:
                await self.sandbox.cleanup_container(container_path)

            build.completed_at = datetime.now(UTC)
            await self.db.commit()
            await self.db.refresh(build)

        return build

    async def get_build_status(self, build_id: str) -> Optional[Build]:
        """Get build by ID."""
        stmt = select(Build).where(Build.id == build_id)
        result = await self.db.execute(stmt)
        return result.scalar_one_or_none()
```
</action>
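The cache lookup in `get_or_create_build` keys on `config_hash`, so hashing must be insensitive to dict key order. A minimal sketch of what `DeterministicBuildConfig.compute_config_hash` is assumed to do — canonical JSON with sorted keys, then SHA-256; the function name matches the plan, the body here is illustrative:

```python
import hashlib
import json

def compute_config_hash(config: dict) -> str:
    """Hash a build config deterministically: canonical JSON, sorted keys."""
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Key order must not affect the hash:
a = compute_config_hash({"packages": ["vim", "git"], "hostname": "iso"})
b = compute_config_hash({"hostname": "iso", "packages": ["vim", "git"]})
assert a == b
```

Any semantically identical config therefore maps to the same cache row, which is what makes the cache-before-build lookup safe.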

<verify>
Run:
```bash
cd /home/mikkel/repos/debate
ruff check backend/app/services/build.py
python -c "from backend.app.services.build import BuildService; print('Import OK')"
```
Expected: No ruff errors, import succeeds.
</verify>

<done>
Build service coordinates hash computation, caching, sandbox execution, and status tracking.
</done>

</task>

</tasks>

<verification>
1. `ruff check backend/app/services/` passes
2. `pytest tests/test_deterministic.py` - all tests pass
3. Sandbox service can be imported without errors
4. Build service can be imported without errors
5. DeterministicBuildConfig.compute_config_hash produces consistent results
</verification>
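Verification item 5 extends to `get_source_date_epoch`: SOURCE_DATE_EPOCH must be a pure function of the config hash so that rebuilding the same config stamps identical timestamps. One possible scheme, sketched here as an assumption — the real implementation may simply use a fixed constant:

```python
def get_source_date_epoch(config_hash: str) -> int:
    """Derive a build timestamp that is a pure function of the config hash.

    Hypothetical scheme: a fixed base epoch plus a small offset taken from
    the first 8 hex characters of the hash. What matters for reproducibility
    is only that the same hash always yields the same value.
    """
    base = 1704067200  # 2024-01-01T00:00:00Z, an arbitrary fixed reference
    return base + int(config_hash[:8], 16) % 86400
```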

<success_criteria>
- Sandbox service creates isolated systemd-nspawn containers (ISO-04)
- Builds run with --private-network (no network access)
- SOURCE_DATE_EPOCH set for deterministic builds
- Same configuration produces identical hash
- Build service coordinates full build lifecycle
- Cache lookup happens before build execution
</success_criteria>
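The first three criteria come together in the sandbox invocation. A sketch of how `BuildSandbox.run_build` might assemble it — the systemd-nspawn flags (`-D`, `--private-network`, `--setenv`) are real options, but the directory layout and mkarchiso arguments are illustrative, not the project's actual ones:

```python
from pathlib import Path

def nspawn_build_command(
    container: Path, profile: Path, output: Path, source_date_epoch: int
) -> list[str]:
    """Assemble the sandboxed, network-isolated ISO build invocation."""
    return [
        "systemd-nspawn",
        "-D", str(container),                               # container root filesystem
        "--private-network",                                # no network inside the build
        f"--setenv=SOURCE_DATE_EPOCH={source_date_epoch}",  # reproducible timestamps
        "mkarchiso", "-v",
        "-w", "/tmp/work",                                  # scratch dir inside the container
        "-o", str(output),
        str(profile),
    ]
```

Running this under `--private-network` satisfies ISO-04's no-host-access requirement at the network layer; filesystem isolation comes from the container root itself.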

<output>
After completion, create `.planning/phases/01-core-infrastructure-security/01-05-SUMMARY.md`
</output>