Phase 01: Core Infrastructure & Security - 5 plans in 3 waves - 3 parallel (Wave 1-2), 1 sequential (Wave 3) - Ready for execution
| phase | plan | type | wave | depends_on | files_modified | autonomous | must_haves |
|---|---|---|---|---|---|---|---|
| 01-core-infrastructure-security | 04 | execute | 2 | | | true | |
Purpose: Ensure all traffic is encrypted (INFR-05) and user data is backed up daily (INFR-04). Output: Caddy reverse proxy with automatic HTTPS, PostgreSQL backup script with 30-day retention.
<execution_context> @/home/mikkel/.claude/get-shit-done/workflows/execute-plan.md @/home/mikkel/.claude/get-shit-done/templates/summary.md </execution_context>
@.planning/PROJECT.md @.planning/ROADMAP.md @.planning/phases/01-core-infrastructure-security/01-RESEARCH.md (Pattern 2: Caddy Automatic HTTPS, Code Examples: PostgreSQL Backup Script) @.planning/phases/01-core-infrastructure-security/01-CONTEXT.md (Backup & Recovery decisions) @.planning/phases/01-core-infrastructure-security/01-02-SUMMARY.md

Task 1: Configure Caddy reverse proxy with HTTPS

Files: Caddyfile, docker-compose.yml

Create Caddyfile in project root:

```caddyfile
{
    # Admin API for programmatic route management (future use for ISO downloads)
    admin localhost:2019

    # For local development, use internal CA
    # In production, Caddy auto-obtains Let's Encrypt certs
}

# Development configuration (localhost)
:443 {
    tls internal # Self-signed for local dev

    # Reverse proxy to FastAPI
    reverse_proxy localhost:8000 {
        health_uri /health
        health_interval 10s
        health_timeout 5s
    }

    # Security headers (supplement FastAPI's headers)
    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
        X-Content-Type-Options "nosniff"
        X-Frame-Options "DENY"
    }

    # Access logging
    log {
        output file /var/log/caddy/access.log {
            roll_size 100mb
            roll_keep 10
        }
        format json
    }
}

# HTTP to HTTPS redirect
:80 {
    redir https://{host}{uri} permanent
}
```
Update docker-compose.yml to add Caddy service:

```yaml
services:
  caddy:
    image: caddy:2-alpine
    container_name: debate-caddy
    restart: unless-stopped
    # Host networking lets Caddy reach the FastAPI app on localhost:8000
    # and binds 80/443 on the host directly. A ports: mapping would be
    # ignored under host networking, so none is declared. The admin API
    # already binds only to localhost:2019 via the Caddyfile.
    network_mode: host
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
      - caddy_logs:/var/log/caddy

volumes:
  caddy_data:
  caddy_config:
  caddy_logs:
```

Note: For development, Caddy uses self-signed certs (`tls internal`). For production, replace `:443` with the actual domain and remove `tls internal`.
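The production swap described in the note can be sketched as follows; `debate.example.com` is a placeholder, not a decided domain. With a real domain as the site address, Caddy obtains Let's Encrypt certificates automatically and also performs the HTTP-to-HTTPS redirect on its own, so the explicit `:80` redirect block becomes unnecessary:

```caddyfile
# Hypothetical production Caddyfile (domain is a placeholder)
debate.example.com {
    # Certs obtained automatically; no `tls internal`
    reverse_proxy localhost:8000 {
        health_uri /health
        health_interval 10s
        health_timeout 5s
    }
    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
        X-Content-Type-Options "nosniff"
        X-Frame-Options "DENY"
    }
}
```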
Run:

```bash
cd /home/mikkel/repos/debate
docker compose up -d caddy
sleep 3

# Test HTTPS (allow self-signed cert)
curl -sk https://localhost/health

# Test HTTP redirect
curl -sI http://localhost | grep -i location
```

Expected: HTTPS returns the health response; HTTP redirects to HTTPS. Done when Caddy is running with HTTPS termination and HTTP redirects to HTTPS.
Task 2: Create PostgreSQL backup script with retention

Files: scripts/backup-postgres.sh, scripts/cron/postgres-backup

Create scripts/backup-postgres.sh:

```bash
#!/bin/bash
# PostgreSQL backup script for Debate platform
# Runs daily, keeps 30 days of backups
# Verifies backup integrity after creation
set -euo pipefail

# Configuration
BACKUP_DIR="${BACKUP_DIR:-/var/backups/debate/postgres}"
RETENTION_DAYS="${RETENTION_DAYS:-30}"
CONTAINER_NAME="${CONTAINER_NAME:-debate-postgres}"
DB_NAME="${DB_NAME:-debate}"
DB_USER="${DB_USER:-debate}"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/${DB_NAME}_${TIMESTAMP}.dump"

# Logging
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1"
}

# Create backup directory
mkdir -p "$BACKUP_DIR"

log "Starting backup of database: $DB_NAME"

# Create backup using pg_dump custom format (-Fc).
# Custom format is compressed and allows selective restore.
# stderr is deliberately NOT merged into $BACKUP_FILE -- that would
# corrupt the dump; it flows to the cron log instead.
docker exec "$CONTAINER_NAME" pg_dump \
    -U "$DB_USER" \
    -Fc \
    -b \
    "$DB_NAME" > "$BACKUP_FILE"

log "Backup created: $BACKUP_FILE"

# Verify backup integrity. The dump is a host-side file, so stream it
# into the container on stdin rather than passing a host path.
log "Verifying backup integrity..."
docker exec -i "$CONTAINER_NAME" pg_restore --list > /dev/null 2>&1 < "$BACKUP_FILE" || {
    log "ERROR: Backup verification failed!"
    rm -f "$BACKUP_FILE"
    exit 1
}

# Get backup size
BACKUP_SIZE=$(du -h "$BACKUP_FILE" | cut -f1)
log "Backup size: $BACKUP_SIZE"

# Compress (pg_dump -Fc already compresses, but gzip squeezes out a bit more)
gzip -f "$BACKUP_FILE"
log "Compressed: ${BACKUP_FILE}.gz"

# Clean up old backups
log "Removing backups older than $RETENTION_DAYS days..."
find "$BACKUP_DIR" -name "${DB_NAME}_*.dump.gz" -mtime +"$RETENTION_DAYS" -delete
REMAINING=$(find "$BACKUP_DIR" -name "${DB_NAME}_*.dump.gz" | wc -l)
log "Remaining backups: $REMAINING"

# Weekly restore test (every Monday)
if [ "$(date +%u)" -eq 1 ]; then
    log "Running weekly restore test..."
    TEST_DB="${DB_NAME}_backup_test"

    # Create test database
    docker exec "$CONTAINER_NAME" createdb -U "$DB_USER" "$TEST_DB" 2>/dev/null || true

    # Restore to test database
    gunzip -c "${BACKUP_FILE}.gz" | docker exec -i "$CONTAINER_NAME" pg_restore \
        -U "$DB_USER" \
        -d "$TEST_DB" \
        --clean \
        --if-exists 2>&1 || true

    # Drop test database
    docker exec "$CONTAINER_NAME" dropdb -U "$DB_USER" "$TEST_DB" 2>/dev/null || true

    log "Weekly restore test completed"
fi

log "Backup completed successfully"
```
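The retention pass above can be exercised in isolation before trusting it with real dumps. This sketch fabricates backups in a throwaway directory (GNU `touch -d` backdates the mtime; filenames are illustrative) and checks that only files past the 30-day window are removed:

```shell
#!/bin/sh
# Simulate the retention rule from backup-postgres.sh in a temp dir
set -eu
DIR=$(mktemp -d)
touch -d '40 days ago' "$DIR/debate_20240101_020000.dump.gz"  # past retention
touch "$DIR/debate_20240601_020000.dump.gz"                   # fresh backup
find "$DIR" -name 'debate_*.dump.gz' -mtime +30 -delete
ls "$DIR"  # only the fresh backup remains
rm -rf "$DIR"
```

Running the real script with `RETENTION_DAYS=0` against a scratch `BACKUP_DIR` gives the same kind of dry run without touching `/var/backups`.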
Make executable: `chmod +x scripts/backup-postgres.sh`
Create scripts/cron/postgres-backup:

```
# PostgreSQL daily backup at 2 AM
0 2 * * * /home/mikkel/repos/debate/scripts/backup-postgres.sh >> /var/log/debate/postgres-backup.log 2>&1
```
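The entry above is user-crontab format (five schedule fields, no user column), so it installs with `crontab scripts/cron/postgres-backup` (which replaces the current user's crontab) rather than into `/etc/cron.d`. A system-wide alternative, not part of this plan, would need an extra user field; the `mikkel` user below is an assumption inferred from the repo path. Either way, `/var/log/debate` must exist and be writable before the first run:

```
# /etc/cron.d/debate-postgres-backup (hypothetical system-wide variant; note the user field)
0 2 * * * mikkel /home/mikkel/repos/debate/scripts/backup-postgres.sh >> /var/log/debate/postgres-backup.log 2>&1
```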
Create .gitignore entry for backup files (they shouldn't be in repo).
</action>
<verify>
Run:

```bash
cd /home/mikkel/repos/debate
mkdir -p /tmp/debate-backups
BACKUP_DIR=/tmp/debate-backups ./scripts/backup-postgres.sh
ls -la /tmp/debate-backups/
```

Expected: a backup file with a `.dump.gz` extension. The backup script creates compressed PostgreSQL dumps, verifies integrity, and maintains 30-day retention.
1. `curl -sk https://localhost/health` returns healthy through Caddy
2. `curl -sI http://localhost | grep -i location` shows HTTPS redirect
3. `./scripts/backup-postgres.sh` creates a backup successfully
4. Backup file is compressed and verifiable
5. Old backups (>30 days) would be deleted by retention logic

<success_criteria>
- All traffic flows through HTTPS via Caddy (INFR-05)
- HTTP requests redirect to HTTPS
- Caddy health checks FastAPI backend
- Daily backup script exists with 30-day retention (INFR-04)
- Backup integrity verified after creation
- Weekly restore test configured </success_criteria>