---
phase: 01-core-infrastructure-security
plan: 04
type: execute
wave: 2
depends_on: ["01-02"]
files_modified:
  - Caddyfile
  - docker-compose.yml
  - scripts/backup-postgres.sh
  - scripts/cron/postgres-backup
autonomous: true
must_haves:
  truths:
    - "HTTPS terminates at Caddy with valid certificate"
    - "HTTP requests redirect to HTTPS"
    - "Database backup script runs successfully"
    - "Backup files are created with timestamps"
  artifacts:
    - path: "Caddyfile"
      provides: "Caddy reverse proxy configuration"
      contains: "reverse_proxy"
    - path: "scripts/backup-postgres.sh"
      provides: "Database backup automation"
      contains: "pg_dump"
    - path: "docker-compose.yml"
      provides: "Caddy container configuration"
      contains: "caddy"
  key_links:
    - from: "Caddyfile"
      to: "backend/app/main.py"
      via: "reverse_proxy localhost:8000"
      pattern: "reverse_proxy.*localhost:8000"
    - from: "scripts/backup-postgres.sh"
      to: "docker-compose.yml"
      via: "debate-postgres container"
      pattern: "docker.*exec.*postgres"
---
Configure Caddy for HTTPS termination and set up PostgreSQL daily backup automation.
Purpose: Ensure all traffic is encrypted (INFR-05) and user data is backed up daily (INFR-04).
Output: Caddy reverse proxy with automatic HTTPS, PostgreSQL backup script with 30-day retention.
@/home/mikkel/.claude/get-shit-done/workflows/execute-plan.md
@/home/mikkel/.claude/get-shit-done/templates/summary.md
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/phases/01-core-infrastructure-security/01-RESEARCH.md (Pattern 2: Caddy Automatic HTTPS, Code Examples: PostgreSQL Backup Script)
@.planning/phases/01-core-infrastructure-security/01-CONTEXT.md (Backup & Recovery decisions)
@.planning/phases/01-core-infrastructure-security/01-02-SUMMARY.md
Task 1: Configure Caddy reverse proxy with HTTPS
Files: Caddyfile, docker-compose.yml
Create Caddyfile in project root:
```caddyfile
{
    # Admin API for programmatic route management (future use for ISO downloads)
    admin localhost:2019
    # For local development, use the internal CA
    # In production, Caddy auto-obtains Let's Encrypt certs
}

# Development configuration (localhost)
:443 {
    tls internal # Self-signed for local dev

    # Reverse proxy to FastAPI
    reverse_proxy localhost:8000 {
        health_uri /health
        health_interval 10s
        health_timeout 5s
    }

    # Security headers (supplement FastAPI's headers)
    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
        X-Content-Type-Options "nosniff"
        X-Frame-Options "DENY"
    }

    # Access logging
    log {
        output file /var/log/caddy/access.log {
            roll_size 100mb
            roll_keep 10
        }
        format json
    }
}

# HTTP to HTTPS redirect
:80 {
    redir https://{host}{uri} permanent
}
```
Update docker-compose.yml to add the Caddy service. Note that `network_mode: host` is incompatible with a `ports:` mapping, so no port section is needed: with host networking Caddy binds 80, 443, and 2019 on the host directly (the admin API stays on localhost because the Caddyfile global block binds it to `localhost:2019`).
```yaml
services:
  caddy:
    image: caddy:2-alpine
    container_name: debate-caddy
    restart: unless-stopped
    network_mode: host # To reach the FastAPI app on localhost:8000
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
      - caddy_logs:/var/log/caddy

volumes:
  caddy_data:
  caddy_config:
  caddy_logs:
```
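Host networking is one option. An alternative sketch (not part of this plan, included for contrast) is to put Caddy on the same compose network as the API and proxy by service name — this assumes the FastAPI container is defined as a service named `backend` in the same compose file:

```yaml
services:
  caddy:
    image: caddy:2-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
    # The Caddyfile would then use: reverse_proxy backend:8000
```

This keeps Caddy network-isolated at the cost of a small Caddyfile change; host mode keeps the `localhost:8000` upstream working as-is.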
Note: for local development, Caddy serves a self-signed certificate (`tls internal`).
For production, replace the `:443` site address with the real domain and remove `tls internal`; Caddy then obtains and renews a Let's Encrypt certificate automatically.
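A minimal sketch of what the production site block could look like, assuming the hypothetical domain `debate.example.com`:

```caddyfile
debate.example.com {
    # No tls directive: Caddy obtains and renews the
    # Let's Encrypt certificate automatically for named sites.
    reverse_proxy localhost:8000
}
```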
Run:
```bash
cd /home/mikkel/repos/debate
docker compose up -d caddy
sleep 3
# Test HTTPS (allow self-signed cert)
curl -sk https://localhost/health
# Test HTTP redirect
curl -sI http://localhost | grep -i location
```
Expected: HTTPS returns health response, HTTP redirects to HTTPS.
Caddy running with HTTPS termination, HTTP redirects to HTTPS.
Task 2: Create PostgreSQL backup script with retention
Files: scripts/backup-postgres.sh, scripts/cron/postgres-backup
Create scripts/backup-postgres.sh:
```bash
#!/bin/bash
# PostgreSQL backup script for Debate platform
# Runs daily, keeps 30 days of backups
# Verifies backup integrity after creation
set -euo pipefail

# Configuration
BACKUP_DIR="${BACKUP_DIR:-/var/backups/debate/postgres}"
RETENTION_DAYS="${RETENTION_DAYS:-30}"
CONTAINER_NAME="${CONTAINER_NAME:-debate-postgres}"
DB_NAME="${DB_NAME:-debate}"
DB_USER="${DB_USER:-debate}"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/${DB_NAME}_${TIMESTAMP}.dump"

# Logging
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1"
}

# Create backup directory
mkdir -p "$BACKUP_DIR"
log "Starting backup of database: $DB_NAME"

# Create backup using pg_dump custom format (-Fc).
# Custom format is compressed and allows selective restore.
# Keep stderr separate from stdout: -v writes progress to stderr,
# and redirecting it into the dump would corrupt the archive.
docker exec "$CONTAINER_NAME" pg_dump \
    -U "$DB_USER" \
    -Fc \
    -b \
    -v \
    "$DB_NAME" > "$BACKUP_FILE"
log "Backup created: $BACKUP_FILE"

# Verify backup integrity by reading the archive's table of contents.
# Stream the file in over stdin: the host path is not visible inside
# the container.
log "Verifying backup integrity..."
docker exec -i "$CONTAINER_NAME" pg_restore --list > /dev/null < "$BACKUP_FILE" || {
    log "ERROR: Backup verification failed!"
    rm -f "$BACKUP_FILE"
    exit 1
}

# Get backup size
BACKUP_SIZE=$(du -h "$BACKUP_FILE" | cut -f1)
log "Backup size: $BACKUP_SIZE"

# Compress (pg_dump -Fc output is already compressed, so the gain is
# marginal; gzip mainly gives retention a single *.dump.gz pattern to match)
gzip -f "$BACKUP_FILE"
log "Compressed: ${BACKUP_FILE}.gz"

# Clean up old backups
log "Removing backups older than $RETENTION_DAYS days..."
find "$BACKUP_DIR" -name "${DB_NAME}_*.dump.gz" -mtime "+$RETENTION_DAYS" -delete
REMAINING=$(find "$BACKUP_DIR" -name "${DB_NAME}_*.dump.gz" | wc -l)
log "Remaining backups: $REMAINING"

# Weekly restore test (every Monday)
if [ "$(date +%u)" -eq 1 ]; then
    log "Running weekly restore test..."
    TEST_DB="${DB_NAME}_backup_test"
    # Create test database
    docker exec "$CONTAINER_NAME" createdb -U "$DB_USER" "$TEST_DB" 2>/dev/null || true
    # Restore to test database
    gunzip -c "${BACKUP_FILE}.gz" | docker exec -i "$CONTAINER_NAME" pg_restore \
        -U "$DB_USER" \
        -d "$TEST_DB" \
        --clean \
        --if-exists || true
    # Drop test database
    docker exec "$CONTAINER_NAME" dropdb -U "$DB_USER" "$TEST_DB" 2>/dev/null || true
    log "Weekly restore test completed"
fi

log "Backup completed successfully"
```
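The timestamped-filename and retention rules can be exercised in isolation before the script ever touches a database. A sketch against a throwaway directory (GNU `touch -d` assumed; the stale filename is made up):

```shell
# Simulate the naming and retention logic with empty stand-in files
DIR=$(mktemp -d)
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
touch "$DIR/debate_${TIMESTAMP}.dump.gz"                      # fresh backup
touch -d '40 days ago' "$DIR/debate_20200101_020000.dump.gz"  # stale backup

# Same expression the script uses for cleanup
find "$DIR" -name 'debate_*.dump.gz' -mtime +30 -delete

ls "$DIR"   # only the fresh file should remain
```

The stale file is older than the 30-day cutoff, so `find -mtime +30 -delete` removes it and leaves the fresh one untouched.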
Make executable: `chmod +x scripts/backup-postgres.sh`
Create scripts/cron/postgres-backup:
```
# PostgreSQL daily backup at 2 AM
0 2 * * * /home/mikkel/repos/debate/scripts/backup-postgres.sh >> /var/log/debate/postgres-backup.log 2>&1
```
Install it for the current user with `crontab scripts/cron/postgres-backup` (note: this replaces any existing crontab for that user), and make sure /var/log/debate exists and is writable by the cron user.
Add a .gitignore entry for backup artifacts (e.g. `*.dump.gz`); backup files must never be committed to the repo.
Run:
```bash
cd /home/mikkel/repos/debate
mkdir -p /tmp/debate-backups
BACKUP_DIR=/tmp/debate-backups ./scripts/backup-postgres.sh
ls -la /tmp/debate-backups/
```
Expected: Backup file created with .dump.gz extension.
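For a quick host-side sanity check of a compressed backup, `gzip -t` can be run without docker. It validates only the compression layer, not the dump contents (the script's `pg_restore --list` step covers that); a throwaway file stands in for a real backup in this sketch:

```shell
# Create a stand-in .dump.gz and test its gzip integrity
demo="$(mktemp -u).dump.gz"
printf 'not a real dump' | gzip > "$demo"
msg=$(gzip -t "$demo" && echo "gzip layer OK")
echo "$msg"
```

On a real backup, point `gzip -t` at the newest file in `$BACKUP_DIR` instead of the stand-in.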
Backup script creates compressed PostgreSQL dumps, verifies integrity, maintains 30-day retention.
1. `curl -sk https://localhost/health` returns healthy through Caddy
2. `curl -sI http://localhost | grep -i location` shows HTTPS redirect
3. `./scripts/backup-postgres.sh` creates backup successfully
4. Backup file is compressed and verifiable
5. Old backups (>30 days) would be deleted by retention logic
- All traffic flows through HTTPS via Caddy (INFR-05)
- HTTP requests redirect to HTTPS
- Caddy health checks FastAPI backend
- Daily backup script exists with 30-day retention (INFR-04)
- Backup integrity verified after creation
- Weekly restore test configured