# Project Felt — Phase 1 Product Specification
## Live Tournament Management System
**Working Title:** Felt
**Version:** 0.5 Draft
**Date:** 2026-02-27
---
## Table of Contents
1. [Vision & Strategy](#1-vision--strategy)
2. [System Architecture](#2-system-architecture)
3. [Hardware Architecture](#3-hardware-architecture)
4. [Core Infrastructure](#4-core-infrastructure)
5. [Data Architecture](#5-data-architecture)
6. [Sync & Message Queue Architecture](#6-sync--message-queue-architecture)
7. [Feature Specification](#7-feature-specification)
8. [UI/UX Design System](#8-uiux-design-system)
9. [Tech Stack](#9-tech-stack)
10. [Display Node Protocol](#10-display-node-protocol)
11. [API Design](#11-api-design)
12. [Security & Threat Model](#12-security--threat-model)
13. [Deployment & Operations](#13-deployment--operations)
14. [Roadmap](#14-roadmap)
15. [Appendix: TDD Feature Parity Matrix](#15-appendix-tdd-feature-parity-matrix)
---
## 1. Vision & Strategy
### The Problem
Live poker venue management is fragmented across aging, siloed tools:
- **The Tournament Director (TDD):** Feature-rich but Windows-only desktop software from 2002. Requires HDMI cabling for displays, no mobile access, no cloud, no multi-device operation. Powerful under the hood but dated and intimidating to configure.
- **Blind Valet:** Modern cloud-based timer, but feature-shallow. Dead without internet.
- **Poker Atlas:** Venue discovery and schedule publishing, but zero operational tooling.
- **Spreadsheets & Paper:** Most venues still track cash games, waitlists, comps, and league points manually.
No single system handles the full lifecycle of a poker venue while working reliably when the internet goes down.
### The Vision
**Felt** is an all-in-one poker venue operating system built on a resilient edge-cloud architecture. An ARM64 SBC "Leaf Node" runs the venue autonomously with NVMe storage. Ultra-cheap display nodes replace HDMI cables. Players interact via their phones — on venue WiFi or remotely via the cloud. The cloud layer handles cross-venue leagues, player profiles, public scheduling, and remote player access.
### Strategic Phasing
| Phase | Scope | Replaces |
|-------|-------|----------|
| **Phase 1** | Live Tournament Management | TDD, Blind Valet |
| **Phase 2** | Cash Game Management | Manual waitlists, spreadsheets |
| **Phase 3** | Full Venue System | Poker Atlas (operational side), comp tracking, analytics |
| **Phase 4** | Native Apps & Platform Maturity | PWA limitations, App Store presence, social/gamification |
### Business Model & Exit Strategy
Felt is designed for long-term value creation, not short-term revenue optimization.
**Free tier:** Full tournament engine running on a virtual Leaf node in our cloud. Includes player mobile access, signage editor, leagues, seasons, and financial tracking. Requires internet — no offline capability. Costs us ~€0.45/mo per venue in infrastructure. This is our customer acquisition engine.
**Offline tier (€25/mo):** Dedicated Leaf hardware + wireless display nodes + full offline operation + custom domain + remote admin access via Netbird. Leaf hardware purchased separately (~€120) or free with an annual plan.
**Pro tier (€100/mo = €25 offline + €75 features):** Everything in Offline plus cash game management, dealer scheduling, loyalty system, memberships, advanced analytics, TDD data import, and priority support. All Phase 2 and Phase 3 features included.
**Casino tiers:** Casino Starter (€249/mo, independent casino 5-15 tables), Casino Pro (€499/mo per property, small chain 15-40 tables), Casino Enterprise (€999+/mo custom, large operators 40+ tables). Progressive features for multi-room, cross-property tracking, CMS integration, SLA.
**Hardware model:** Leaf nodes and display nodes sold exclusively by Felt. Pre-configured, encrypted storage, secure boot. No BYO. Display nodes priced at cost + shipping with no recurring fee.
**Exit strategy:** Acquisition by casino management platforms (Bally's, Light & Wonder, IGT) who need modern poker room software, by online poker operators (PokerStars, GGPoker, WPT) expanding into live venues, or growth to passive cash cow with deep venue lock-in through data gravity.
**Architectural implications for exit:**
- Clean API boundaries (acquirer can integrate Felt's backend into their ecosystem)
- Multi-tenant from day one (acquirer gets all venues in one platform)
- Data portability (full export capability — builds trust, reduces churn)
- Open-source friendly licensing (Apache 2.0 on client-side, proprietary on Core — standard dual-license SaaS model)
- Security posture that survives due diligence (no shortcuts, no "we'll fix it later")
### Design Philosophy
**"TDD's brain, Linear's face."**
Felt must be as powerful and logically organized as TDD — the tab structure (Game, Rounds, Players, Tables, Prizes, Events) is genuinely well thought out and tournament directors understand it. But it must look and feel like a modern premium product. Clean typography, generous whitespace, information density without clutter, smooth animations, and a color palette that works in a dim poker room.
The operator should feel like they're running a professional operation. The players should feel like they're at a serious venue. The venue owner should feel like they have a competitive edge.
---
## 2. System Architecture
### Three-Tier Edge-Cloud Model
```
+--------------------------------------------------------------------+
| CORE (Cloud / Hetzner PVE) |
| +------------+ +--------+ +----------+ +----------+ |
| | PostgreSQL | | NATS | | Go API | | Authentik| |
| | (master of | | (cloud | | Service | | (IdP) | |
| | record) | | hub) | | (admin) | | | |
| +------------+ +---+----+ +----------+ +----------+ |
| | |
| +------------------+-------------------------------------------+ |
| | Netbird (unified server + reverse proxy) | |
| | | |
| | Mesh: Leaf <-> Core, Leaf <-> Display, Admin <-> any | |
| | Proxy: *.felt.io -> Leaf services via WireGuard tunnel | |
| | DNS: felt.internal zone (internal service discovery) | |
| | SSH: Identity-aware via Authentik OIDC | |
| | Auth: SSO / PIN / password at proxy layer per service | |
| +------------------+-------------------------------------------+ |
+---------------------+----------------------------------------------+
|
WireGuard mesh (encrypted, zero-trust, lazy connections)
NATS sync (async, queued locally on Leaf when offline)
Reverse proxy (HTTPS -> WireGuard tunnel -> Leaf HTTP/WS)
|
+-------------+---------------------------+
| LEAF NODE (SBC) |
| |
| +----------+ +----------+ +----------+ |
| | Go | | LibSQL | | NATS | |
| | Backend | | (NVMe) | | (embed) | |
| +----+-----+ +----------+ +----------+ |
| | WebSocket Hub |
+-------+--+-----+-------+----------------+
| | | |
+----+ +---+ +----+ +----------------------------+
| | | |
+---------+ +---------+ +--------+ +---------------------+
|Display | |Display | |Operator| |Player Phones |
|Node 1 | |Node 2 | |Tablet | | |
|(Clock) | |(Rank) | |(Touch) | | On venue WiFi: |
+---------+ +---------+ +--------+ | -> direct to Leaf |
| Off venue (4G/home): |
| -> Netbird proxy |
| -> WG -> Leaf |
| (same URL, same data,|
| no VPN, no app) |
+----------------------+
```
### Netbird as Infrastructure Backbone
Netbird (v0.65+) is the foundational layer that makes Felt network-agnostic. Every Felt device communicates exclusively through encrypted WireGuard tunnels — the Leaf, display nodes, Core services, admin devices — all traffic rides on the Netbird mesh regardless of what underlying network it sits on.
**What this means for venues:** No firewall rules, no port forwarding, no static IPs, no VLANs, no IT department involvement. The Leaf and display nodes need one thing: outbound internet access. Netbird handles NAT traversal, encryption, and routing automatically. A venue can run Felt on a consumer-grade WiFi router with zero configuration. The traffic is invisible to anyone sniffing the local network — every byte is WireGuard-encrypted, even between the Leaf and display nodes sitting on the same LAN.
**What this means for us:** One networking model regardless of deployment. A venue in a Copenhagen bar, a hotel ballroom in Las Vegas, and a community center in rural Denmark all work identically. No venue-specific networking troubleshooting, no "works on my network" bugs, no customer support calls about firewalls.
Rather than building separate solutions for networking, service exposure, DNS, SSH access, and proxy authentication, Netbird provides all of these in a single self-hosted platform:
| Capability | Replaces | Benefit |
|-----------|----------|---------|
| **WireGuard mesh** | Manual WireGuard, Tailscale, venue network dependency | All Felt traffic encrypted, NAT-traversed, network-agnostic. No ports, no firewall rules, no sniffing. |
| **Built-in reverse proxy** | Custom Core relay system, nginx public proxy | Expose Leaf services to internet with SSO/PIN auth, automatic TLS. Players access from anywhere. |
| **Custom DNS zones** | Hardcoded IPs in config files | `felt.internal` zone for service discovery, auto-distributed to peers |
| **Identity-aware SSH** | SSH key distribution, manual access management | OIDC auth via Authentik, per-user OS mapping, browser-based SSH |
| **Lazy connections** | Always-on tunnels to hundreds of Leaves | On-demand WireGuard tunnels, critical at 500+ venues |
| **Firewall policies** | Manual nftables per device | Drop-all-inbound on display nodes, zero-trust ACLs |
| **Auto-updates** | Manual Netbird agent updates across fleet | Fleet-wide agent updates from management dashboard |
| **Traffic logging** | Custom monitoring for mesh traffic | Real-time connection logging for security audit |
### Key Principles
1. **Leaf is sovereign.** All tournament logic runs locally. Cloud is never required for operation.
2. **Network agnostic.** Every device — Leaf, display, admin — communicates exclusively through encrypted WireGuard tunnels (Netbird). We don't care about the venue's network topology, firewall rules, NAT type, ISP, or router. All we need is outbound internet access. No ports to open, no static IPs, no IT support from the venue. Plug in, connect to any internet, done. All traffic is encrypted end-to-end regardless of what network it traverses — coffee shop WiFi, hotel LAN, cellular hotspot, it doesn't matter.
3. **Everything is a browser.** Operator, displays, players — all connect via HTTP/WebSocket.
4. **Real-time first.** State changes propagate to all clients within 100ms via WebSocket.
5. **Display nodes are dumb.** They render what the Leaf tells them. The Leaf owns routing.
6. **Data flows up via messages.** NATS JetStream queues events locally and forwards to Core when online.
7. **Core is the master of record.** Once synced, PostgreSQL is the canonical historical data store.
8. **Beautiful by default.** Every screen — operator, display, player — should look premium out of the box.
9. **Players connect from anywhere.** On venue WiFi → direct to Leaf. Off-venue → Netbird reverse proxy tunnels directly to Leaf. Same URL, same data, no VPN client, no relay service.
10. **Netbird is the infrastructure layer.** Mesh networking, service exposure, DNS, SSH, auth, traffic logging — one self-hosted platform, fully under our control. No Cloudflare, no third-party MITM, no vendor dependency for core networking.
---
## 3. Hardware Architecture
### Leaf Node — Requirements
The Leaf Node is the venue brain. It must be an ARM64 SBC with native M.2 NVMe, sufficient RAM for Go + LibSQL + NATS + serving 50+ WebSocket clients, and reliable EU/DK sourcing.
**Minimum Requirements:**
| Component | Requirement | Notes |
|-----------|-------------|-------|
| **CPU** | ARM64, quad-core A76 or better | Must handle Go concurrency + WebSocket hub |
| **RAM** | 4GB minimum, 8GB recommended | Go + NATS + LibSQL + OS |
| **Storage** | M.2 NVMe (built-in, no hat) | Write durability critical for tournament data |
| **Network** | Gigabit Ethernet + WiFi 5/6 | Ethernet primary, WiFi fallback |
| **HDMI** | At least 1× (for initial setup) | Not used in production |
| **Audio** | 3.5mm or USB audio out | Level change sounds, announcements |
| **Power** | USB-C with UPS battery backup | Must survive brief power blips |
| **EU Sourcing** | Available from EU distributors | Amazon.de, Allnet.de, EU warehouses |
**Reference Board: Orange Pi 5 Plus (~€90-110)**
| Spec | Value |
|------|-------|
| SoC | Rockchip RK3588 (4× A76 @ 2.4GHz + 4× A55) |
| RAM | 8GB / 16GB / 32GB LPDDR5 |
| Storage | M.2 2280 NVMe via PCIe 3.0 ×4 (up to 3500 MB/s) + eMMC |
| Network | Dual 2.5GbE + WiFi 6 + BT 5.0 |
| Video | Dual HDMI (8K + 4K) |
| Power | USB-C, ~3W idle |
| OS | Armbian / Ubuntu / Debian (well-supported) |
**Alternative Board: Radxa Rock 5B+ (~€85-100)**
| Spec | Value |
|------|-------|
| SoC | RK3588 (same as Orange Pi 5 Plus) |
| RAM | Up to 32GB LPDDR5 |
| Storage | 3× M.2 slots (2× NVMe + 1× cellular modem) |
| Network | 2.5GbE + WiFi 6 |
| Bonus | Cellular modem M.2 slot (future: 4G fallback connectivity) |
**Fallback Board: Raspberry Pi 5 (4GB/8GB)**
Acceptable but not recommended. Requires M.2 HAT for NVMe, PCIe 2.0 ×1 only (~400 MB/s), single GbE. Use only if other boards unavailable.
**Enclosure:** Compact aluminum case with passive cooling, mounted in equipment closet or behind a display.
### Display Node
| Component | Spec | Notes |
|-----------|------|-------|
| **Board** | Raspberry Pi Zero 2 W (~€15) | ARM64, 512MB RAM, WiFi, mini-HDMI |
| **Output** | Mini-HDMI to TV/monitor | Plugs directly into any display |
| **Network** | WiFi to venue network | Connects to Leaf via LAN / Netbird |
| **Software** | Minimal Linux + Chromium kiosk | Boot → fullscreen browser → Leaf URL |
| **Power** | USB from TV or wall adapter | Most TVs can power a Pi Zero |
| **Cost** | €20-30 all-in per unit | Replaces €50+ HDMI runs |
No better alternative exists for display nodes at this price point. The Pi Zero 2 W is the reference and recommended board.
### Display Node Boot Sequence
```
1. Power on → Linux boots (< 15s target)
2. WiFi connects (pre-configured or AP setup mode)
3. Netbird mesh establishes (if cross-network)
4. Chromium kiosk → http://leaf.local/display/{node-id}
5. WebSocket handshake → Leaf assigns content view
6. Render loop: state updates → re-render
7. Heartbeat every 5s → Leaf tracks status
```
### Display Node Discovery
New display nodes auto-appear as "Unassigned" in the operator's Display Management panel. The operator names it, assigns it to a view, and the assignment persists across reboots.
---
## 4. Core Infrastructure
### OS Layer: Proxmox VE
The Core runs on Proxmox VE on Hetzner dedicated servers. PVE provides LXC containers for lightweight services and KVM VMs for stateful workloads requiring kernel isolation, with a clear path from single-server dev to multi-node production cluster.
**Why PVE over raw Linux + Docker/K8s:**
- Kubernetes on 1-2 servers is pure overhead — K8s shines at 5+ nodes, and we won't need that until 200+ venues
- Docker Compose on bare metal loses live migration, snapshots, granular resource control
- PVE gives container-level isolation, web management, API automation, and PBS backup integration for free
- Familiar operational model (existing homelab expertise)
- Horizontal scaling by adding nodes to the PVE cluster — zero re-architecture
### Scaling Phases
```
┌─────────────────────────────────────────────────────────────────┐
│ DEV PHASE (now) │
│ 1× Hetzner Dedicated (existing AX41/AX52) │
│ │
│ PVE Host │
│ ├── LXC: felt-core-api (Go backend, 2 vCPU, 2GB) │
│ ├── LXC: felt-nats (NATS + JetStream, 1 vCPU, 1GB) │
│ ├── LXC: felt-authentik (Docker-in-LXC, 2 vCPU, 2GB) │
│ ├── LXC: felt-netbird (unified server + reverse proxy + │
│ │ Traefik, 2 vCPU, 1GB) │
│ └── VM: felt-postgres (PostgreSQL 16, 2 vCPU, 4GB) │
│ │
│ Total: ~9 vCPU, ~11GB RAM — fits comfortably on AX41 │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ GROWTH PHASE (10-50 venues) │
│ 2× Hetzner Dedicated (AX52 or EX44) │
│ PVE Cluster (Hetzner vSwitch for private network) │
│ │
│ Node 1 (Application) │
│ ├── LXC: felt-core-api ×2 (load balanced) │
│ ├── LXC: felt-nats-1 (NATS cluster node 1) │
│ ├── LXC: felt-authentik │
│ └── LXC: felt-netbird (unified + proxy + Traefik) │
│ │
│ Node 2 (Data) │
│ ├── VM: felt-postgres-primary │
│ ├── VM: felt-postgres-replica (streaming replication) │
│ ├── LXC: felt-nats-2 (NATS cluster node 2) │
│ └── LXC: felt-nats-3 (NATS cluster node 3) │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ SCALE PHASE (50-500 venues) │
│ 3-5× Hetzner AX/EX servers │
│ PVE Cluster + Ceph distributed storage │
│ │
│ Nodes 1-2: API tier (4-8× felt-core-api behind LB) │
│ Node 3: NATS cluster (3-node for HA + JetStream replication) │
│ Nodes 4-5: PostgreSQL (primary + sync replica + async replica) │
│ All nodes: Ceph OSD for distributed storage │
│ │
│ Key: API is stateless → scale by adding LXCs behind Netbird LB │
│ Netbird proxy instances cluster automatically (same domain=HA) │
│ NATS JetStream → 3-node cluster handles thousands of streams │
│ PostgreSQL → read replicas for query scaling │
└─────────────────────────────────────────────────────────────────┘
```
### Design Rules for Horizontal Scaling
1. **Core API is stateless.** All state lives in PostgreSQL and NATS. Any API instance can handle any request. Scale by running more instances behind Netbird's reverse proxy / load balancer.
2. **NATS is the backbone.** All inter-service communication goes through NATS subjects. Services don't know or care which server they're running on.
3. **PostgreSQL is the only shared state.** One primary, streaming replicas for reads. Connection pooling via PgBouncer when needed.
4. **No service has local state.** LXC containers can be live-migrated, destroyed, recreated. Nothing is lost because state is always in PostgreSQL or NATS JetStream.
5. **Configuration as code.** Every LXC/VM is reproducible from Ansible playbooks or PVE API calls. No snowflake servers.
### Backup Architecture
```
┌────────────────────────────────────────────────────────┐
│ Hetzner PVE Cluster │
│ │
│ All LXCs + VMs │
│ │ │
│ ├─→ PBS Tier 1 (on-site / same DC) │
│ │ • Hourly snapshots (retain 24h) │
│ │ • Daily backups (retain 30 days) │
│ │ • Fast restore: < 5 min for any LXC/VM │
│ │ │
│ └─→ PBS Tier 2 (off-site / home lab) │
│ • Daily sync from Tier 1 (PBS replication) │
│ • Weekly full backups (retain 90 days) │
│ • Connected via Netbird mesh (encrypted, no ports) │
│ • Dedicated 1Gbit fiber (flat rate) │
│ • Geographic separation = disaster recovery │
│ │
│ PostgreSQL additionally: │
│ ├─→ Continuous WAL archiving to S3/Hetzner Object │
│ └─→ Point-in-time recovery (PITR) to any second │
└────────────────────────────────────────────────────────┘
PBS Off-Site (Home Lab):
├── Dedicated Proxmox Backup Server
├── ZFS pool (mirrored, scrubbed weekly)
├── Retention: 90d daily, 12mo monthly
├── Encrypted at rest (PBS encryption)
├── Netbird peer in 'backup-targets' group
└── Accessible only from PVE cluster (Netbird ACL)
```
**Recovery scenarios:**
| Scenario | Recovery Method | RTO |
|----------|----------------|-----|
| LXC corruption | Restore from PBS Tier 1 snapshot | < 5 min |
| Full server failure | Failover to second node, restore from PBS | < 30 min |
| Datacenter outage | Rebuild from PBS Tier 2 (off-site) | < 2 hours |
| Database corruption | PostgreSQL PITR from WAL archive | < 15 min |
| Ransomware/compromise | PBS Tier 2 is air-gapped (pull-only via Netbird) | < 2 hours |
| Leaf SBC failure | New SBC + Felt OS + restore data from Core | < 30 min |
---
## 5. Data Architecture
### Leaf: LibSQL (Embedded)
LibSQL provides SQLite compatibility with built-in replication support. The Leaf's database is the live operational store: every tournament action hits this DB first.
**Why LibSQL over plain SQLite:**
- SQLite-compatible (same SQL, same tooling)
- Built-in WAL-based replication (can stream to remote)
- Supports embedded mode (no server process needed)
- Better concurrent write handling than vanilla SQLite
- Maintained actively by the Turso team
- Can run as embedded library within the Go binary
### Core: PostgreSQL
PostgreSQL is the canonical master of record. It stores:
- Complete historical data from all venues
- Player profiles (cross-venue)
- League standings (cross-venue)
- Public-facing tournament schedules and results
- Analytics and reporting data
### Data Flow
```
Tournament action occurs (e.g., player busts out)
├─→ Written to LibSQL immediately (local, fast, reliable)
├─→ Broadcast to all WebSocket clients (real-time display update)
└─→ Published to local NATS JetStream (queued for Core sync)
└─→ When online: NATS forwards to Core
└─→ Core ingests into PostgreSQL
```
### Core Schema (PostgreSQL)
```sql
-- Venues (one per Leaf node)
venues (
id UUID PRIMARY KEY,
name TEXT NOT NULL,
slug TEXT UNIQUE,
timezone TEXT DEFAULT 'UTC',
branding JSON, -- logo_url, colors, theme
config JSON, -- default settings
leaf_node_id TEXT UNIQUE, -- hardware ID of the Leaf
last_sync_at TIMESTAMPTZ,
created_at TIMESTAMPTZ DEFAULT NOW()
)
-- Players (cross-venue, canonical profiles)
players (
id UUID PRIMARY KEY,
first_name TEXT,
last_name TEXT,
nickname TEXT,
email TEXT,
phone TEXT,
photo_url TEXT,
notes TEXT, -- private to venue staff
custom_fields JSONB,
home_venue_id UUID REFERENCES venues(id),
created_at TIMESTAMPTZ DEFAULT NOW(),
updated_at TIMESTAMPTZ DEFAULT NOW()
)
-- Tournaments (synced from Leaf after completion or periodically)
tournaments (
id UUID PRIMARY KEY,
venue_id UUID REFERENCES venues(id),
name TEXT NOT NULL,
description TEXT,
league_id UUID REFERENCES leagues(id),
season_id UUID REFERENCES seasons(id),
status TEXT CHECK(status IN ('setup','running','paused','completed','cancelled')),
config JSONB, -- full tournament configuration snapshot
blind_structure JSONB,
started_at TIMESTAMPTZ,
ended_at TIMESTAMPTZ,
final_results JSONB, -- denormalized final standings
synced_from_leaf_at TIMESTAMPTZ,
created_at TIMESTAMPTZ DEFAULT NOW()
)
-- Tournament Entries (each player's participation in a tournament)
tournament_entries (
id UUID PRIMARY KEY,
tournament_id UUID REFERENCES tournaments(id),
player_id UUID REFERENCES players(id),
status TEXT CHECK(status IN ('registered','active','busted','winner')),
finish_position INTEGER,
buy_in_amount NUMERIC(10,2),
total_cost NUMERIC(10,2), -- buy-in + rebuys + addons + bounty
rebuys INTEGER DEFAULT 0,
addons INTEGER DEFAULT 0,
bounties_collected INTEGER DEFAULT 0,
busted_by UUID REFERENCES players(id),
prize_won NUMERIC(10,2) DEFAULT 0,
bounty_won NUMERIC(10,2) DEFAULT 0,
points_earned NUMERIC(10,4) DEFAULT 0,
playing_time_seconds INTEGER,
bought_in_at TIMESTAMPTZ,
busted_at TIMESTAMPTZ,
UNIQUE(tournament_id, player_id)
)
-- Transactions (full financial audit trail)
transactions (
id UUID PRIMARY KEY,
tournament_id UUID REFERENCES tournaments(id),
player_id UUID REFERENCES players(id),
type TEXT CHECK(type IN ('buy_in','rebuy','addon','bounty_collect','prize','refund')),
amount NUMERIC(10,2),
chips INTEGER,
rake NUMERIC(10,2),
receipt_number INTEGER,
operator_id UUID,
voided BOOLEAN DEFAULT FALSE,
voided_at TIMESTAMPTZ,
voided_by UUID,
created_at TIMESTAMPTZ DEFAULT NOW()
)
-- Action History (event log — the queue replay)
action_history (
id UUID PRIMARY KEY,
venue_id UUID REFERENCES venues(id),
tournament_id UUID REFERENCES tournaments(id),
player_id UUID,
action TEXT NOT NULL,
details JSONB,
level INTEGER,
clock_time_ms BIGINT,
operator_id UUID,
created_at TIMESTAMPTZ DEFAULT NOW()
)
-- Leagues
leagues (
id UUID PRIMARY KEY,
venue_id UUID, -- NULL = cross-venue league
name TEXT NOT NULL,
points_formula TEXT,
scoring_config JSONB,
created_at TIMESTAMPTZ DEFAULT NOW()
)
-- Seasons
seasons (
id UUID PRIMARY KEY,
league_id UUID REFERENCES leagues(id),
name TEXT NOT NULL,
start_date DATE,
end_date DATE,
status TEXT CHECK(status IN ('active','completed','archived'))
)
-- Season Standings (materialized/cached)
season_standings (
season_id UUID REFERENCES seasons(id),
player_id UUID REFERENCES players(id),
total_points NUMERIC(10,4),
tournaments_played INTEGER,
best_finish INTEGER,
total_winnings NUMERIC(10,2),
avg_finish NUMERIC(5,2),
bounties_total INTEGER,
last_updated TIMESTAMPTZ,
PRIMARY KEY(season_id, player_id)
)
```
### Leaf Schema (LibSQL)
The Leaf schema mirrors the Core schema structurally but adds operational fields:
```sql
-- Same tables as Core, plus:
-- Display Nodes (Leaf-only, not synced to Core)
display_nodes (
id TEXT PRIMARY KEY, -- hardware-derived ID
name TEXT,
group_name TEXT,
resolution_w INTEGER,
resolution_h INTEGER,
assigned_view TEXT,
assigned_tournament_id TEXT,
screen_cycle JSON,
last_heartbeat_at TEXT, -- ISO datetime
config JSON
)
-- Event Rules (can be global or per-tournament)
event_rules (
id TEXT PRIMARY KEY,
tournament_id TEXT, -- NULL = global defaults
trigger TEXT NOT NULL,
conditions JSON,
actions JSON,
enabled INTEGER DEFAULT 1,
sort_order INTEGER
)
-- Sync Queue Metadata (tracks what's been synced)
sync_state (
entity_type TEXT,
entity_id TEXT,
last_synced_at TEXT,
sync_version INTEGER,
PRIMARY KEY(entity_type, entity_id)
)
-- Clock state stored in tournament record as JSON:
-- {
-- "remaining_ms": 847000,
-- "is_paused": true,
-- "paused_at": "2026-02-27T21:30:00Z",
-- "level_started_at": "2026-02-27T21:15:00Z",
-- "total_elapsed_ms": 5400000
-- }
```
---
## 6. Sync & Message Queue Architecture
### Why NATS JetStream
| Requirement | NATS JetStream |
|-------------|---------------|
| Runs on Pi | Single Go binary, ~10MB RAM |
| Persistent queue | JetStream stores messages to disk |
| Survives offline | Queues locally, forwards when connected |
| At-least-once delivery | Ack-based consumer model |
| Ordered replay | Stream sequences are ordered |
| Lightweight | No JVM, no Erlang runtime |
### Message Flow
```
┌─────────────────────────────────────────────────┐
│ LEAF NODE │
│ │
│ Go Backend │
│ │ │
│ ├─→ LibSQL write (immediate) │
│ ├─→ WebSocket broadcast (immediate) │
│ └─→ NATS Publish to local stream (immediate) │
│ │ │
│ ▼ │
│ NATS JetStream (embedded or sidecar) │
│ Stream: "venue.{venue_id}.events" │
│ │ │
│ └─→ When online: NATS leaf connects to │
│ Core NATS cluster and replicates stream │
└──────────────────────────────────────────────────┘
│ NATS leaf-to-hub connection
│ (auto-reconnect, TLS)
┌──────────────────────────────────────────────────┐
│ CORE │
│ │
│ NATS Server (hub) │
│ │ │
│ └─→ Consumer: sync-worker │
│ │ │
│ └─→ Process message → Write to PostgreSQL │
│ (idempotent upserts keyed on event ID) │
└───────────────────────────────────────────────────┘
```
### Message Schema
Every action on the Leaf produces a message:
```json
{
"id": "evt_01HQ3J5K7M...",
"venue_id": "venue_copenhagen_01",
"tournament_id": "tourn_friday_50",
"type": "player.busted",
"timestamp": "2026-02-27T21:32:15.847Z",
"sequence": 1247,
"data": {
"player_id": "plr_mikkel",
"busted_by": "plr_thomas",
"finish_position": 12,
"bounty_transferred": true,
"level": 8,
"clock_ms": 423000
},
"operator_id": "op_floor_jane"
}
```
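A sketch of the same message as a Go struct, with json tags reproducing the wire keys exactly; `Data` stays raw so each event type can define its own payload. The `Subject` helper is an assumed convenience, not a documented Felt API:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// SyncEvent maps the message schema onto a Go struct.
type SyncEvent struct {
	ID           string          `json:"id"`
	VenueID      string          `json:"venue_id"`
	TournamentID string          `json:"tournament_id"`
	Type         string          `json:"type"`
	Timestamp    time.Time       `json:"timestamp"`
	Sequence     uint64          `json:"sequence"`
	Data         json.RawMessage `json:"data"`
	OperatorID   string          `json:"operator_id"`
}

// Subject builds the JetStream subject for an event class, following the
// venue.{id}.tournament.{id}.{class} convention.
func (e SyncEvent) Subject(class string) string {
	return fmt.Sprintf("venue.%s.tournament.%s.%s", e.VenueID, e.TournamentID, class)
}

func main() {
	e := SyncEvent{VenueID: "venue_copenhagen_01", TournamentID: "tourn_friday_50"}
	fmt.Println(e.Subject("action"))
	// venue.venue_copenhagen_01.tournament.tourn_friday_50.action
}
```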
### NATS Subjects (Topics)
```
venue.{id}.tournament.{id}.action -- tournament actions (bust, rebuy, etc.)
venue.{id}.tournament.{id}.clock -- clock state changes (start, pause, advance)
venue.{id}.tournament.{id}.financial -- transactions (buy-in, prize, refund)
venue.{id}.player.updated -- player profile changes
venue.{id}.display.status -- display node health
venue.{id}.system.health -- leaf node health metrics
```
### Sync Guarantees
| Scenario | Behavior |
|----------|----------|
| Internet down during tournament | Events queue in NATS JetStream on local disk. Tournament unaffected. |
| Internet restored | NATS leaf auto-reconnects to hub, replays queued messages in order. |
| Leaf reboots | NATS JetStream recovers from disk. Un-acked messages re-delivered. |
| Core receives duplicate | Idempotent upsert using event `id`. Safe to replay. |
| Core is down | Leaf doesn't care. NATS leaf can't connect, messages stay queued locally. |
| SD card fails | Restore LibSQL from Core PostgreSQL (reverse sync for disaster recovery). |
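The "safe to replay" guarantee rests on the event `id` being the dedup key. A minimal sketch of the sync-worker's idempotency — in production this is a PostgreSQL upsert (e.g. `INSERT ... ON CONFLICT (id) DO NOTHING`); here an in-memory set stands in for the table:

```go
package main

import "fmt"

// Ingestor makes at-least-once NATS delivery safe: each event id is
// applied exactly once, duplicates are acked but ignored.
type Ingestor struct {
	seen map[string]bool
	rows int
}

func NewIngestor() *Ingestor { return &Ingestor{seen: map[string]bool{}} }

// Ingest reports whether the event was applied (false = duplicate).
func (g *Ingestor) Ingest(eventID string) bool {
	if g.seen[eventID] {
		return false
	}
	g.seen[eventID] = true
	g.rows++
	return true
}

func main() {
	g := NewIngestor()
	fmt.Println(g.Ingest("evt_1"), g.Ingest("evt_1"), g.rows) // true false 1
}
```

Because replaying the whole stream produces the same PostgreSQL state, the Core can reset a consumer to sequence zero after a restore without corrupting history.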
### Reverse Sync (Core → Leaf)
Limited and controlled. The Core can push to the Leaf:
- Updated player profiles (edited in cloud admin)
- League configuration changes
- New player registrations (from online signup)
- Venue branding/config changes
These arrive via a separate NATS subject: `core.venue.{id}.sync` and are processed by the Leaf's sync consumer.
**Rule: During a running tournament, Core never overrides Leaf data for that tournament.**
---
## 7. Feature Specification
### 7.1 Tournament Engine
#### 7.1.1 Tournament Clock
**Core:**
- Countdown timer per level (second-level granularity, millisecond-precision internally)
- Separate break durations
- Pause / resume with visual indicator across all displays
- Manual advance forward / backward
- Jump to any level by number
- Total elapsed time
- Time remaining in current level
**Alerts:**
- Warning thresholds: configurable (e.g., 60s, 30s, 10s)
- Audio alerts: level change, break start/end, warnings (custom sound files)
- Visual: screen flash/pulse, color transitions per level
**Sync:**
- Clock state authoritative on the Leaf
- Clients receive ticks via WebSocket (1/sec normal, 10/sec in final 10 seconds)
- Clients interpolate locally for smooth display
- Reconnecting clients receive full clock state immediately
#### 7.1.2 Blind Structure
**Schedule Builder:**
- Unlimited levels, each configurable:
- Type: Round or Break
- Game Type: No-Limit / Pot-Limit / Limit
- Game Name: freeform with presets (Hold'em, Omaha, Stud, Razz, HORSE, 8-Game...)
- Small Blind / Big Blind
- Ante (standard or Big Blind Ante)
- Up to 4 additional limit fields
- Duration (minutes)
- Chip-Up designation
- Notes (shown to players)
**Structure Wizard:**
- Inputs: player count, starting chips, desired duration, chip denominations
- Output: suggested structure with level durations and chip-up timing
- Accounts for stack-to-blind ratio curve, break frequency
**Templates:**
- Save/load as reusable templates
- Built-in: Turbo (~2hr), Standard (~3-4hr), Deep Stack (~5-6hr), WSOP-style
- Mixed game rotation support (HORSE, 8-Game round definitions)
#### 7.1.3 Chip Management
- Define denominations with colors (hex) and values
- Chip-Up (Color-Up) tracking per break
- Visual indicator on displays: "Color Up: Remove $25 chips"
- Total chips in play calculation
- Average stack display
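The two display figures at the end of that list are straightforward arithmetic. A sketch, with parameter names assumed (the spec defines the inputs — entries, rebuys, add-ons and their chip amounts — but not this exact API):

```go
package main

import "fmt"

// ChipsInPlay totals all chips issued: entries, rebuys and add-ons each
// contribute their own chip amount.
func ChipsInPlay(entries, startChips, rebuys, rebuyChips, addons, addonChips int) int {
	return entries*startChips + rebuys*rebuyChips + addons*addonChips
}

// AvgStack is total chips divided by players remaining (floor division;
// chips are integers).
func AvgStack(chipsInPlay, playersLeft int) int {
	if playersLeft == 0 {
		return 0
	}
	return chipsInPlay / playersLeft
}

func main() {
	// 40 entries x 20k, 12 rebuys x 20k, 25 add-ons x 30k = 1,790,000 chips
	total := ChipsInPlay(40, 20000, 12, 20000, 25, 30000)
	fmt.Println(total, AvgStack(total, 23))
}
```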
### 7.2 Financial Engine
#### 7.2.1 Buy-In Configuration
| Field | Description |
|-------|-------------|
| Buy-in Amount | Cost to enter |
| Starting Chips | Chips received |
| Per-Player Rake | Amount removed per player (house revenue) |
| Fixed Rake | Flat fee from total pool |
| House Contribution | Amount house adds to pool |
| Bounty Cost | Separate bounty fee (if enabled) |
| Points | League points for buying in |
- Multiple rake categories (staff fund, league fund, house)
- Late registration cutoff (by level or time)
- Re-entry support (distinct from rebuy: a new entry after busting)
#### 7.2.2 Rebuys
- Configurable: cost, chips, rake, points
- Limits: max per player, level/time cutoff, chip threshold requirement
- Points can be negative (rebuy penalty in league scoring)
#### 6.2.3 Add-Ons
- Configurable: cost, chips, rake, points
- Availability window (typically at first break)
#### 6.2.4 Bounties
**Fixed Bounty (Chip Model):**
- Bounty cost added to buy-in, chip issued
- Hitman tracking: who eliminated whom (full chain)
- Bounty chips cashed out at tournament end
- "Restrict bounties" option
**Progressive Bounty (v1.1):**
- Half the bounty is collected on elimination; the other half is added to the eliminator's own head
- Mystery bounty variant
#### 6.2.5 Prize Pool & Payouts
- Auto-calculation from all financial inputs
- Guaranteed pot support (house covers shortfall)
- Payout structures: percentage, fixed, or custom table
- Rounding: to nearest $1, $5, $10, $100
- Chop/deal support: ICM calculator, chip-chop, even-chop, custom
- End-of-season withholding: optionally reserve rake portion for season prizes
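How the financial inputs above could compose into a pool and rounded payouts, as a hedged sketch; the function names, parameter set, and the round-down-with-remainder-to-first rule are assumptions, not the shipped behavior:

```go
package main

import "fmt"

// prizePool: entries*(buyIn - perPlayerRake) + rebuy/add-on money
// - fixedRake + houseContribution, floored at the guarantee
// (house covers any shortfall).
func prizePool(entries, buyIn, perPlayerRake, rebuyMoney, addonMoney, fixedRake, houseContribution, guarantee int) int {
	pool := entries*(buyIn-perPlayerRake) + rebuyMoney + addonMoney - fixedRake + houseContribution
	if pool < guarantee {
		pool = guarantee
	}
	return pool
}

// payouts splits the pool by percentage, rounds each prize down to the
// nearest `step` dollars, and gives the rounding remainder to 1st place.
func payouts(pool int, pcts []float64, step int) []int {
	out := make([]int, len(pcts))
	paid := 0
	for i, p := range pcts {
		out[i] = int(float64(pool)*p/100) / step * step
		paid += out[i]
	}
	out[0] += pool - paid // remainder to the winner
	return out
}

func main() {
	pool := prizePool(36, 50, 5, 400, 0, 0, 0, 1500) // 36*(50-5)+400 = 2020
	fmt.Println(pool, payouts(pool, []float64{50, 30, 20}, 5)) // 2020 [1015 605 400]
}
```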
#### 6.2.6 Receipts & Transactions
- Every financial action generates a receipt
- Full transaction log with search/filter
- Transaction editing with audit trail
- Receipt reprint capability
### 6.3 Player System
#### 6.3.1 Player Database
Persistent on Leaf (LibSQL), synced to Core (PostgreSQL).
- UUID-based IDs (globally unique for cross-venue)
- Fields: name, nickname, photo, email, phone, notes, custom fields
- League memberships
- Import from CSV
- Merge duplicates
- Search with typeahead
- QR code generation per player (for self-check-in)
#### 6.3.2 Tournament Operations
**Buy-In Flow:** Search/select player → confirm details → optional auto-seat → receipt → displays update
**Bust-Out Flow:** Select player → select hitman → bounty transfer → auto-rank → rebalance trigger → displays update
**Undo:** Bust-out (with re-ranking), rebuy, add-on, buy-in; full action history per player
**Per-Player Tracking:** Chip count, playing time, seat, moves, rebuys, add-ons, bounties, prize, points, net take, full timestamped action history
#### 6.3.3 Player Mobile (PWA)
Access via QR code scan. No login required.
**Views:**
- Live clock (blinds, timer, next level)
- Full blind schedule (current highlighted)
- Rankings (live bust-out order)
- Prize pool and payout structure
- Personal status (seat, points after PIN claim)
- League standings
- Upcoming tournaments
**Technical:**
- WebSocket for real-time updates
- Auto-reconnect with exponential backoff
- Fallback to 5s polling if WS fails
- "Add to Home Screen" PWA prompt
### 6.4 Table & Seating
#### 6.4.1 Configuration
- Tables with configurable seat counts (6-max to 10-max)
- Table names/labels
- Seat availability marking
- Table blueprints (save venue layout)
- Dealer button tracking
#### 6.4.2 Seating Management
- Random initial seating on buy-in (fills tables evenly)
- Automatic balancing algorithm:
- Table size difference threshold (configurable)
- Move count fairness (minimize repeat moves)
- Dealer button awareness
- Locked players (player/dealers)
- Break short tables first
- Drag-and-drop manual moves on touch interface
- "Break Table" action (dissolve and distribute)
- Shootout mode (no balancing until 1 per table)
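The size rule at the heart of the balancer can be sketched as follows; locked players, move-count fairness, and dealer-button awareness are omitted, and the type names are illustrative:

```go
package main

import "fmt"

type move struct{ from, to string }

// suggestMoves: while the fullest and shortest tables differ by more
// than `threshold` seats, move one player from fullest to shortest.
func suggestMoves(counts map[string]int, threshold int) []move {
	var moves []move
	for {
		var hi, lo string
		for t := range counts {
			if hi == "" || counts[t] > counts[hi] {
				hi = t
			}
			if lo == "" || counts[t] < counts[lo] {
				lo = t
			}
		}
		if counts[hi]-counts[lo] <= threshold {
			return moves
		}
		counts[hi]--
		counts[lo]++
		moves = append(moves, move{from: hi, to: lo})
	}
}

func main() {
	// 9/5/7 seated with a 1-seat threshold -> two moves, T1 to T2
	fmt.Println(suggestMoves(map[string]int{"T1": 9, "T2": 5, "T3": 7}, 1))
}
```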
#### 6.4.3 Seating Display
- Visual top-down table layout (player names in seats)
- List view (sortable)
- Movement screen (pending moves)
- Dealer button indicator
### 6.5 Display Management
#### 6.5.1 Display Node Registry
Operator panel showing all connected nodes:
| Column | Info |
|--------|------|
| Name | Operator-assigned |
| Status | Online / Offline / Stale |
| Resolution | Auto-detected |
| View | Current assignment |
| Group | Optional grouping |
Actions: assign view, rename, group, reboot, remove
#### 6.5.2 View Types
**Tournament:**
- Tournament Clock (large countdown, blinds, entries, avg stack)
- Player Rankings (bust-out order, prizes, bounties)
- Seating Chart (visual table layout)
- Blind Schedule (full schedule, current highlighted)
- Final Table (featured player display with chip counts)
- Player Movement (pending moves)
- Prize Pool (payout structure)
- Tournament Lobby (multi-tournament overview)
**General:**
- Welcome / Promo (venue branding, schedule, announcements)
- League Standings (season leaderboard)
- Upcoming Events (calendar)
**Cash Game (Phase 2 placeholder):**
- Cash Waitlist
- Cash Game Info
#### 6.5.3 Customization
- Theme system (not raw HTML; a clean theme picker)
- Pre-built themes (dark and light variants)
- Custom theme builder (colors, fonts, logo, background)
- Sponsor banner areas (configurable)
- Content toggles per view
- Auto font-scaling to resolution
#### 6.5.4 Screen Cycling & Routing
- Rotation config: Clock (30s) → Rankings (15s) → Clock...
- Conditional: show Rankings when player busts, return to Clock after 20s
- Override: force view on all/selected screens
- Lock: prevent cycling
- Multi-tournament routing: assign displays to specific tournaments or lobby
### 6.6 Events & Automation
#### 6.6.1 Triggers
`tournament.started`, `tournament.ended`, `level.ended`, `level.started`, `break.started`, `break.ended`, `player.busted`, `player.bought_in`, `player.rebought`, `tables.consolidated`, `final_table.reached`, `bubble.reached`, `timer.warning`, `timer.{N}_remaining`
#### 6.6.2 Actions
`play_sound`, `show_message` (overlay with duration/style), `change_view`, `flash_screen`, `change_theme`, `announce` (TTS), `webhook` (HTTP POST, when online), `run_command`
#### 6.6.3 Rule Builder
Visual builder: select trigger → set conditions → add actions. No code required.
```
WHEN level.ended
AND next_level.type == 'break'
AND next_level.chip_up == true
THEN
play_sound("chip_up.mp3")
show_message("Color Up: Remove {chip_up_denomination} chips", 30s)
```
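A minimal sketch of the engine behind the builder, with assumed type and field names:

```go
package main

import "fmt"

// state is a slice of tournament state visible to rule conditions.
type state struct {
	NextLevelType string
	NextChipUp    bool
}

// rule binds a trigger name to an optional condition and action list.
type rule struct {
	Trigger   string
	Condition func(state) bool
	Actions   []string
}

// fire returns the actions of every rule matching the trigger whose
// condition (if any) holds for the current state.
func fire(rules []rule, trigger string, s state) []string {
	var fired []string
	for _, r := range rules {
		if r.Trigger == trigger && (r.Condition == nil || r.Condition(s)) {
			fired = append(fired, r.Actions...)
		}
	}
	return fired
}

func main() {
	rules := []rule{{
		Trigger:   "level.ended",
		Condition: func(s state) bool { return s.NextLevelType == "break" && s.NextChipUp },
		Actions:   []string{`play_sound("chip_up.mp3")`, `show_message("Color Up", 30)`},
	}}
	fmt.Println(fire(rules, "level.ended", state{NextLevelType: "break", NextChipUp: true}))
}
```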
### 6.7 League & Points
#### 6.7.1 Structure
- Leagues (named groups, cross-season)
- Seasons (time-bounded within a league)
- Players can belong to multiple leagues
- Tournaments assigned to league + season
#### 6.7.2 Points Formula Engine
Custom mathematical formulas with tournament variables:
**Variables:** `TOTAL_PLAYERS`, `UNIQUE_PLAYERS`, `PLAYER_PLACE`, `REBUYS`, `ADDONS`, `BOUNTIES`, `BUY_IN`, `PRIZE_WON`, `PLAYING_TIME`, `IS_WINNER`, `MADE_FINAL_TABLE`, `IN_THE_MONEY`
**Functions:** `sqrt()`, `pow()`, `max()`, `min()`, `round()`, `floor()`, `ceil()`, `abs()`, `log()`, `if(condition, true_val, false_val)`
**Testing:** Input test values, preview all placements, graph distribution curve
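The engine evaluates operator-written expressions at runtime; as a sketch, here is one representative formula (an example, not a Felt default) written directly in Go:

```go
package main

import (
	"fmt"
	"math"
)

// points implements the example formula:
//   round(10 * sqrt(TOTAL_PLAYERS) / sqrt(PLAYER_PLACE)) + if(IS_WINNER, 5, 0)
func points(totalPlayers, place int, isWinner bool) int {
	p := math.Round(10 * math.Sqrt(float64(totalPlayers)) / math.Sqrt(float64(place)))
	if isWinner {
		p += 5
	}
	return int(p)
}

func main() {
	// preview the top 5 placements for a 36-player field
	for place := 1; place <= 5; place++ {
		fmt.Println(place, points(36, place, place == 1))
	}
}
```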
#### 6.7.3 Season Standings
- Cumulative points across tournaments
- Configurable: count all or best N of M, minimum attendance
- Historical archives
- Available as display node view
### 6.8 Export
- CSV (configurable columns)
- JSON (full tournament data)
- HTML (themed export with venue branding)
- Hendon Mob format (v1.2)
- Print (direct from operator UI)
### 6.9 TDD Data Import
A critical adoption tool. Venues migrating from The Tournament Director can import their entire history.
**Supported imports:**
- Blind structures and tournament templates (from TDD XML export)
- Player database (names, contact info, aliases, notes)
- Tournament history (results, payouts, bust-out order)
- League standings and season data
- Custom payout structures
**Import process:**
1. Venue exports data from TDD (File → Export XML)
2. Felt import wizard reads the XML, shows preview of what will be imported
3. Venue confirms mapping (player name matching, template naming)
4. Import runs (typically under 2 minutes for years of data)
5. Venue's Felt instance now shows their complete history from day one
**Design goal:** Zero data loss on migration. A venue switching from TDD to Felt should never feel like they're starting over.
---
## 8. UI/UX Design System
### 8.1 Design Principles
| Principle | Meaning |
|-----------|---------|
| **Glanceable** | Critical info visible instantly. No hunting. Large type for key numbers. |
| **Touch-native** | 48px minimum tap targets. Swipe gestures. No hover-dependent UI. |
| **Information-dense, not cluttered** | Show a lot of data, but with clear hierarchy and whitespace. |
| **Dark-room ready** | Default dark theme designed for dim poker rooms. Low-glare, high contrast. |
| **Progressively complex** | Simple tasks are simple. Advanced config is available but not in your face. |
| **Consistent vocabulary** | Use poker terminology throughout. "Bust" not "eliminate". "Blinds" not "stakes". |
| **Instant feedback** | Every action shows immediate visual confirmation. Toasts, animations, state changes. |
### 8.2 Color System
Two built-in themes inspired by Catppuccin Mocha and Tokyo Night palettes, adapted for poker:
#### Felt Dark (Default — based on Catppuccin Mocha)
```
Background:
Base: #1e1e2e (main background)
Surface 0: #313244 (cards, panels)
Surface 1: #45475a (elevated elements, hover states)
Surface 2: #585b70 (borders, subtle dividers)
Text:
Primary: #cdd6f4 (main text — Catppuccin "Text")
Secondary: #a6adc8 (subdued labels — Catppuccin "Subtext 0")
Muted: #6c7086 (disabled, hints — Catppuccin "Overlay 0")
Accent:
Primary: #89b4fa (Catppuccin Blue — buttons, links, active states)
Success: #a6e3a1 (Catppuccin Green — confirmations, chip-up, bought-in)
Warning: #f9e2af (Catppuccin Yellow — timer warnings, approaching limits)
Danger: #f38ba8 (Catppuccin Red — bust-outs, errors, critical)
Info: #89dceb (Catppuccin Sky — informational, neutral actions)
Bounty: #f5c2e7 (Catppuccin Pink — bounty-related highlights)
Prize: #f9e2af (Catppuccin Yellow — money, winnings, prize pool)
Poker-specific:
Felt Green: #74c7b8 (table felt accent, used sparingly for poker context)
Card White: #f5f5f5 (card faces, receipt backgrounds)
```
#### Felt Light (Alternative)
```
Background:
Base: #eff1f5 (Catppuccin Latte Base)
Surface 0: #ffffff
Surface 1: #e6e9ef
Surface 2: #ccd0da
Text:
Primary: #4c4f69 (Catppuccin Latte Text)
Secondary: #6c6f85
Muted: #9ca0b0
Accent:
(Same hues as dark, slightly adjusted saturation for readability on light)
Primary: #1e66f5
Success: #40a02b
Warning: #df8e1d
Danger: #d20f39
```
#### Venue Customization
Venues can override:
- Primary accent color (their brand color)
- Logo (placed in header/footer of displays)
- Background image (for display nodes; subtle, dimmed overlay)
The theme system generates appropriate contrast ratios automatically.
### 8.3 Typography
```
Headings: Inter (clean, professional, excellent at large sizes)
Body: Inter
Monospace: JetBrains Mono (timers, chip counts, blind values)
Timer display: JetBrains Mono, tabular figures, weight 700
Blind values: JetBrains Mono, tabular figures, weight 600
Player names: Inter, weight 500
Labels: Inter, weight 400, uppercase tracking +0.05em
```
**Scale:**
```
Display XL: 72px (tournament clock countdown on big screens)
Display L: 48px (blind values on display nodes)
Display M: 36px (next level blinds, player count)
H1: 28px (section headers on operator UI)
H2: 22px (subsection headers)
H3: 18px (card titles)
Body: 16px (default text)
Caption: 14px (labels, metadata)
Micro: 12px (timestamps, IDs)
```
### 8.4 Component System
#### Cards
Primary container for grouped information. Rounded corners (12px), subtle border (Surface 2), Surface 0 background.
```
┌─────────────────────────────────┐
│ Tournament: Friday $50 NL │ ← Card title (H3, Secondary color)
│ │
│ ▶ RUNNING Level 8/24 │ ← Status badge + progress
│ │
│ 150 / 300 (25) │ ← Blinds in monospace, large
│ │
│ 12:47 remaining │ ← Timer in monospace, Primary accent
│ │
│ 24 players │ Avg: 22,500 │ ← Stats row
└─────────────────────────────────┘
```
#### Badges / Status Pills
Rounded pill shapes with semantic colors:
- `▶ RUNNING`: Success green background
- `⏸ PAUSED`: Warning yellow background
- `✓ COMPLETE`: Muted background
- `SETUP`: Info blue background
#### Action Buttons
- **Primary:** Filled with Primary accent, rounded (8px), 48px min height
- **Secondary:** Outlined with Primary accent border
- **Danger:** Filled with Danger red (bust-out, delete, void)
- **Ghost:** Text-only, used for less important actions
All buttons have:
- 48px minimum touch target
- Press-state animation (subtle scale + darken)
- Loading state (spinner replacing text)
- Disabled state (muted, no interaction)
#### Toast Notifications
Slide in from top-right (desktop) or top-center (mobile):
- Success: "Player X bought in: Table 3, Seat 7"
- Info: "Level 9 starting: 200/400 (50)"
- Warning: "Rebuys close after this level"
- Error: "Action failed, please retry"
Auto-dismiss after 4s, manually dismissible, stackable.
#### Data Tables
Used for player lists, transaction logs, standings:
- Alternating row backgrounds (Base / Surface 0)
- Sortable columns (tap header to sort)
- Sticky header on scroll
- Row actions via swipe (mobile) or hover menu (desktop)
- Search/filter bar above table
### 8.5 Operator UI Layout
#### Navigation
**Mobile/Tablet (primary):**
Bottom tab bar with 5 primary destinations + floating action button (FAB):
```
┌──────────────────────────────────────────┐
│ [Persistent Header: Clock + Status] │
│ │
│ [Content Area — scrollable] │
│ │
│ │
│ ┌───┐ │
│ │ + │ FAB │
│ └───┘ │
├──────────────────────────────────────────┤
│ Overview │ Players │ Tables │ $ │ ⚙️ │
└──────────────────────────────────────────┘
```
**Bottom Tabs:**
1. **Overview**: dashboard with live stats
2. **Players**: player list with actions
3. **Tables**: seating chart
4. **Financials** ($): prize pool, transactions
5. **More** (⚙): displays, settings, events, export
**FAB (Floating Action Button):**
Tap to expand quick actions:
- Bust Player
- Buy In
- Rebuy
- Add-On
- Pause/Resume
**Persistent Header:**
Always visible, shows:
```
┌─────────────────────────────────────────────┐
│ ▶ 12:47 │ Lvl 8 NL Hold'em │ 150/300 (25)│
│ │ Break in 2 levels │ 24/36 plrs │
└─────────────────────────────────────────────┘
```
Tapping the header opens the full clock control panel.
#### Desktop/Laptop (secondary)
Sidebar navigation (collapsible) + wider content area. Same information architecture, more horizontal space used for side-by-side panels.
### 8.6 Display Node UI
Display nodes show purpose-built views optimized for large screens viewed from 3-15 feet away. They serve dual purposes: **tournament displays** during active tournaments and **digital signage** at all other times (or on screens not assigned to a tournament).
#### Design Rules for Display Views
1. **Minimum readable distance: 10 feet.** Timer digits must be legible from across the room.
2. **No interactivity.** These are pure read-only displays.
3. **No scrolling.** Everything must fit on one screen (content adapts to resolution).
4. **High contrast.** Even in a well-lit room, text must be immediately readable.
5. **Smooth transitions.** Level changes, bust-outs, and content rotations animate smoothly (no jarring state jumps).
6. **Venue branding.** Logo always present (configurable size/position), applied automatically to both tournament and info screen views.
7. **Tournament override.** When a tournament starts, assigned screens switch automatically. When it ends or breaks, they revert to their info screen playlist.
#### Tournament Clock (Primary Display View)
```
┌──────────────────────────────────────────────────────────────┐
│ [Venue Logo] Friday $50 NL Hold'em [Promo] │
│ │
│ LEVEL 8 of 24 │
│ No-Limit Hold'em │
│ │
│ │
│ ░░ 12 : 47 ░░ │
│ │
│ │
│ BLINDS ANTE NEXT LEVEL │
│ 150 / 300 25 200 / 400 (50) │
│ │
│ ┌──────────┬──────────┬──────────┬──────────────────────┐ │
│ │ ENTRIES │ REMAIN │AVG STACK │ PRIZE POOL │ │
│ │ 36 │ 24 │ 22,500 │ $1,800 │ │
│ └──────────┴──────────┴──────────┴──────────────────────┘ │
│ │
│ CHIPS IN PLAY: ● $25 ● $100 ● $500 ● $1000 │
│ │
│ Next break in 2 levels │ Color up $25 chips at break │
└──────────────────────────────────────────────────────────────┘
```
**Timer behavior:**
- Normal: white text on dark background
- Warning (configurable, e.g., <60s): text pulses amber
- Final 10s: text pulses red, size increases slightly
- Level change: smooth crossfade animation to new values
- Break: different visual treatment (e.g., tinted background, "BREAK" label replaces timer)
### 8.7 Player Mobile UI
Optimized for phones (320px - 428px width). Clean, fast, minimal.
```
┌─────────────────────────┐
│ Friday $50 NL Hold'em │
│ ▶ RUNNING │
│ │
│ 12 : 47 │ ← large, centered
│ │
│ ┌───────┬────────────┐ │
│ │ BLINDS│ 150 / 300 │ │
│ │ ANTE │ 25 │ │
│ │ NEXT │ 200 / 400 │ │
│ └───────┴────────────┘ │
│ │
│ 24 remain │ Avg 22.5k │
│ $1,800 prize pool │
│ │
│ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ │
│ 📋 Schedule 📊 Rank │ ← tab links
│ 💰 Payouts 🏆 League│
└─────────────────────────┘
```
---
## 9. Tech Stack
### Leaf Node
| Layer | Technology | Rationale |
|-------|------------|-----------|
| **Backend** | Go 1.22+ | Single binary, ARM cross-compile, goroutine concurrency |
| **Database** | LibSQL (embedded via `go-libsql`) | SQLite-compatible, replication-ready, no server process |
| **Message Queue** | NATS + JetStream (embedded) | Go-native, persistent queuing, ~10MB RAM |
| **WebSocket** | `nhooyr.io/websocket` | Clean API, context-aware, works well with Go stdlib |
| **HTTP** | `chi` router | Lightweight, idiomatic, composable middleware |
| **Operator UI** | SvelteKit static SPA | Small bundle, fast on Pi, excellent mobile/touch |
| **Display Views** | Vanilla HTML/CSS/JS | Zero framework overhead, instant load |
| **Player PWA** | SvelteKit (shared codebase) | PWA manifest, responsive, WebSocket integration |
| **Audio** | `mpv` subprocess or Go audio | Sound file playback on Leaf audio out |
| **Mesh** | Netbird (v0.65+) | WireGuard mesh, reverse proxy ingress, identity-aware SSH |
| **OS** | Armbian (Orange Pi) / Pi OS Lite (RPi) 64-bit | Minimal, well-supported for target SBCs |
| **Process Mgmt** | systemd | Auto-start, restart on crash, logging |
### Display Node
| Layer | Technology |
|-------|------------|
| **OS** | Raspberry Pi OS Lite 64-bit (Pi Zero only) |
| **Browser** | Chromium `--kiosk --noerrdialogs --disable-infobars` |
| **Network** | wpa_supplicant + Netbird |
| **Boot** | systemd → Chromium → `http://leaf.local/display/{id}` |
### Core (Cloud — Hetzner)
| Layer | Technology | Rationale |
|-------|------------|-----------|
| **Backend** | Go (shared codebase, different build) | Same language as Leaf, shared types/logic |
| **Database** | PostgreSQL 16 | Master of record, shared with Authentik |
| **Identity** | Authentik (self-hosted) | OIDC/OAuth2 IdP, shared by Netbird + Felt |
| **Message Hub** | NATS Server (clustered) | Receives Leaf sync streams |
| **API** | REST + WebSocket | Admin dashboard, venue management, public API |
| **Frontend** | SvelteKit (SSR for public, SPA for admin) | Shared codebase with Leaf operator UI |
| **Infrastructure** | Netbird unified server (v0.65+) | WireGuard mesh, reverse proxy (replaces nginx), DNS zones, SSH, traffic logging |
| **Hosting** | Hetzner dedicated server (Proxmox VE) | LXC containers + PostgreSQL VM |
---
## 10. Display Node Protocol
### WebSocket Connection
Display nodes connect: `ws://leaf.local/ws/display/{node-id}`
**State Update (Leaf → Node):**
```json
{
"type": "state",
"view": "tournament_clock",
"tournament_id": "abc123",
"ts": 1709042847000,
"data": {
"clock": { "remaining_ms": 847000, "paused": false, "level": 8 },
"blinds": { "sb": 150, "bb": 300, "ante": 25, "game": "No-Limit Hold'em" },
"next": { "sb": 200, "bb": 400, "ante": 50, "type": "round" },
"stats": { "entries": 36, "remaining": 24, "avg_stack": 22500, "prize_pool": 1800 },
"chips": [{ "value": 25, "color": "#2ecc71" }, { "value": 100, "color": "#000" }],
"chip_up": { "active": false, "denomination": 25, "at_next_break": true }
}
}
```
**View Assignment (Leaf → Node):**
```json
{
"type": "assign",
"view": "player_rankings",
"tournament_id": "abc123",
"theme": "felt_dark",
"config": { "show_bounties": true, "max_rows": 20 },
"cycle": { "next_view": "tournament_clock", "after_seconds": 15 }
}
```
**Heartbeat (Node → Leaf):**
```json
{
"type": "heartbeat",
"node_id": "dn-001",
"resolution": { "w": 1920, "h": 1080 },
"uptime": 3847
}
```
**Update Frequency:**
- Normal: 1/sec
- Final 10 seconds: 10/sec
- Paused: on-change only
- Player action: immediate push
- Reconnect: full state dump
### Digital Signage & Info Screens
Display nodes are not just tournament tools; they are the venue's entire digital signage system. Between tournaments (or on screens not assigned to tournament views), displays show venue-managed content: upcoming events, drink specials, sponsor ads, league standings, custom announcements, or any content the venue creates.
**Content Types:**
| Type | Description | Example |
|------|-------------|---------|
| `info_card` | Full-screen rich content card | "Friday Night $50 Freezeout: Starts at 8PM" |
| `event_promo` | Upcoming event with countdown | "PLO Night in 3 days: 12 seats remaining" |
| `sponsor_ad` | Sponsor/partner branding | Logo + message, time-limited display |
| `menu_board` | Drink/food specials | "Happy Hour 6-8PM: $3 Draft Beer" |
| `league_table` | Current league standings | Top 20 players, points, events played |
| `custom_html` | Freeform HTML/CSS content | Anything the venue wants to show |
| `media` | Image or video playback | Venue branding video, photo slideshow |
**WYSIWYG Editor with AI Assist:**
The operator UI includes a visual content editor for creating and managing info screen content. This is not a code editor; it's a drag-and-drop canvas with templates, venue branding auto-applied, and AI assistance for generating professional-looking content from simple prompts.
Key capabilities:
- **Template gallery:** Pre-designed layouts for events, promos, menus, announcements; all customizable
- **Venue branding:** Logo, colors, fonts auto-applied from venue profile. Every screen looks on-brand without effort.
- **AI content generation:** Operator types "Friday PLO night, $50 buy-in, 8PM start, max 40 players" and the AI generates a polished promo card with layout, typography, and imagery. Operator tweaks and publishes.
- **AI image generation:** Generate thematic background images, poker-themed graphics, event artwork from text prompts
- **Rich text editing:** Headlines, body text, images, QR codes, countdowns, live data widgets (current league leader, next event countdown)
- **Scheduling:** Content assigned to time slots. "Show drink specials 6-8PM, then switch to tomorrow's tournament promo." Automated playlists with priority and fallback content.
- **Live data widgets:** Embed live data in info screens: league standings update automatically, event seat counts reflect real registrations, countdowns tick in real-time
- **Per-screen assignment:** Different screens can show different content simultaneously. Bar TV shows drink specials while lobby TV shows event schedule.
**Content Delivery:**
Info screen content is stored on the Leaf as static HTML/CSS/JS bundles. The WYSIWYG editor outputs a content bundle that the display node renders in its Chromium kiosk: the same pipeline as tournament views, with zero additional infrastructure.
```json
{
"type": "assign",
"view": "info_screen",
"content_id": "promo_friday_plo",
"config": {
"duration_seconds": 30,
"transition": "fade"
},
"cycle": {
"playlist": ["promo_friday_plo", "drinks_happy_hour", "league_standings_q1"],
"loop": true,
"override_on_tournament": true
}
}
```
**Tournament Override:** When a tournament is active, screens assigned to tournament views automatically switch. When the tournament ends or breaks, they revert to their info screen playlist. Zero operator intervention needed.
**Sync to Core:** Info screen content bundles sync to Core like any other venue data. A multi-venue operator can create a promo on Core and push it to all venues simultaneously. Sponsors can have their ads deployed across the network from a single dashboard.
---
## 11. API Design
### REST (Operator)
Prefix: `/api/v1`
**Tournaments:**
```
POST /tournaments Create
GET /tournaments List (active + recent)
GET /tournaments/:id Get full state
PUT /tournaments/:id Update config
POST /tournaments/:id/start Start
POST /tournaments/:id/pause Pause
POST /tournaments/:id/resume Resume
POST /tournaments/:id/advance Next level
POST /tournaments/:id/rewind Previous level
POST /tournaments/:id/jump/:level Jump to level
POST /tournaments/:id/end End
```
**Players (in tournament):**
```
POST /tournaments/:id/players/buyin Buy in
POST /tournaments/:id/players/:pid/bust Bust out
POST /tournaments/:id/players/:pid/rebuy Rebuy
POST /tournaments/:id/players/:pid/addon Add-on
POST /tournaments/:id/players/:pid/undo Undo last action
PUT /tournaments/:id/players/:pid/chips Update chip count
PUT /tournaments/:id/players/:pid/seat Move seat
GET /tournaments/:id/players List
GET /tournaments/:id/rankings Rankings
```
**Tables:**
```
GET /tournaments/:id/tables Tables + seating
POST /tournaments/:id/tables/suggest Balancing suggestions
POST /tournaments/:id/tables/move Execute move
POST /tournaments/:id/tables/break/:tid Break a table
```
**Display Nodes:**
```
GET /displays List
PUT /displays/:id Update assignment
POST /displays/:id/reboot Reboot
DELETE /displays/:id Remove
```
**Content / Info Screens:**
```
GET /content List all content items
POST /content Create (from WYSIWYG editor output)
GET /content/:id Get content item + bundle
PUT /content/:id Update
DELETE /content/:id Delete
POST /content/:id/publish Publish to assigned displays
POST /content/generate AI-assisted content generation
GET /content/templates List available templates
GET /playlists List playlists
POST /playlists Create playlist
PUT /playlists/:id Update playlist (items, schedule, priority)
DELETE /playlists/:id Delete playlist
PUT /displays/:id/playlist Assign playlist to display
```
**Player Database:**
```
GET /players List (search, filter, paginate)
POST /players Create
PUT /players/:id Update
DELETE /players/:id Delete
POST /players/import CSV import
```
**TDD Import:**
```
POST /import/tdd Upload TDD XML export
GET /import/tdd/:id/preview Preview import (show what will be imported)
POST /import/tdd/:id/confirm Confirm and execute import
GET /import/tdd/:id/status Import progress/status
```
**Leagues:**
```
GET /leagues List
POST /leagues Create
GET /leagues/:id/standings Season standings
GET /leagues/:id/seasons Seasons
```
### WebSocket Channels
```
/ws/operator Operator (all tournaments, full control)
/ws/display/{node-id} Display node (assigned view data)
/ws/tournament/{tournament-id} Player mobile (read-only tournament data)
```
---
## 12. Security & Threat Model
**Security is a foundational design constraint, not a feature.** One breach (a leaked player database, a manipulated payout, a compromised Leaf) kills the project before it gets traction. Every architectural decision in this document has been evaluated against the threat model below.
### 12.1 Threat Model
#### Assets to Protect
| Asset | Sensitivity | Impact if Compromised |
|-------|-------------|----------------------|
| Player PII (name, email, phone) | **HIGH**: GDPR Article 4 personal data | Mandatory breach notification, regulatory fines, loss of trust |
| Financial records (buy-ins, payouts, rake) | **HIGH**: real money flows | Fraud allegations, venue liability, loss of all venues |
| Tournament results & standings | **MEDIUM**: competitive integrity | Player disputes, league credibility destroyed |
| Operator credentials | **HIGH**: control access | Unauthorized tournament manipulation |
| Leaf node (physical device) | **MEDIUM**: venue hardware | Data extraction, impersonation |
| Core infrastructure | **CRITICAL**: all venues | Total platform compromise |
| API keys & sync credentials | **HIGH**: inter-system trust | Unauthorized data access, injection |
#### Threat Actors
| Actor | Motivation | Capability |
|-------|-----------|------------|
| Disgruntled player | Manipulate standings, access others' data | Low: network-level, social engineering |
| Venue insider (rogue operator) | Skim rake, alter payouts | Medium: physical access to Leaf, valid credentials |
| Competitor | Data theft, service disruption | Medium: targeted attacks |
| Opportunistic attacker | Data theft, ransomware | Medium: automated scanning, known CVEs |
| State actor | Not a realistic threat for this domain | N/A |
#### Attack Surfaces
```
1. Leaf local network (venue WiFi / LAN)
- Player phones on same network as Leaf
- Display nodes on same network
- Potential rogue devices
2. Core cloud infrastructure (Hetzner)
- Public API endpoints (player PWA, venue pages)
- NATS sync ingress from Leaf nodes
- Authentik login endpoints
3. Physical devices (Leaf SBC, display nodes)
- SD card / NVMe extraction
- USB port access
- Network cable interception
4. Supply chain
- Felt OS image integrity
- Dependency vulnerabilities (Go modules, npm packages)
- Update mechanism compromise
```
### 12.2 Identity & Authentication Architecture
#### Authentik (Centralized IdP)
Authentik is the single source of identity truth, serving both infrastructure (Netbird) and application (Felt) through OIDC.
```
┌─────────────────────────────────────────────────────────────┐
│ Authentik (Self-Hosted IdP) │
│ PostgreSQL-backed, Docker │
│ https://auth.felt.io │
│ │
│ ┌──────────────────┐ ┌──────────────────────────────┐ │
│ │ OIDC Provider: │ │ OIDC Provider: │ │
│ │ Netbird │ │ Felt Application │ │
│ │ (infra mesh) │ │ (operator + admin + API) │ │
│ └────────┬─────────┘ └──────────────┬───────────────┘ │
└───────────┼──────────────────────────────┼───────────────────┘
│ │
▼ ▼
┌───────────────────────┐ ┌──────────────────────────────────┐
│ Netbird Mesh │ │ Felt Application Auth │
│ │ │ │
│ • Leaf ↔ Core │ │ Operator (venue): │
│ • Leaf ↔ Display Nodes│ │ PIN login → local JWT │
│ • Admin ↔ any Leaf │ │ (works offline, no IdP needed) │
│ │ │ │
│ Setup keys for nodes │ │ Core Admin Dashboard: │
│ OIDC for operators │ │ OIDC via Authentik │
│ │ │ (SSO, MFA mandatory) │
│ │ │ │
│ │ │ API (Leaf ↔ Core sync): │
│ │ │ mTLS + API key │
│ │ │ validated with JWKS │
│ │ │ │
│ │ │ Player Mobile: │
│ │ │ Public (no auth for views) │
│ │ │ PIN claim for personal data │
└───────────────────────┘ └──────────────────────────────────┘
```
**Why Authentik:**
| Requirement | Authentik |
|-------------|-----------|
| Self-hosted, open source | Apache 2.0, Docker-native |
| OIDC provider for Netbird | First-class integration, documented |
| OIDC provider for app auth | Full OAuth2/OIDC/SAML |
| Custom auth flows | Visual flow editor (login, enrollment, recovery) |
| MFA | TOTP, WebAuthn/FIDO2, SMS |
| PostgreSQL backend | Shares existing Core PostgreSQL |
| Lightweight | ~200MB RAM (vs Keycloak's 512MB+ JVM) |
| License | Apache 2.0 (vs Zitadel's AGPL-3.0) |
**Rejected:** Keycloak (Java/heavy), Zitadel (AGPL license friction for commercial distribution), Authelia (not a full IdP).
#### Auth by Context
| Context | Auth Method | Offline? | MFA? |
|---------|------------|----------|------|
| **Operator on Leaf** | PIN → local JWT (bcrypt hash in LibSQL) | Yes | Optional (TOTP when online) |
| **Operator SSO** | OIDC via Authentik (when Leaf has internet) | No | Required for Admin role |
| **Core Admin** | OIDC via Authentik (mandatory) | No | Required |
| **Leaf ↔ Core sync** | mTLS certificate + API key per venue | Queues offline | N/A |
| **Display nodes** | Netbird setup key (auto-enrolled) | Connects to Leaf | N/A |
| **Player mobile (public views)** | None; read-only tournament data | Direct to Leaf | N/A |
| **Player mobile (personal data)** | 6-digit PIN claim | Direct to Leaf | N/A |
#### Operator Roles
| Role | Scope | Permissions |
|------|-------|------------|
| **Admin** | Venue | Full control: config, tournaments, players, financials, displays, settings |
| **Floor** | Tournament | Runtime actions: bust, rebuy, add-on, seat moves, clock control |
| **Viewer** | Tournament | Read-only: see all data, take no actions |
| **Platform Admin** | All venues (Core only) | Full control of all venues, user management, system config |
| **Venue Owner** | Own venue (Core only) | View own venue data, reports, analytics |
| **League Admin** | League (Core only) | League/season config, cross-venue standings |
### 12.3 Network Security
#### Zero-Trust Mesh (Netbird v0.65+)
**Principle:** Deny-by-default. Every connection must be explicitly permitted by policy.
**Peer Groups & Access Policies:**
| Group | Members | Can Reach | Protocol |
|-------|---------|-----------|----------|
| `leaf-nodes` | All venue Leaf SBCs | `core-services` (NATS, API) | TCP 4222, 443 |
| `leaf-nodes` | All venue Leaf SBCs | Own `display-nodes` | TCP 80, 443 |
| `display-nodes` | All Pi Zero display endpoints | Own venue `leaf-node` only | TCP 80 |
| `core-services` | Core API, NATS, PostgreSQL | `leaf-nodes` (for push/response) | TCP 4222, 443 |
| `admin-devices` | Operator devices (OIDC auth) | Any `leaf-node`, `core-services` | TCP 80, 443, 22 |
| `backup-targets` | Home lab PBS | Reachable by `core-services` only | TCP 8007 |
**Network isolation enforced:**
- Display nodes: can ONLY reach their own venue's Leaf. Not the internet. Not other display nodes. Not the Core.
- Display nodes: **Firewall drop-all-inbound** enabled (Netbird setting): outbound-only posture, no one connects *to* a display node.
- Leaf nodes: can reach Core (sync) and own display nodes. Nothing else.
- Backup: Core can push to home PBS. PBS cannot initiate connections to anything.
- All traffic: WireGuard encrypted (Netbird transport layer).
- **Lazy connections** enabled for Leaf ↔ Core: tunnels activate on-demand, reducing idle resource usage at scale (critical for 500+ venues).
**Enrollment:**
- Leaf nodes: single-use setup key, auto-assigns `leaf-nodes` group + venue tag
- Display nodes: reusable venue-scoped setup key, auto-assigns `display-nodes` + venue tag
- Admin devices: interactive OIDC login via Authentik
- **Auto-updates:** Netbird agent on all peers updated from management dashboard (Windows/macOS automatic, Linux via fleet management)
**Custom DNS Zone (felt.internal):**
Service discovery within the mesh is managed by Netbird's custom DNS zones (v0.63+), eliminating hardcoded IPs:
| Record | Target | Distributed To |
|--------|--------|----------------|
| `core-api.felt.internal` | Core API LXC Netbird IP | `felt-infra` group |
| `nats.felt.internal` | NATS LXC Netbird IP | `felt-infra` group |
| `postgres.felt.internal` | PostgreSQL VM Netbird IP | `felt-infra` group |
| `auth.felt.internal` | Authentik LXC Netbird IP | `felt-infra` group |
| `pbs.felt.internal` | Home PBS Netbird IP | `felt-infra` group |
With the search domain enabled, services reference `nats.felt.internal` (or just `nats`) instead of IPs. Service migration becomes a DNS record update, with zero config changes on consumers.
**Identity-Aware SSH (v0.60+):**
Admin SSH to Leaves uses Netbird's native OpenSSH integration with OIDC authentication:
| IdP Group (Authentik) | SSH Target | Local OS User | Access |
|----------------------|------------|---------------|--------|
| `felt-platform-admins` | Any `leaf-node`, `core-services` | `felt` | Full admin |
| `venue-operators` | Own venue `leaf-node` only | `felt-readonly` | Read-only (logs, status) |
| All others | Denied | | |
Benefits: No SSH key distribution, no `authorized_keys` management. Onboarding = add to Authentik group. Offboarding = remove from group (instant revocation). Browser-based SSH from the Netbird dashboard for emergency maintenance (no laptop needed).
**Traffic Events Logging:**
All connections between Netbird peers are logged with source, destination, protocol, and timestamp. This feeds into the security audit system for anomaly detection (e.g., a Leaf suddenly connecting to unusual peers, a display node initiating unexpected outbound connections).
#### Leaf Local Network Hardening
The Leaf sits on a venue's LAN alongside player phones and other devices. It must be hardened:
- **Firewall (nftables):** Only allow inbound on ports 80 (HTTP), 443 (HTTPS/WSS), and Netbird's UDP port. Drop everything else.
- **No SSH by default.** SSH only accessible via Netbird mesh (admin-devices group, identity-aware OIDC auth). Never exposed on LAN.
- **API rate limiting:** Per-IP rate limits on all endpoints. Player PWA endpoints limited to 60 req/min. Operator endpoints limited to 300 req/min.
- **WebSocket origin validation:** Only accept WS connections from known origins (felt.local, felt.io, *.felt.io).
- **mDNS only for discovery.** Leaf advertises `felt.local` via mDNS for convenience but does not trust mDNS for authentication.
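The firewall bullet above can be sketched as an nftables ruleset. This is an illustrative fragment, not the shipped Leaf config: the Netbird WireGuard port (51820 here) and the mDNS allowance are assumptions that depend on how the Leaf image is actually built.

```
# Sketch of the Leaf inbound policy; port numbers are assumptions
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;

    ct state established,related accept
    iif "lo" accept
    meta l4proto { icmp, ipv6-icmp } accept

    tcp dport { 80, 443 } accept   # HTTP and HTTPS/WSS from the venue LAN
    udp dport 51820 accept         # Netbird WireGuard port (assumed default)
    udp dport 5353 accept          # mDNS, so felt.local discovery keeps working
  }
}
```

The `policy drop` default is what implements "drop everything else"; each accept rule maps to one allowance in the list above.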
#### Core Public Endpoints
The Core exposes public HTTPS endpoints through Netbird's built-in reverse proxy:
- **TLS managed by Netbird proxy.** Automatic Let's Encrypt certificates per subdomain, or static certs with hot-reload.
- **HSTS with preload.** Force HTTPS everywhere.
- **Authentication at proxy layer.** SSO (Authentik), PIN, password per-service, configurable in Netbird dashboard.
- **Rate limiting:** Configurable per proxy service.
- **DDoS:** Hetzner's built-in DDoS protection + Netbird proxy connection limits.
- **No public database access.** PostgreSQL listens only on Netbird mesh IP (`postgres.felt.internal`), never on public interface.
- **No direct Leaf exposure.** Leaves are only reachable through the Netbird mesh: the reverse proxy tunnels traffic via WireGuard, and the Leaf has no public IP and no open ports.
### 12.4 Data Security
#### Encryption
| Layer | Method |
|-------|--------|
| Data in transit (Leaf ↔ Core) | WireGuard (Netbird) + TLS 1.3 |
| Data in transit (Player ↔ Leaf via proxy) | TLS 1.3 (HTTPS to Netbird proxy) + WireGuard (proxy to Leaf) |
| Data in transit (Player ↔ Leaf on LAN) | HTTP (acceptable on local network, upgrade to HTTPS for sensitive ops) |
| Data in transit (Admin SSH → Leaf) | WireGuard (Netbird) + SSH encryption (OIDC auth) |
| Data at rest (Leaf NVMe) | LUKS full-disk encryption (key sealed to TPM if SBC supports, otherwise passphrase at boot) |
| Data at rest (Core PostgreSQL) | LUKS volume encryption on VM disk |
| Data at rest (PBS backups) | PBS built-in AES-256-GCM encryption (key not stored on backup server) |
| Secrets in config | Age-encrypted or SOPS, never plaintext in repo |
#### Multi-Tenancy Isolation
Every data query on the Core is scoped by `venue_id`. This is enforced at multiple layers:
1. **API middleware:** Extract venue_id from authenticated context (JWT claim). Inject into every database query.
2. **Database:** Row-Level Security (RLS) policies on PostgreSQL tables. Even if application code has a bug, the database rejects cross-venue queries.
3. **NATS:** Subject namespacing. `venue.{id}.*`: a Leaf can only publish/subscribe to its own venue's subjects. Enforced by NATS authorization.
4. **Audit:** Every cross-venue data access attempt is logged and alerted.
```sql
-- PostgreSQL RLS example
ALTER TABLE tournaments ENABLE ROW LEVEL SECURITY;
CREATE POLICY venue_isolation ON tournaments
USING (venue_id = current_setting('app.current_venue_id')::uuid);
-- Application sets context per request:
-- SET LOCAL app.current_venue_id = '{venue-uuid}';
```
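The NATS subject-namespacing rule (layer 3 above) amounts to a prefix check. The helper below is illustrative only; real enforcement lives in NATS server-side authorization, not application code.

```go
package main

import (
	"fmt"
	"strings"
)

// canAccess reports whether a Leaf authenticated for venueID may
// publish or subscribe to the given subject: only subjects under its
// own `venue.{id}.` prefix are permitted.
func canAccess(venueID, subject string) bool {
	return strings.HasPrefix(subject, "venue."+venueID+".")
}

func main() {
	fmt.Println(canAccess("venue_cph_01", "venue.venue_cph_01.tournament.clock")) // true
	fmt.Println(canAccess("venue_cph_01", "venue.venue_other.players"))           // false
}
```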
#### PII Handling (GDPR)
- **Minimization:** Only collect what's needed. Photo is optional. Phone is optional.
- **Purpose limitation:** Player data used only for tournament operations and league standings.
- **Right to erasure:** Player deletion cascades: anonymize tournament entries (keep aggregate stats, remove PII), delete from player database, propagate deletion via NATS to Core.
- **Right to export:** Player can request full data export (all tournaments, standings, transactions) in JSON.
- **Consent:** Explicit opt-in for cloud sync of player data. Local-only operation requires no consent beyond venue's own policies.
- **Data retention:** Configurable per venue. Default: active data indefinitely, soft-deleted data purged after 90 days.
- **Breach notification:** If detected, notify all affected venues within 24 hours (regulatory requirement is 72 hours to DPA).
### 12.5 Physical Security
#### Leaf Node Theft/Tampering
If someone steals the Leaf SBC from a venue:
- **LUKS encryption:** NVMe is encrypted at rest. Without the passphrase, data is inaccessible.
- **Netbird key revocation:** Operator (or platform admin) revokes the Leaf's Netbird peer. It can no longer reach Core or any display nodes.
- **API key rotation:** Rotate the venue's API key in Core. Old Leaf can't sync.
- **Data on Core is unaffected.** Core has the complete sync'd copy. New Leaf can be provisioned and restored from Core data.
#### Display Node Theft
Display nodes contain zero sensitive data. They're stateless browsers. Steal one, get a Pi Zero that tries to connect to a Leaf it can no longer reach. Revoke the Netbird setup key for the venue, re-enroll remaining nodes.
### 12.6 Application Security
#### Input Validation
- All API inputs validated with strict schemas (Go struct tags + custom validators)
- SQL injection: impossible by construction; parameterized queries only, via LibSQL and PostgreSQL prepared statements
- XSS: SvelteKit auto-escapes by default. Display views use `textContent`, never `innerHTML`.
- CSRF: SameSite cookies + CSRF tokens on state-changing requests
- Formula engine (league points): sandboxed evaluator with a whitelist of functions. No `eval()`. No arbitrary code execution.
#### Dependency Management
- Go: `go.sum` for integrity verification. Dependabot or Renovate for update alerts.
- Node (SvelteKit): `package-lock.json` pinned. Audit on every build.
- OS: Unattended security updates on all LXC/VM. Leaf OS has read-only root with overlay for config.
- Container images: pinned to digest, not tag. Rebuilt on CVE alerts.
#### Audit Trail
Every action that modifies state produces an audit record:
```json
{
"id": "audit_01HQ...",
"timestamp": "2026-02-27T21:32:15.847Z",
"venue_id": "venue_cph_01",
"tournament_id": "tourn_friday_50",
"operator_id": "op_jane",
"operator_role": "floor",
"action": "player.bust",
"target": { "player_id": "plr_mikkel", "busted_by": "plr_thomas" },
"previous_state": { "status": "active", "seat": "T3-S7" },
"new_state": { "status": "busted", "finish_position": 12 },
"ip_address": "192.168.1.42",
"user_agent": "Mozilla/5.0 (iPad; ...)"
}
```
Audit records are:
- Append-only (no delete, no update)
- Synced to Core via NATS (immutable once synced)
- Retained indefinitely
- Available for export (compliance, disputes)
#### Incident Response
- All security events (failed auth, rate limit hits, unusual access patterns) logged to structured logs
- Alerting via webhook (NATS publish to a monitoring subject; Core forwards to the notification channel)
- Runbook for common scenarios: compromised Leaf, leaked API key, unauthorized access, GDPR request
### 12.7 Player Access via Netbird Reverse Proxy
Players access tournament data from any network. No VPN client, no app install, no relay service.
**How it works:**
Netbird's built-in reverse proxy (v0.65+) exposes each Leaf's player-facing endpoints to the internet through an encrypted WireGuard tunnel. Players hit a public HTTPS URL; the Netbird proxy terminates TLS, tunnels the request through WireGuard, and delivers it to the Leaf's local HTTP/WebSocket server. The player never knows they're going through a tunnel.
```
Player scans QR code or opens bookmark
|
https://play.venue-name.felt.io/t/{tournament-id}
|
Netbird reverse proxy (on Core, automatic TLS via Let's Encrypt)
|
WireGuard tunnel (encrypted, direct to Leaf peer)
|
Leaf Go backend serves HTTP + WebSocket directly
|
Player sees live clock, blinds, rankings in real-time
```
**Netbird proxy configuration per venue (dashboard or API):**
| Service | Subdomain | Target | Auth |
|---------|-----------|--------|------|
| Player tournament views | `play.venue-name.felt.io` | Leaf peer, port 80, path `/t/*` | None (public) |
| Player personal data | `play.venue-name.felt.io` | Leaf peer, port 80, path `/me/*` | PIN |
| Operator remote access | `admin.venue-name.felt.io` | Leaf peer, port 80, path `/*` | SSO (Authentik) |
| Core admin dashboard | `dashboard.felt.io` | Core API LXC, port 80 | SSO (Authentik, MFA) |
**QR Code Strategy:**
- QR codes always encode: `https://play.venue-name.felt.io/t/{tournament-id}`
- This URL always works from anywhere in the world
- On venue WiFi, DNS may optionally resolve `felt.local` for lowest latency (direct to Leaf, no proxy)
- Player PWA can still try local-first: attempt `ws://felt.local/ws/tournament/{id}` with 2s timeout, fall back to the proxy URL
- **No custom relay code.** Netbird proxy handles TLS, tunneling, and auth. The Leaf serves the same HTTP/WebSocket it always does.
**What this eliminates from the codebase:**
- `internal/sync/relay.go`: no Core relay service
- Core WebSocket hub for player connections: not needed
- NATS consumer for state mirroring to Core: not needed for player access (still needed for data sync)
- Smart routing JavaScript in player PWA: simplified to a single URL with optional local-first optimization
**Security:**
- Netbird proxy enforces auth per service (SSO, PIN, password configurable in dashboard)
- Traffic is end-to-end encrypted (TLS to proxy, WireGuard to Leaf)
- Rate limiting configurable at the proxy layer
- Leaf never exposed directly to the internet; only reachable through the Netbird mesh
- If the Leaf is offline, the proxy returns 502, which is appropriate because no tournament is running anyway
**Scaling:**
- Multiple Netbird proxy instances with the same `NB_PROXY_DOMAIN` form a cluster automatically
- Static cert hot-reload supported (no restart needed for cert rotation)
- Proxy is stateless; sessions managed via JWT
---
## 13. Deployment & Operations
### Initial Setup
**Core (one-time):**
```
1. Deploy Authentik on Hetzner (Docker Compose, shares PostgreSQL)
2. Configure Authentik: create Felt OIDC provider + Netbird OIDC provider
3. Deploy Netbird unified server (connected to Authentik for SSO)
- Enable built-in reverse proxy (Traefik TLS passthrough)
- Configure wildcard DNS: *.felt.io → Netbird server
- Configure wildcard DNS: *.proxy.felt.io → Netbird server (if separate)
4. Create Netbird peer groups and zero-trust access policies
5. Create Netbird custom DNS zone: felt.internal
- core-api.felt.internal, nats.felt.internal, postgres.felt.internal, etc.
6. Configure Netbird reverse proxy services:
- dashboard.felt.io → Core API (SSO + MFA)
- auth.felt.io → Authentik (public)
7. Deploy Core Go service + NATS server
8. Generate venue setup key (Netbird) + API key (Felt) per venue
```
**Venue Leaf (per venue):**
```
1. Flash NVMe SSD with Felt OS image (Armbian-based)
2. Insert SSD into Orange Pi 5 Plus (or compatible SBC)
3. Boot, connect Ethernet
4. Access http://felt.local → Setup Wizard:
- Venue name + branding (slug used for subdomain)
- WiFi credentials (for display nodes)
- Netbird setup key (enrolls Leaf into mesh)
- Felt API key (connects to Core sync)
- Operator PINs (local auth)
5. Leaf auto-connects to Core via Netbird mesh
6. Core auto-creates Netbird reverse proxy services for this venue:
- play.venue-name.felt.io → Leaf (public, player views)
- admin.venue-name.felt.io → Leaf (SSO, operator remote access)
7. Venue is live — players can access from anywhere immediately
```
**Display Nodes (per venue):**
```
1. Flash microSD with Felt Display image
2. Insert into Pi Zero 2 W, plug HDMI into TV
3. On first boot: connects to venue WiFi
4. Netbird enrolls via setup key → joins venue display group
5. Chromium kiosk opens → connects to Leaf
6. Appears as "Unassigned" in operator Display panel
7. Operator names it and assigns a view
```
### Updates
- Leaf: auto-check + OTA update (with rollback)
- Display Nodes: updated from Leaf push
- One-tap rollback to previous version
### Monitoring
- Leaf health: CPU, RAM, disk, connected clients
- Display node status: online/offline, heartbeat, uptime
- Full operation log: all actions, all operators, timestamped
---
## 14. Roadmap
### Phase 1: Live Tournament System
**v1.0 — Core**
- Tournament engine (clock, blinds, chips)
- Financial engine (buy-in, rebuy, add-on, bounties, payouts)
- Player management (database, tournament operations)
- Table and seating (config, random seat, auto-balance)
- Operator UI (mobile-first, Felt Dark theme)
- Display management (node registry, view assignment, real-time)
- Digital signage system (info screens, event promos, sponsor ads, playlists, auto-scheduling)
- WYSIWYG content editor with AI assist (template gallery, AI-generated promo cards and imagery)
- Player mobile PWA (clock, blinds, rankings)
- Player access via Netbird reverse proxy (public HTTPS → WireGuard → Leaf, zero custom relay)
- Multi-tournament support
- League and season management with formula engine
- Events engine (triggers, sounds, messages)
- Export (CSV, JSON, HTML)
- NATS-based sync queue (Leaf ↔ Core when online)
- Authentik IdP (shared by Netbird mesh + Core admin)
- Netbird infrastructure layer:
- WireGuard mesh (Leaf ↔ Core ↔ Display Nodes, zero-trust)
- Built-in reverse proxy (player + operator access to Leaf via HTTPS)
- Custom DNS zone (felt.internal for service discovery)
- Identity-aware SSH (admin access to Leaves via Authentik)
- Firewall policies (drop-inbound on display nodes)
- Lazy connections (on-demand tunnels at scale)
**v1.1 — Polish**
- Progressive / mystery bounties
- ICM calculator for chops
- Advanced view theming + custom theme builder
- Tournament templates and profiles
- Player self-check-in via QR
- Blind structure wizard
- Display node screen cycling with conditions
- Achievement/badge system
**v1.2 — Integrations**
- Hendon Mob export
- Cloud admin dashboard
- Remote venue management
- Public venue page
### Phase 2: Cash Game Management
### Phase 3: Full Venue System
### Phase 4: Native Apps & Platform Maturity
---
## 15. Appendix: TDD Feature Parity Matrix
| TDD Feature | Felt | Delta |
|-------------|------|-------|
| Tournament Clock | Parity+ | Multi-device sync, ms precision |
| Blind Structure | Parity+ | Wizard, templates, mixed games |
| Buy-in / Rebuy / Add-on | Parity | Full financial model |
| Bounty System | Parity+ | Fixed + progressive |
| Prize Pool / Payouts | Parity+ | ICM, chop, season withholding |
| Receipts | Parity | Digital with audit trail |
| Player Database | Parity+ | Cloud sync, photos, QR |
| Bust-Out / Undo | Parity | Full action history |
| Table Balancing | Parity | Same algorithm factors |
| Seating Chart | Parity+ | Multi-screen, touch drag-drop |
| Display System | Surpasses | Multi-screen, per-display routing, no HDMI |
| Events Engine | Parity+ | Visual builder, more triggers |
| Sounds | Parity | Audio on Leaf output |
| League/Points | Parity+ | Same formulas + `if()` + achievements |
| Stats/Export | Parity | CSV, JSON, HTML, Hendon Mob |
| Multi-Display | Surpasses | Unlimited displays, no cables |
| Hotkeys | 🔄 Different | Touch UI replaces keyboard shortcuts |
| HTML Layout Editor | 🔄 Different | Theme system replaces raw HTML |
| Windows Desktop | N/A | Web-based, Pi different platform |
| Player Mobile | New | TDD has nothing comparable |
| Multi-Tournament | New | TDD is single-tournament |
| Offline Resilience | New | Local-first with async cloud sync |
| Cash Game | 📅 Phase 2 | Not in TDD |
| Venue Mgmt | 📅 Phase 3 | Not in TDD |
---
*Living document. Project built with Claude Code.*