# External Integrations
**Analysis Date:** 2026-02-04
## APIs & External Services
**Hypervisor Management:**
- **Proxmox VE (PVE)** - Cluster/node management
  - SDK/Client: `proxmoxer` v2.2.0 (Python)
  - Auth: Token-based (`root@pam!mgmt` token)
  - Config: `~/.config/pve/credentials`
  - Helper: `~/bin/pve` (list, status, start, stop, create-ct)
  - Endpoint: https://65.108.14.165:8006 (the local host, core.georgsen.dk)
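A minimal Python sketch of a `proxmoxer` client against this endpoint, using the token-auth key names from `~/.config/pve/credentials` (the helper functions are illustrative, not the actual `~/bin/pve` implementation):

```python
"""Sketch: query PVE via proxmoxer with API-token auth.
parse_credentials() mirrors the key=value format of ~/.config/pve/credentials."""


def parse_credentials(text: str) -> dict:
    """Parse a simple key=value credentials file, skipping blanks and comments."""
    creds = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            creds[key.strip()] = value.strip()
    return creds


def list_nodes(creds: dict) -> list:
    """Connect with an API token and return the cluster's node names (live call)."""
    from proxmoxer import ProxmoxAPI  # lazy import; only needed for live calls
    host, _, port = creds["host"].partition(":")
    pve = ProxmoxAPI(
        host,
        port=int(port or 8006),
        user=creds["user"],
        token_name=creds["token_name"],
        token_value=creds["token_value"],
        verify_ssl=False,  # PVE ships a self-signed certificate
    )
    return [n["node"] for n in pve.nodes.get()]
```

`list_nodes(parse_credentials(open(...).read()))` would then return the node names visible to the `root@pam!mgmt` token.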
**Backup Management:**
- **Proxmox Backup Server (PBS)** - Centralized backup infrastructure
  - API: REST over HTTPS at 10.5.0.6:8007
  - Auth: Token-based (`root@pam!pve` token)
  - Helper: `~/bin/pbs` (status, backups, tasks, errors, gc, snapshots, storage)
  - Targets: core.georgsen.dk, pve01.warradejendomme.dk, pve02.warradejendomme.dk namespaces
  - Datastore: Synology NAS via CIFS at 100.105.26.130 (Tailscale)
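The PBS REST API can also be called directly; the sketch below builds the `PBSAPIToken` Authorization header PBS documents for API tokens (the datastore-usage endpoint shown in the comment is one example path, and the secret is a placeholder):

```python
"""Sketch: call the PBS REST API with an API token instead of the ~/bin/pbs helper."""
import json
import ssl
import urllib.request


def pbs_auth_header(userid: str, token_name: str, secret: str) -> dict:
    """Build PBS's API-token Authorization header."""
    return {"Authorization": f"PBSAPIToken={userid}!{token_name}:{secret}"}


def get_json(url: str, headers: dict):
    """GET a PBS endpoint and unwrap the JSON 'data' envelope (live call)."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # PBS uses a self-signed certificate
    ctx.verify_mode = ssl.CERT_NONE
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.loads(resp.read())["data"]


# Live usage:
# get_json("https://10.5.0.6:8007/api2/json/status/datastore-usage",
#          pbs_auth_header("root@pam", "pve", "<secret>"))
```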
**DNS Management:**
- **Technitium DNS** - Internal DNS with API
  - API: REST at http://10.5.0.2:5380/api/
  - Auth: Username/password based
  - Config: `~/.config/dns/credentials`
  - Helper: `~/bin/dns` (list, records, add, delete, lookup)
  - Internal zone: `lab.georgsen.dk`
  - Upstream: Cloudflare (1.1.1.1), Google (8.8.8.8), Quad9 (9.9.9.9)
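A sketch of how Technitium's HTTP API is driven: log in for a session token, then manage records. The endpoint paths follow Technitium's published API; the record name and address in the comment are illustrative, not real entries:

```python
"""Sketch: build and issue Technitium DNS API calls against 10.5.0.2."""
import json
import urllib.parse
import urllib.request

BASE = "http://10.5.0.2:5380/api"


def api_url(path: str, **params) -> str:
    """Build a Technitium API URL with query parameters."""
    return f"{BASE}/{path}?{urllib.parse.urlencode(params)}"


def call(url: str) -> dict:
    """Issue the request and decode the JSON response (live call)."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())


# Live usage (record values are hypothetical):
# token = call(api_url("user/login", **{"user": "admin", "pass": "<password>"}))["token"]
# call(api_url("zones/records/add", token=token,
#              domain="svc.lab.georgsen.dk", type="A", ipAddress="10.5.0.42"))
```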
**Monitoring APIs:**
- **Uptime Kuma** - Status page & endpoint monitoring
  - API: HTTP at 10.5.0.10:3001
  - SDK/Client: `uptime-kuma-api` v1.2.1 (Python)
  - Auth: Username/password login
  - Config: `~/.config/uptime-kuma/credentials`
  - Helper: `~/bin/kuma` (list, info, add-http, add-port, add-ping, delete, pause, resume)
  - URL: https://status.georgsen.dk
- **Beszel** - Server metrics dashboard
  - Backend: PocketBase REST API at 10.5.0.10:8090
  - SDK/Client: `pocketbase` v0.15.0 (Python)
  - Auth: Admin email/password
  - Config: `~/.config/beszel/credentials`
  - Helper: `~/bin/beszel` (list, status, add, delete, alerts)
  - URL: https://dashboard.georgsen.dk
  - Agents: core (10.5.0.254), PBS (10.5.0.6), Dockge (10.5.0.10 + Docker stats)
  - Data retention: 30 days (automatic)
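Both monitoring backends are reachable through the Python SDKs listed above. A sketch, with lazy imports so the file loads without either SDK installed (the Beszel `systems` collection name is an assumption about its PocketBase schema):

```python
"""Sketch: list monitors/systems from Uptime Kuma and Beszel."""


def base_url(host: str, port: int, scheme: str = "http") -> str:
    """Build the service base URL used by both clients."""
    return f"{scheme}://{host}:{port}"


def kuma_monitors(user: str, password: str) -> list:
    """List Uptime Kuma monitors (live call)."""
    from uptime_kuma_api import UptimeKumaApi  # lazy import
    api = UptimeKumaApi(base_url("10.5.0.10", 3001))
    api.login(user, password)
    try:
        return api.get_monitors()
    finally:
        api.disconnect()


def beszel_systems(email: str, password: str) -> list:
    """List Beszel systems via its PocketBase backend (live call).
    'systems' is the assumed collection name."""
    from pocketbase import PocketBase  # lazy import
    client = PocketBase(base_url("10.5.0.10", 8090))
    client.admins.auth_with_password(email, password)
    return client.collection("systems").get_full_list()
```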
**Reverse Proxy & SSL:**
- **Nginx Proxy Manager (NPM)** - Reverse proxy with SSL
  - API: JSON-RPC style (internal Docker API)
  - Helper: `~/bin/npm-api` (--host-list, --host-create, --host-delete, --cert-list)
  - Config: `~/.config/npm/npm-api.conf` (custom API wrapper)
  - UI: http://10.5.0.1:81 (admin panel)
  - SSL Provider: Let's Encrypt (HTTP-01 challenge)
  - Access Control: NPM Access Lists (ID 1: "home_only" whitelist 83.89.248.247)
**Git/Version Control:**
- **Forgejo** - Self-hosted Git server
  - API: REST at 10.5.0.14:3000/api/v1/
  - Auth: API token based
  - Config: `~/.config/forgejo/credentials`
  - URL: https://git.georgsen.dk
  - Repo: `git@10.5.0.14:mikkel/homelab.git`
  - Version: v10.0.1
**Data Stores:**
- **DragonflyDB** - Redis-compatible in-memory store
  - Host: 10.5.0.10 (Docker, managed via Dockge)
  - Port: 6379
  - Protocol: Redis protocol
  - Auth: Password protected (`<password>`, stored in the Docker stack config)
  - Client: redis-cli or any Redis library
  - Usage: Session/cache storage
- **PostgreSQL** - Relational database
  - Host: 10.5.0.109 (VMID 103)
  - Default port: 5432
  - Provisioning: Proxmox community LXC images
  - Usage: Sentry system and other applications
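Since DragonflyDB speaks the Redis protocol, the standard `redis` client works unchanged; a sketch (the password is a placeholder for the value in the Docker stack config):

```python
"""Sketch: connect to DragonflyDB with the standard redis client."""


def dragonfly_url(host: str, port: int, password: str, db: int = 0) -> str:
    """Build a redis:// URL accepted by redis.Redis.from_url()."""
    return f"redis://:{password}@{host}:{port}/{db}"


def ping(url: str) -> bool:
    """Round-trip check against the live server."""
    import redis  # lazy import; only needed for live calls
    return redis.Redis.from_url(url).ping()


# Live usage: ping(dragonfly_url("10.5.0.10", 6379, "<password>"))
```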
## Data Storage
**Databases:**
- **PostgreSQL 13+** (VMID 103)
  - Connection: `postgresql://user@10.5.0.109:5432/dbname`
  - Client: psql (CLI) or any PostgreSQL driver
  - Usage: Sentry defense intelligence system, application databases
- **DragonflyDB** (Redis-compatible)
  - Connection: `redis://10.5.0.10:6379` (with auth)
  - Client: redis-cli or Python redis library
  - Backup: Enabled in Docker config, persists to `./data/`
- **Redis** (VMID 104, deprecated in favor of DragonflyDB)
  - Host: 10.5.0.111
  - Status: Still active, but DragonflyDB is preferred
**File Storage:**
- **Local Filesystem:** Each container's root filesystem is its own ZFS subvolume
- **Shared Storage (ZFS):** `/shared/mikkel/stuff` bind-mounted into containers
  - PVE: `rpool/shared/mikkel` dataset
  - mgmt (102): `~/stuff` with backup=1 (included in PBS backups)
  - dev (111): `~/stuff` (shared access)
  - general (113): `~/stuff` (shared access)
  - SMB Access: `\\mgmt\stuff` via Tailscale MagicDNS
**Backup Target:**
- **Synology NAS** (home network)
  - Tailscale IP: 100.105.26.130
  - Mount: `/mnt/synology` on PBS
  - Protocol: CIFS/SMB 3.0
  - Share: `/volume1/pbs-backup`
  - UID mapping: Mapped to admin (squash: map all)
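On PBS, a mount like this could be expressed as an `/etc/fstab` entry along the following lines. Only the IP, mount point, and SMB 3.0 protocol come from the inventory above; the SMB share name and credentials-file path are assumptions:

```
//100.105.26.130/pbs-backup  /mnt/synology  cifs  credentials=/root/.smb-synology,vers=3.0,_netdev  0  0
```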
## Authentication & Identity
**Auth Providers:**
- **Proxmox PAM** - System-based authentication for PVE/PBS
  - Users: root@pam, other system users
  - Token auth: `root@pam!mgmt` (PVE), `root@pam!pve` (PBS)
**SSH Key Authentication:**
- **Ed25519 keys** for user access
  - Key: `ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIOQrK06zVkfY6C1ec69kEZYjf8tC98icCcBju4V751i mikkel@georgsen.dk`
  - Deployed to all containers at `~/.ssh/authorized_keys` and `/root/.ssh/authorized_keys`
**Telegram Bot Authentication:**
- **Telegram Bot Token** - Stored in `~/telegram/credentials`
- **Authorized Users:** Whitelist stored in `~/telegram/authorized_users` (chat IDs)
- **First user:** Auto-authorized on first `/start` command
- **Two-way messaging:** Text/photos/files saved to `~/telegram/inbox`
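The whitelist check described above can be sketched as a small pure function; the assumption (labeled here, not confirmed by the source) is that `~/telegram/authorized_users` holds one chat ID per line:

```python
"""Sketch of the Telegram bot's whitelist check. Assumes authorized_users
contains one numeric chat ID per line."""


def is_authorized(chat_id: int, authorized_users_text: str) -> bool:
    """True if chat_id appears in the authorized_users file contents."""
    ids = set()
    for line in authorized_users_text.splitlines():
        line = line.strip()
        if line.lstrip("-").isdigit():  # group chat IDs can be negative
            ids.add(int(line))
    return chat_id in ids
```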
## Monitoring & Observability
**Error Tracking:**
- **Sentry** (custom defense intelligence system, VMID 105)
  - Purpose: Monitor military contracting opportunities
  - Databases: PostgreSQL (103) + Redis (104)
  - Not a traditional error tracker - a custom business-intelligence system
**Metrics & Monitoring:**
- **Beszel**: Server CPU, RAM, disk usage metrics
- **Uptime Kuma**: HTTP, TCP port, ICMP ping monitoring
- **PBS**: Backup task logs, storage metrics, dedup stats
**Logs:**
- **PBS logs:** SSH queries via `~/bin/pbs`, stored on PBS container
- **Forgejo logs:** `/var/lib/forgejo/log/forgejo.log` (for fail2ban)
- **Telegram bot logs:** stdout to systemd service `telegram-bot.service`
- **Helper scripts:** Output to stdout, can be piped/redirected
## CI/CD & Deployment
**Hosting:**
- **Hetzner** (public cloud) - Primary: core.georgsen.dk (AX52)
- **Home Infrastructure** - Synology NAS for backups, future NUC cluster
- **Docker/Dockge** - Application deployment via Docker Compose (10.5.0.10)
**CI Pipeline:**
- **None detected** - Manual deployment via Dockge or container management
- **Version control:** Forgejo (self-hosted Git server)
- **Update checks:** `~/bin/updates` script checks for updates across services
  - Tracked: dragonfly, beszel, uptime-kuma, snappymail, dockge, npm, forgejo, dns, pbs
**Deployment Tools:**
- **Dockge** - Docker Compose UI for stack management
- **PVE API** - Proxmox VE for container/VM provisioning
- **Helper scripts** - `~/bin/pve create-ct` for automated container creation
## Environment Configuration
**Required Environment Variables (in credential files):**
DNS (`~/.config/dns/credentials`):
```
DNS_HOST=10.5.0.2
DNS_PORT=5380
DNS_USER=admin
DNS_PASS=<password>
```
Proxmox (`~/.config/pve/credentials`):
```
host=65.108.14.165:8006
user=root@pam
token_name=mgmt
token_value=<token>
```
Uptime Kuma (`~/.config/uptime-kuma/credentials`):
```
KUMA_HOST=10.5.0.10
KUMA_PORT=3001
KUMA_USER=admin
KUMA_PASS=<password>
```
Beszel (`~/.config/beszel/credentials`):
```
BESZEL_HOST=10.5.0.10
BESZEL_PORT=8090
BESZEL_USER=admin@example.com
BESZEL_PASS=<password>
```
Telegram (`~/telegram/credentials`):
```
TELEGRAM_BOT_TOKEN=<token>
```
## Webhooks & Callbacks
**Incoming Webhooks:**
- **Uptime Kuma** - No webhook ingestion detected
- **PBS** - Backup completion tasks (internal scheduling, no external webhooks)
- **Forgejo** - No webhook configuration documented
**Outgoing Notifications:**
- **Telegram Bot** - Two-way messaging for homelab status
  - Commands: /status, /pbs, /backups, /beszel, /kuma, /ping
  - File uploads: Photos saved to `~/telegram/images/`, documents to `~/telegram/files/`
  - Text inbox: Messages saved to `~/telegram/inbox` for Claude review
**Event-Driven:**
- **PBS Scheduling** - Daily backup tasks at 01:00, 01:30, 02:00 (core, pve01, pve02)
- **Prune/GC** - Scheduled at 21:00 (prune) and 22:30 (garbage collection)
## VPN & Remote Access
**Tailscale Network:**
- **Primary relay:** 10.5.0.134 + 10.9.1.10 (VMID 1000, exit node capable)
- **Tailscale IPs:**
  - PBS: 100.115.85.120
  - Synology NAS: 100.105.26.130
  - dev: 100.85.227.17
  - sentry: 100.83.236.113
  - Friends' nodes: pve01 (100.99.118.54), pve02 (100.82.87.108)
  - Other devices: mge-t14, mikflix, xanderryzen, nvr01, tailscalemg
**SSH Access Pattern:**
- All containers/VMs accessible via SSH from mgmt (102)
- SSH keys pre-deployed to all systems
- Tailscale used for accessing from external networks
## External DNS
**DNS Provider:** dns.services (Danish free DNS with API)
- Domains managed:
  - georgsen.dk
  - dataloes.dk
  - microsux.dk
  - warradejendomme.dk
- Used for external domain registration only
- Internal zone lookups go to Technitium (10.5.0.2)
---
*Integration audit: 2026-02-04*