# Testing Patterns

**Analysis Date:** 2026-02-04
## Test Framework

**Current State:**

- No automated testing detected in this codebase
- No test files found (no `*.test.py`, `*_test.py`, `*.spec.py` files)
- No testing configuration files (no `pytest.ini`, `tox.ini`, `setup.cfg`)
- No test dependencies in requirements (no `pytest`, `unittest`, `mock` imports)

**Implications:** This is a scripts-only codebase: all code consists of CLI helper scripts and one bot automation. Manual testing is the primary validation method.
## Script Testing Approach

Since this codebase consists entirely of helper scripts and automation, testing is manual and implicit.

**Command-Line Validation:** Each script has a usage/help message showing all commands.

Example from `pve`:

```python
if len(sys.argv) < 2:
    print(__doc__)
    sys.exit(1)
```

Example from `telegram`:

```bash
case "${1:-}" in
    send)  cmd_send "$2" ;;
    inbox) cmd_inbox ;;
    *)     usage; exit 1 ;;
esac
```

**Entry Point Testing:** Main execution guards are used throughout:

```python
if __name__ == "__main__":
    main()
```

This allows scripts to be imported (in principle) without side effects, though in practice they are not used as modules.
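If tests were ever added, this guard is exactly what makes them possible: moving the argument check into a `main(argv)` function (a refactoring sketch, not the scripts' current shape) lets a test exercise the entry point without spawning a process.

```python
import sys


def main(argv=None):
    """Entry point: return 1 (usage error) when no command is given, else 0."""
    argv = sys.argv if argv is None else argv
    if len(argv) < 2:
        # The real scripts print __doc__ here before exiting
        return 1
    return 0


if __name__ == "__main__":
    main()
```

A test can then call `main(["pve"])` directly and assert on the return code instead of parsing process exit status.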
## API Integration Testing

**Pattern: Try-Except Fallback.** Many scripts handle multiple service types by trying different approaches.

From the `pve` script (lines 55-85):

```python
def get_status(vmid):
    """Get detailed status of a VM/container."""
    vmid = int(vmid)
    # Try as container first
    try:
        status = pve.nodes(NODE).lxc(vmid).status.current.get()
        # ... container-specific logic
        return
    except:
        pass
    # Try as VM
    try:
        status = pve.nodes(NODE).qemu(vmid).status.current.get()
        # ... VM-specific logic
        return
    except:
        pass
    print(f"VMID {vmid} not found")
```

This is a pragmatic fallback pattern: if one API call fails, try another. It is useful during development but fragile, because the bare `except:` clauses swallow every error (auth failures, network problems, typos) rather than only "not found" responses.
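A slightly more structured shape for the same fallback would catch only the client's "resource missing" error. The sketch below uses a stand-in `NotFound` exception and injectable lookup callables, since proxmoxer's actual exception types are not shown in the source:

```python
class NotFound(Exception):
    """Stand-in for the API client's 'resource does not exist' error."""


def get_status(vmid, lookups):
    """Try each guest-type lookup in order; return the first hit.

    lookups: callables that return a status dict or raise NotFound.
    """
    for lookup in lookups:
        try:
            return lookup(vmid)
        except NotFound:
            continue  # expected when vmid is not this guest type
    raise NotFound(f"VMID {vmid} not found")
```

Because the lookups are injected, a unit test can simulate "container missing, VM present" without touching a live Proxmox node.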
## Command Dispatch Testing

**Pattern: Argument Validation.** All scripts validate the argument count before executing commands.

From the `beszel` script (lines 101-124):

```python
if __name__ == "__main__":
    if len(sys.argv) < 2:
        usage()
    cmd = sys.argv[1]
    try:
        if cmd == "list":
            cmd_list()
        elif cmd == "info" and len(sys.argv) == 3:
            cmd_info(sys.argv[2])
        elif cmd == "add" and len(sys.argv) >= 4:
            # ...
        else:
            usage()
    except Exception as e:
        print(f"Error: {e}")
        sys.exit(1)
```

This catches typos in command names and wrong argument counts, falling back to the usage help.
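One way to make this dispatch unit-testable is a command table mapping names to (minimum argument count, handler) pairs; the table and handlers below are hypothetical, not the `beszel` script's actual code:

```python
def dispatch(argv, commands):
    """Return the handler's result, or None when the command name or
    argument count is invalid (the caller should then print usage)."""
    if len(argv) < 2 or argv[1] not in commands:
        return None
    min_args, handler = commands[argv[1]]
    args = argv[2:]
    if len(args) < min_args:
        return None
    return handler(args)


# Hypothetical command table for a beszel-like script
COMMANDS = {
    "list": (0, lambda args: "all systems"),
    "info": (1, lambda args: f"info for {args[0]}"),
}
```

Both failure modes (unknown command, too few arguments) then become one-line assertions instead of manual CLI runs.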
## Data Processing Testing

**Bash String Parsing:** The complex regex patterns in the `pbs` script require careful testing.

From `pbs` (lines 122-143):

```bash
ssh_pbs 'tail -500 /var/log/proxmox-backup/tasks/archive 2>/dev/null' | while IFS= read -r line; do
    if [[ "$line" =~ UPID:pbs:[^:]+:[^:]+:[^:]+:([0-9A-Fa-f]+):([^:]+):([^:]+):.*\ [0-9A-Fa-f]+\ (OK|ERROR|WARNINGS[^$]*) ]]; then
        task_time=$((16#${BASH_REMATCH[1]}))
        task_type="${BASH_REMATCH[2]}"
        task_target="${BASH_REMATCH[3]}"
        status="${BASH_REMATCH[4]}"
        # ... process matched groups
    fi
done
```
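The hex timestamp and field extraction could be verified in isolation with a Python reimplementation of the same regex (the UPID fields in the test line are illustrative, not real log data):

```python
import re

# Mirrors the bash pattern: three ignored fields after "UPID:pbs:",
# then the hex start time, task type, and task target.
UPID_RE = re.compile(r"UPID:pbs:[^:]+:[^:]+:[^:]+:([0-9A-Fa-f]+):([^:]+):([^:]+):")


def parse_upid(line):
    """Return (start_time, task_type, task_target) or None if no match."""
    m = UPID_RE.search(line)
    if m is None:
        return None
    return int(m.group(1), 16), m.group(2), m.group(3)
```

This gives the hex-to-int conversion and field order a regression test that the inline bash version cannot easily have.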
**Manual Testing Approach:**

- Run commands against live services
- Inspect output format visually
- Verify JSON parsing with inline Python:

```bash
echo "$gc_json" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('disk-bytes',0))"
```
## Mock Testing Pattern (Telegram Bot)

The telegram bot has one pattern that resembles a mocking seam: all shell access is funneled through `run_command()`.

From `telegram/bot.py` (lines 60-78):

```python
def run_command(cmd: list, timeout: int = 30) -> str:
    """Run a shell command and return output."""
    try:
        result = subprocess.run(
            cmd,
            capture_output=True,
            text=True,
            timeout=timeout,
            env={**os.environ, 'PATH': f"/home/mikkel/bin:{os.environ.get('PATH', '')}"}
        )
        output = result.stdout or result.stderr or "No output"
        # Telegram has 4096 char limit per message
        if len(output) > 4000:
            output = output[:4000] + "\n... (truncated)"
        return output
    except subprocess.TimeoutExpired:
        return "Command timed out"
    except Exception as e:
        return f"Error: {e}"
```

This function:

- Runs external commands with timeout protection
- Handles both stdout and stderr
- Truncates output to fit Telegram's message size limit
- Returns error messages instead of raising exceptions

Because every handler funnels through this single chokepoint, command handlers could be tested by mocking `run_command()` rather than executing real commands.
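For example, the truncation and timeout branches could be exercised without running anything. The sketch below re-creates the function with an injectable `runner` parameter (an assumed refactoring; bot.py's actual signature takes only `cmd` and `timeout`):

```python
import subprocess


def run_command(cmd, timeout=30, runner=subprocess.run):
    """Like bot.py's run_command(), but 'runner' can be replaced in tests."""
    try:
        result = runner(cmd, capture_output=True, text=True, timeout=timeout)
        output = result.stdout or result.stderr or "No output"
        if len(output) > 4000:  # Telegram's per-message limit is 4096 chars
            output = output[:4000] + "\n... (truncated)"
        return output
    except subprocess.TimeoutExpired:
        return "Command timed out"
    except Exception as e:
        return f"Error: {e}"


class FakeResult:
    """Minimal stand-in for subprocess.CompletedProcess."""

    def __init__(self, stdout="", stderr=""):
        self.stdout = stdout
        self.stderr = stderr
```

The same effect is achievable without the extra parameter by patching `subprocess.run` with `unittest.mock.patch` or pytest's `monkeypatch`.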
## Timeout Testing

The telegram bot handles timeouts explicitly.

From `telegram/bot.py`:

```python
result = subprocess.run(
    ["ping", "-c", "3", "-W", "2", host],
    capture_output=True,
    text=True,
    timeout=10  # 10 second timeout
)
```

Different commands have different timeouts:

- `ping_host()`: 10 second timeout
- `run_command()`: 30 second default (configurable)
- `backups()`: 60 second timeout (passed to `run_command()`)

This prevents the bot from hanging on slow or unresponsive services.
## Error Message Testing

Scripts validate that API responses succeeded.

From the `dns` script (lines 62-69):

```bash
curl -s "$BASE/zones/records/add?..." | python3 -c "
import sys, json
data = json.load(sys.stdin)
if data['status'] == 'ok':
    print(f\"Added: {data['response']['addedRecord']['name']} -> ...\")
else:
    print(f\"Error: {data.get('errorMessage', 'Unknown error')}\")
"
```

This pattern:

- Parses the JSON response
- Checks the `status` field
- Prints a user-friendly error message on failure
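The same check could be extracted into a testable helper; `summarize_response` below is a sketch mirroring the inline script, with the record detail shortened to just the name:

```python
import json


def summarize_response(raw):
    """Mirror the dns script's check: 'ok' -> added record name, else error."""
    data = json.loads(raw)
    if data.get("status") == "ok":
        name = data["response"]["addedRecord"]["name"]
        return f"Added: {name}"
    return f"Error: {data.get('errorMessage', 'Unknown error')}"
```

Feeding it canned JSON strings covers the success, explicit-error, and missing-error-message cases without a live DNS server.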
## Credential Testing

Scripts assume credentials exist and are properly formatted.

From `pve` (lines 17-34):

```python
creds_path = Path.home() / ".config" / "pve" / "credentials"
creds = {}
with open(creds_path) as f:
    for line in f:
        if "=" in line:
            key, value = line.strip().split("=", 1)
            creds[key] = value
pve = ProxmoxAPI(
    creds["host"],
    user=creds["user"],
    token_name=creds["token_name"],
    token_value=creds["token_value"],
    verify_ssl=False
)
```

**Missing Error Handling:**

- No check that the credentials file exists
- No check that required keys are present
- No validation that the API connection succeeds
- Crashes with `FileNotFoundError` if the file is missing, or `KeyError` if a required key is absent
**Recommendation for Testing:** Add pre-flight validation:

```python
required_keys = ["host", "user", "token_name", "token_value"]
missing = [k for k in required_keys if k not in creds]
if missing:
    print(f"Error: Missing credentials: {', '.join(missing)}")
    sys.exit(1)
```
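Pulling the parsing and validation into one function would make both failure modes directly testable against temporary files (`load_credentials` is a hypothetical helper, not in the scripts today):

```python
from pathlib import Path

REQUIRED_KEYS = ("host", "user", "token_name", "token_value")


def load_credentials(path):
    """Parse key=value lines; fail fast on a missing file or missing keys."""
    path = Path(path)
    if not path.is_file():
        raise FileNotFoundError(f"Credentials file not found: {path}")
    creds = {}
    for line in path.read_text().splitlines():
        if "=" in line:
            key, value = line.strip().split("=", 1)
            creds[key] = value
    missing = [k for k in REQUIRED_KEYS if k not in creds]
    if missing:
        raise ValueError(f"Missing credentials: {', '.join(missing)}")
    return creds
```

Each helper script could then share this loader instead of repeating the inline parsing.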
## File I/O Testing

The telegram bot handles file operations defensively.

From `telegram/bot.py` (lines 277-286):

```python
# Create images directory
images_dir = Path(__file__).parent / 'images'
images_dir.mkdir(exist_ok=True)
# Get the largest photo (best quality)
photo = update.message.photo[-1]
file = await context.bot.get_file(photo.file_id)
# Download the image
filename = f"{file_timestamp}.jpg"
filepath = images_dir / filename
await file.download_to_drive(filepath)
```

Patterns:

- `mkdir(exist_ok=True)`: safely creates the directory without erroring if it already exists
- Timestamp-based filenames to avoid collisions: `f"{file_timestamp}_{original_name}"`
- `pathlib` for cross-platform path handling
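A collision-avoiding filename builder like the bot's could be isolated and tested with a fixed clock (the function and timestamp format here are assumptions, not bot.py's exact code):

```python
from datetime import datetime, timezone
from pathlib import Path


def image_path(images_dir, original_name=None, now=None):
    """Build a timestamp-based path under images_dir to avoid collisions."""
    now = now or datetime.now(timezone.utc)
    stamp = now.strftime("%Y%m%d_%H%M%S")
    name = f"{stamp}_{original_name}" if original_name else f"{stamp}.jpg"
    return Path(images_dir) / name
```

Passing `now` explicitly keeps the test deterministic instead of depending on the wall clock.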
## What to Test If Writing Tests

If converting to automated tests, prioritize:

**High Priority:**

1. **Telegram bot command dispatch** (`telegram/bot.py` lines 107-366)
   - Each command handler should have unit tests
   - Mock `subprocess.run()` to avoid calling actual commands
   - Test authorization checks (`is_authorized()`)
   - Test output truncation for large responses
2. **Credential loading** (all helper scripts)
   - Test missing credentials file error
   - Test malformed credentials
   - Test missing required keys
3. **API response parsing** (`dns`, `pbs`, `beszel`, `kuma`)
   - Test JSON parsing errors
   - Test malformed responses
   - Test status code handling

**Medium Priority:**

1. **Bash regex parsing** (`pbs` task/error log parsing)
   - Test hex timestamp conversion
   - Test status code extraction
   - Test task target parsing with special characters
2. **Timeout handling** (all `run_command()` calls)
   - Test command timeout
   - Test output truncation
   - Test error message formatting

**Low Priority:**

- Integration tests with real services (kept in a separate test suite)
- Performance tests for large data sets
## Current Test Coverage

**Implicit Testing:**

- Manual CLI testing during development
- Live service testing (commands run against real PVE, PBS, DNS, etc.)
- User/admin interaction testing (Telegram bot exercised via `/start`, `/status`, etc.)

**Gaps:**

- No regression testing
- No automated validation of API response formats
- No error case testing
- No safety net for refactoring