# MoAI - Master of AIs
## What This Is
A multi-AI collaborative brainstorming Telegram bot where multiple AI models (Claude, GPT, Gemini) discuss topics together, see each other's responses, and work toward consensus. Users can run parallel queries, conduct sequential discussions, @mention specific models, and export conversations with AI-generated consensus summaries.
## Core Value
Get richer, more diverse AI insights through structured multi-model discussions—ask a team of AIs instead of just one.
## Requirements
### Validated

- ✓ Project scaffolding (pyproject.toml, ruff, pre-commit, src layout) — v1.0
- ✓ M1: Bot responds to /help, /status — v1.0
- ✓ M2: Project CRUD (/projects, /project new, select, delete, models, info) — v1.0
- ✓ M3: Single model Q&A working (/ask command) — v1.0
- ✓ M4: Open mode (parallel) with multiple models — v1.0
- ✓ M5: Discuss mode (sequential rounds) — v1.0
- ✓ M6: Consensus generation (/consensus) — v1.0
- ✓ M7: Export to markdown (/export) — v1.0
- ✓ M8: @mention direct messages — v1.0
### Active
(None — define requirements for next milestone)
### Out of Scope

- Web UI — Phase 2, after Telegram POC is validated
- Multi-user collaboration — Phase 3 future
- Personas (optimist/critic/pragmatist modes) — future enhancement
- Voting/tallying — future enhancement
- Cross-project memory — future enhancement
- Automated triggers/webhooks — future enhancement
- Voice memo transcription — future enhancement
## Context
**Current state:** v1.0 shipped. The Telegram bot is fully functional at 2,732 lines of Python.
**Tech stack:** Python 3.11+, python-telegram-bot, SQLAlchemy + SQLite, OpenAI SDK (Requesty/OpenRouter compatible), pytest, ruff.
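Because Requesty and OpenRouter both expose OpenAI-compatible endpoints, the client layer only needs a per-provider base URL and API key. A minimal sketch of that provider switch, assuming illustrative names and URLs (verify the exact endpoints against each router's docs; this is not the project's actual config):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ProviderConfig:
    """Where to send OpenAI-style requests and which env var holds the key."""

    base_url: str
    api_key_env: str


# Hypothetical registry; URLs are assumptions, not taken from the project.
PROVIDERS = {
    "requesty": ProviderConfig("https://router.requesty.ai/v1", "REQUESTY_API_KEY"),
    "openrouter": ProviderConfig("https://openrouter.ai/api/v1", "OPENROUTER_API_KEY"),
}


def select_provider(name: str) -> ProviderConfig:
    # With the openai package installed, the next step would be e.g.
    #   AsyncOpenAI(base_url=cfg.base_url, api_key=os.environ[cfg.api_key_env])
    # so swapping routers never touches the core code.
    return PROVIDERS[name]


cfg = select_provider("openrouter")
```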
**Architecture:** Telegram command → Handler → Service → Orchestrator → AI Client → Requesty/OpenRouter → AI APIs
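The layering above can be sketched as plain classes. All names here are hypothetical stand-ins for illustration, not the project's real modules, and the client's HTTP call is stubbed out:

```python
class AIClient:
    """Bottom layer: talks to the router (stubbed here)."""

    def complete(self, model: str, prompt: str) -> str:
        return f"[{model}] {prompt}"  # stand-in for the real router HTTP call


class Orchestrator:
    """Fans a prompt out to one or more models via the client."""

    def __init__(self, client: AIClient) -> None:
        self.client = client

    def ask(self, models: list[str], prompt: str) -> list[str]:
        return [self.client.complete(m, prompt) for m in models]


class AskService:
    """Business logic between the Telegram handler and the orchestrator."""

    def __init__(self, orchestrator: Orchestrator) -> None:
        self.orchestrator = orchestrator

    def handle_ask(self, prompt: str) -> str:
        # A real service would load the selected project's models from SQLite.
        return "\n".join(self.orchestrator.ask(["claude"], prompt))


def ask_command_handler(text: str, service: AskService) -> str:
    # Handler layer: parse the command, delegate, format the reply.
    prompt = text.removeprefix("/ask").strip()
    return service.handle_ask(prompt)


reply = ask_command_handler("/ask hello", AskService(Orchestrator(AIClient())))
```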
**Known tech debt (from v1.0 audit):**

- Missing test coverage for database error paths
- Error-handling enhancements deferred (comprehensive retry/timeout logic)
- Allowed-users middleware not enforced (defined but never checked)
- Minor code patterns: an orphaned export, inconsistent get_selected_project usage, missing re-exports
## Constraints

- **Python version**: 3.11+ — required for modern async patterns
- **Bot framework**: python-telegram-bot (async) — spec requirement
- **Database**: SQLAlchemy + SQLite — upgrades to PostgreSQL in Phase 2
- **AI routing**: Modular abstraction layer — Requesty first, support OpenRouter and others
- **Linting**: ruff (line length 100) — enforced via pre-commit
- **Testing**: pytest, 80%+ coverage on core logic
- **Type hints**: Required on all public functions
- **Docstrings**: Required on modules and classes
- **Logging**: logging module only, no print()
- **Dependencies**: Unpinned unless security requires it
## Key Decisions

| Decision | Rationale | Outcome |
|----------|-----------|---------|
| AI client as abstraction layer | Support Requesty, OpenRouter, direct APIs without changing core code | ✓ Good — OpenAI SDK works with both routers |
| Full project scaffolding first | Consistent tooling from day one; prevents tech debt | ✓ Good — ruff/pre-commit caught issues early |
| User allowlist auth (Phase 1) | Simple for single-user POC | ⚠️ Revisit — middleware defined but not enforced |
| hatchling as build backend | Explicit src layout config | ✓ Good |
| String(36) for UUID storage | SQLite compatibility | ✓ Good |
| Module-level singletons | Simple pattern for AI client and database | ✓ Good — consistent access everywhere |
| asyncio.gather for parallel queries | Concurrent model queries in /open | ✓ Good — graceful per-model error handling |
| user_data for state | Store discussion context across commands | ✓ Good — clean multi-command flows |
| JSON format for consensus | Structured AI output parsing | ✓ Good — reliable extraction |
| BytesIO for export | In-memory file generation | ✓ Good — no temp files needed |
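The asyncio.gather decision above can be illustrated in isolation. `query_model` and the model names below are hypothetical stand-ins for the real AI client; the key detail is `return_exceptions=True`, which keeps one failing model from cancelling the others:

```python
import asyncio


async def query_model(model: str, prompt: str) -> str:
    # Stand-in for the real AI client call; one model fails on purpose.
    if model == "broken-model":
        raise RuntimeError("upstream timeout")
    return f"{model}: answer to {prompt!r}"


async def open_mode(models: list[str], prompt: str) -> dict[str, str]:
    # return_exceptions=True makes gather return the exception object
    # instead of raising, so the remaining tasks still complete.
    results = await asyncio.gather(
        *(query_model(m, prompt) for m in models),
        return_exceptions=True,
    )
    answers: dict[str, str] = {}
    for model, result in zip(models, results):
        if isinstance(result, Exception):
            answers[model] = f"error: {result}"  # degrade gracefully per model
        else:
            answers[model] = result
    return answers


answers = asyncio.run(open_mode(["claude", "gpt", "broken-model"], "Why async?"))
```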
---
*Last updated: 2026-01-17 after v1.0 milestone*