Phase 05: Multi-Model Discussions

- 4 plans created
- 11 total tasks defined
- Covers M4 (open/parallel), M5 (discuss/sequential), M8 (@mentions)
- Ready for execution

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
| phase | plan | type |
|---|---|---|
| 05-multi-model-discussions | 02 | execute |
Purpose: Allow users to get parallel responses from all project models on a question.

Output: Working /open command that queries all models simultaneously and displays responses.
<execution_context> ~/.claude/get-shit-done/workflows/execute-phase.md ~/.claude/get-shit-done/templates/summary.md </execution_context>
@.planning/PROJECT.md @.planning/ROADMAP.md @.planning/STATE.md

Prior plan context:
@.planning/phases/05-multi-model-discussions/05-01-SUMMARY.md
Key files:
@src/moai/core/ai_client.py @src/moai/bot/handlers/discussion.py @src/moai/bot/handlers/projects.py @src/moai/core/services/discussion.py
Tech stack available: python-telegram-bot, openai (async), sqlalchemy

Established patterns: Typing indicator, command validation, service layer, AIClient.complete()

Constraining decisions:
- 04-02: Typing indicator shown while waiting for AI response
- 04-01: OpenAI SDK for router abstraction (async calls)
Do NOT build full discussion context yet - that's for discuss mode in 05-03.

Task 1: Verify orchestrator module
Import the orchestrator, verify the query_models_parallel signature and that SYSTEM_PROMPT exists.
Done when: orchestrator.py exists with SYSTEM_PROMPT and a query_models_parallel function using asyncio.gather.
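A minimal sketch of the shape being verified here, assuming query_models_parallel takes an async OpenAI-style client, a list of model names, and a prompt; the real signature and SYSTEM_PROMPT text live in src/moai/core/orchestrator.py and may differ:

```python
import asyncio

# Placeholder wording; the real SYSTEM_PROMPT is defined in orchestrator.py.
SYSTEM_PROMPT = "You are one model in a multi-model discussion. Answer directly."

async def _query_one(client, model: str, prompt: str) -> tuple[str, str]:
    # One chat completion per model through the async OpenAI SDK router.
    resp = await client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": prompt},
        ],
    )
    return model, resp.choices[0].message.content

async def query_models_parallel(client, models: list[str], prompt: str) -> dict[str, str]:
    # Fan out to every model at once; gather preserves input order.
    pairs = await asyncio.gather(*(_query_one(client, m, prompt) for m in models))
    return dict(pairs)
```

The asyncio.gather fan-out is what makes /open "parallel": all model calls are in flight at the same time rather than awaited one after another.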
Task 2: Implement /open command handler with database persistence
src/moai/bot/handlers/discussion.py, src/moai/bot/handlers/__init__.py

Add to discussion.py:
- open_command(update, context) handler for "/open <question>"
- Requires a selected project (error if none)
- Uses the project's models list (error if empty)
- Creates Discussion(type=OPEN) and Round(type=PARALLEL, round_number=1) via the discussion service
- Calls query_models_parallel() with the project's models
- Creates a Message for each response
- Formats output: "**Model:**\n> response" for each model
- Shows typing indicator while waiting

Register the /open handler in `__init__.py` with CommandHandler. Update HELP_TEXT in commands.py with /open usage.
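The "Formats output" step above can be sketched as a small pure helper; the function name is illustrative, not an existing helper in the codebase:

```python
def format_parallel_responses(responses: dict[str, str]) -> str:
    """Render {model: response} as '**Model:**' headers with quoted bodies."""
    blocks = []
    for model, text in responses.items():
        # Quote every line of the response so multi-line answers stay quoted.
        quoted = "\n".join(f"> {line}" for line in (text.splitlines() or [""]))
        blocks.append(f"**{model}:**\n{quoted}")
    return "\n\n".join(blocks)
```

Keeping formatting separate from the handler means it can be unit-tested without a bot, a database, or any AI calls.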
Run bot, use `/open What is Python?` with a selected project that has models configured.
/open queries all project models in parallel, persists to DB, displays formatted responses
This documents commands for the full phase even though /discuss, /next, /stop, and @mentions are implemented in later plans.
Verify: Run bot; /help shows a Discussion section with all commands listed.
Done when: HELP_TEXT includes a Discussion section with all multi-model commands.
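One possible shape for the HELP_TEXT addition; the command list comes from this phase's plans, but the descriptions and exact wording are illustrative, not the real commands.py content:

```python
# Illustrative Discussion section for HELP_TEXT in commands.py.
DISCUSSION_HELP = """\
*Discussion*
/open <question> - Ask all project models in parallel
/discuss <topic> - Start a sequential multi-model discussion
/next - Advance the discussion one round
/stop - End the current discussion
@model - Direct a message at one specific model
"""
```

Listing the later-plan commands now (with handlers landing in 05-03 onward) keeps /help stable across the phase instead of growing piecemeal.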
Before declaring plan complete:
- [ ] `python -c "from moai.core.orchestrator import query_models_parallel"` succeeds
- [ ] Bot responds to /open with parallel model responses
- [ ] Discussion/Round/Messages persisted to database after /open
- [ ] /help shows Discussion section
- [ ] Error handling for no project selected, no models configured

<success_criteria>
- /open command works with parallel AI queries
- Responses persisted as Discussion → Round → Messages
- Typing indicator shown during queries
- Proper error messages for edge cases
- M4 milestone (Open mode parallel) complete </success_criteria>