moai/.planning/phases/05-multi-model-discussions/05-02-PLAN.md

Phase 05: Multi-Model Discussions

Plan type: execute (phase 05-multi-model-discussions, plan 02)
Implement /open command for parallel multi-model queries (M4 milestone).

Purpose: let users pose one question to all project models and receive their responses in parallel. Output: a working /open command that queries all models simultaneously and displays their responses.

<execution_context> ~/.claude/get-shit-done/workflows/execute-phase.md ~/.claude/get-shit-done/templates/summary.md </execution_context>

@.planning/PROJECT.md @.planning/ROADMAP.md @.planning/STATE.md

Prior plan context:

@.planning/phases/05-multi-model-discussions/05-01-SUMMARY.md

Key files:

@src/moai/core/ai_client.py @src/moai/bot/handlers/discussion.py @src/moai/bot/handlers/projects.py @src/moai/core/services/discussion.py

Tech stack available: python-telegram-bot, openai (async), sqlalchemy.

Established patterns: typing indicator, command validation, service layer, AIClient.complete().

Constraining decisions:

  • 04-02: Typing indicator shown while waiting for AI response
  • 04-01: OpenAI SDK for router abstraction (async calls)
Task 1: Create orchestrator module with parallel query function

File: src/moai/core/orchestrator.py

Create orchestrator.py with:

  • SYSTEM_PROMPT constant for the roundtable discussion (from SPEC.md)
  • async query_models_parallel(models: list[str], question: str, project_name: str) -> dict[str, str]
  • Uses asyncio.gather() to call AIClient.complete() for all models simultaneously
  • Returns a dict mapping model name → response
  • Handles individual model failures gracefully (returns an error message for that model)
  • Builds the system prompt with "Other participants: {models}" and "Topic: {project_name}"

Do NOT build full discussion context yet; that is for discuss mode in 05-03.

Verification: import the orchestrator and confirm the query_models_parallel signature and that SYSTEM_PROMPT exists.

Done when: orchestrator.py exists with SYSTEM_PROMPT and a query_models_parallel function using asyncio.gather().
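The fan-out described in Task 1 can be sketched as below. This is a minimal sketch, not the real module: the AIClient here is a stand-in stub, its complete() signature is assumed, and the SYSTEM_PROMPT wording is illustrative rather than the SPEC.md text.

```python
import asyncio

# Illustrative wording only; the real constant comes from SPEC.md.
SYSTEM_PROMPT = (
    "You are one voice in a roundtable discussion.\n"
    "Other participants: {models}\n"
    "Topic: {project_name}"
)


class FakeAIClient:
    """Stand-in for moai's AIClient; the real one calls the OpenAI router."""

    async def complete(self, model: str, system: str, prompt: str) -> str:
        await asyncio.sleep(0)  # simulate network I/O
        return f"{model} says: answer to {prompt!r}"


client = FakeAIClient()


async def query_models_parallel(
    models: list[str], question: str, project_name: str
) -> dict[str, str]:
    async def _one(model: str) -> str:
        # Each model sees the *other* participants in its system prompt.
        others = ", ".join(m for m in models if m != model) or "none"
        system = SYSTEM_PROMPT.format(models=others, project_name=project_name)
        try:
            return await client.complete(model=model, system=system, prompt=question)
        except Exception as exc:  # one model failing must not sink the others
            return f"[error from {model}: {exc}]"

    # gather() runs all coroutines concurrently; result order matches `models`.
    answers = await asyncio.gather(*(_one(m) for m in models))
    return dict(zip(models, answers))
```

Wrapping each call in its own try/except keeps one provider outage from raising out of gather() and discarding every other model's answer, which matches the "handles individual model failures gracefully" requirement.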

Task 2: Implement /open command handler with database persistence

Files: src/moai/bot/handlers/discussion.py, src/moai/bot/handlers/__init__.py

Add to discussion.py an open_command(update, context) handler for "/open <question>":

  • Requires a selected project (error if none)
  • Uses the project's models list (error if empty)
  • Creates Discussion(type=OPEN) and Round(type=PARALLEL, round_number=1) via the discussion service
  • Calls query_models_parallel() with the project models
  • Creates a Message for each response
  • Formats output as "**Model:**\n> response" for each model
  • Shows the typing indicator while waiting

Register the /open handler in __init__.py with CommandHandler, and update HELP_TEXT in commands.py with /open usage.

Verification: run the bot and send "/open What is Python?" with a selected project that has models configured.

Done when: /open queries all project models in parallel, persists to the database, and displays formatted responses.
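The "**Model:**\n> response" output shape above can be sketched as a small pure formatting helper. The function name and the line-by-line quoting of multi-line replies are assumptions; the plan only specifies the single-line shape.

```python
def format_open_responses(responses: dict[str, str]) -> str:
    """Render each model's reply as a bold model name followed by a blockquote."""
    blocks = []
    for model, text in responses.items():
        # Prefix every line so multi-line replies stay inside one blockquote.
        lines = text.splitlines() or ["(empty response)"]
        quoted = "\n".join(f"> {line}" for line in lines)
        blocks.append(f"**{model}:**\n{quoted}")
    return "\n\n".join(blocks)
```

Keeping the formatting separate from the handler makes it trivial to unit-test without a Telegram Update object, in line with the established service-layer pattern.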

Task 3: Update help text and status for multi-model support

File: src/moai/bot/handlers/commands.py

Update HELP_TEXT to add a Discussion section:

```
**Discussion**
/open <question> - Ask all models (parallel)
/discuss <topic> [rounds] - Start discussion (default: 3)
/next - Next round manually
/stop - End discussion
@model <message> - Direct message to model
```

This documents commands for the full phase even though /discuss, /next, /stop, and @mentions are implemented in later plans.

Verification: run the bot; /help shows the Discussion section with all commands listed.

Done when: HELP_TEXT includes a Discussion section with all multi-model commands.

Before declaring the plan complete:

- [ ] `python -c "from moai.core.orchestrator import query_models_parallel"` succeeds
- [ ] Bot responds to /open with parallel model responses
- [ ] Discussion/Round/Messages persisted to the database after /open
- [ ] /help shows the Discussion section
- [ ] Error handling for no project selected and no models configured

<success_criteria>

  • /open command works with parallel AI queries
  • Responses persisted as Discussion → Round → Messages
  • Typing indicator shown during queries
  • Proper error messages for edge cases
  • M4 milestone (Open mode parallel) complete

</success_criteria>
After completion, create `.planning/phases/05-multi-model-discussions/05-02-SUMMARY.md`