Phase 05: Multi-Model Discussions. 4 plans created, 11 total tasks defined. Covers M4 (open/parallel), M5 (discuss/sequential), M8 (@mentions). Ready for execution.
| phase | plan | type |
|---|---|---|
| 05-multi-model-discussions | 03 | execute |
Purpose: Enable structured multi-round discussions where each model sees prior responses.
Output: Working /discuss, /next, /stop commands with full conversation context passed to each model.
<execution_context> ~/.claude/get-shit-done/workflows/execute-phase.md ~/.claude/get-shit-done/templates/summary.md </execution_context>
@.planning/PROJECT.md @.planning/ROADMAP.md @.planning/STATE.md

Prior plan context:
@.planning/phases/05-multi-model-discussions/05-02-SUMMARY.md
Key files:
@src/moai/core/orchestrator.py @src/moai/core/services/discussion.py @src/moai/bot/handlers/discussion.py @SPEC.md (system prompts and discussion flow)
Tech stack available: python-telegram-bot, openai (async), sqlalchemy
Established patterns: query_models_parallel, Discussion/Round/Message persistence, typing indicator
Constraining decisions:
- 05-02: Orchestrator pattern established
- 04-02: Typing indicator for AI calls
Task 1: Implement run_discussion_round in the orchestrator src/moai/core/orchestrator.py
- async run_discussion_round(discussion: Discussion, models: list[str], project_name: str) -> dict[str, str]
- Builds context from all prior rounds
- Calls each model SEQUENTIALLY (not parallel) so each sees previous in same round
- Returns dict mapping model → response
- Creates Round(type=SEQUENTIAL) and Messages via discussion service
Sequential means: Claude responds, then GPT sees Claude's response AND responds, then Gemini sees both. Use an asyncio loop, not gather, to ensure sequential execution within a round. Verify: import the orchestrator and confirm build_context and run_discussion_round exist. Done when: orchestrator.py has build_context and run_discussion_round with sequential model calls.
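The sequential loop described above can be sketched as follows. This is a minimal illustration, not the orchestrator implementation: call_model is a hypothetical stand-in for the real async OpenAI call, and the plain context string stands in for build_context and the Round/Message persistence.

```python
import asyncio

# Hypothetical stand-in for the real async per-model call in orchestrator.py.
async def call_model(model: str, context: str) -> str:
    return f"{model} answering with context of {len(context)} chars"

async def run_discussion_round(prior_context: str, models: list[str]) -> dict[str, str]:
    """Query each model SEQUENTIALLY so later models see earlier answers.

    A plain for-loop with await (not asyncio.gather) guarantees ordering:
    each response is appended to the running context before the next call.
    """
    responses: dict[str, str] = {}
    context = prior_context
    for model in models:  # deliberately sequential, never gather()
        reply = await call_model(model, context)
        responses[model] = reply
        context += f"\n{model}: {reply}"  # the next model will see this reply
    return responses

round1 = asyncio.run(
    run_discussion_round("Q: pick a schema", ["claude", "gpt", "gemini"])
)
```

With gather() all three calls would start from the same context; the loop is what makes GPT see Claude's answer within the same round.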
Task 2: Implement /discuss command with round limit src/moai/bot/handlers/discussion.py, src/moai/bot/handlers/__init__.py
Add to discussion.py:
- discuss_command(update, context) handler for "/discuss [rounds]"
- Requires selected project with models
- Requires active discussion (from /open) or starts a new one with an inline question
- Parses optional rounds argument (default: 3 from project settings or hardcoded)
- Stores round_limit and current_round in context.user_data["discussion_state"]
- Runs first round via run_discussion_round
- Displays round results with "**Round N:**" header
- Shows "Round 1/N complete. Use /next or /stop"
Register the /discuss handler. Store discussion_id in user_data for /next and /stop to reference. Verify: after /open, run /discuss 3 and confirm round 1 executes with sequential responses. Done when: /discuss starts a sequential discussion, stores state for continuation, and displays formatted output.
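The argument parsing and user_data state for Task 2 can be sketched as a pure helper. Names here are hypothetical (init_discussion_state, DEFAULT_ROUNDS); the real handler would pull the default from project settings and then run the first round.

```python
DEFAULT_ROUNDS = 3  # assumed fallback; the real value may come from project settings

def init_discussion_state(args: list[str], discussion_id: int) -> dict:
    """Parse the optional rounds argument of "/discuss [rounds]" and build
    the dict stored in context.user_data["discussion_state"]."""
    try:
        round_limit = int(args[0]) if args else DEFAULT_ROUNDS
    except ValueError:
        round_limit = DEFAULT_ROUNDS  # non-numeric argument falls back to default
    return {
        "discussion_id": discussion_id,  # lets /next and /stop find the discussion
        "round_limit": max(1, round_limit),
        "current_round": 1,  # /discuss runs round 1 immediately
    }
```

Inside discuss_command this would be used as `context.user_data["discussion_state"] = init_discussion_state(context.args, discussion.id)` (python-telegram-bot exposes the split arguments as `context.args`).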
Task 3: Implement /next and /stop commands src/moai/bot/handlers/discussion.py, src/moai/bot/handlers/__init__.py
Add to discussion.py:
- next_command(update, context) handler for "/next"
- Reads discussion_state from user_data
- Errors if there is no active discussion or the round_limit is reached
- Increments the round counter, runs run_discussion_round
- If final round: auto-completes the discussion, shows "Discussion complete (N rounds)"
- Otherwise: shows "Round N/M complete. Use /next or /stop"
- stop_command(update, context) handler for "/stop"
- Reads discussion_state from user_data
- Completes discussion early via complete_discussion service
- Clears discussion_state from user_data
- Shows "Discussion stopped at round N. Use /consensus to summarize."
Register both handlers in __init__.py. Verify the full flow: /open → /discuss 2 → /next → confirm round 2 runs → /stop or let it complete. Done when: /next advances rounds with context, /stop ends early, and both clear state appropriately.
Before declaring plan complete:
- [ ] Full flow works: /open → /discuss 3 → /next → /next → auto-completes
- [ ] /stop works mid-discussion
- [ ] Each round shows sequential responses (Claude first, then GPT seeing Claude, etc.)
- [ ] Round counter displays correctly (Round 1/3, Round 2/3, etc.)
- [ ] Discussion marked COMPLETED when finished
- [ ] Error messages for: no discussion, round limit reached

<success_criteria>
- /discuss starts sequential multi-round discussion
- /next advances with full context passed to models
- /stop ends discussion early
- Models see all prior responses in context
- M5 milestone (Discuss mode sequential) complete </success_criteria>