Phase 05: Multi-Model Discussions - 4 plans created - 11 total tasks defined - Covers M4 (open/parallel), M5 (discuss/sequential), M8 (@mentions) - Ready for execution

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
---
phase: 05-multi-model-discussions
plan: 03
type: execute
---

<objective>
Implement /discuss mode with sequential rounds, context building, and /next, /stop commands (M5 milestone).

Purpose: Enable structured multi-round discussions where each model sees prior responses.
Output: Working /discuss, /next, /stop commands with full conversation context passed to each model.
</objective>

<execution_context>
~/.claude/get-shit-done/workflows/execute-phase.md
~/.claude/get-shit-done/templates/summary.md
</execution_context>

<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md

# Prior plan context:
@.planning/phases/05-multi-model-discussions/05-02-SUMMARY.md

# Key files:
@src/moai/core/orchestrator.py
@src/moai/core/services/discussion.py
@src/moai/bot/handlers/discussion.py
@SPEC.md (system prompts and discussion flow)

**Tech stack available:** python-telegram-bot, openai (async), sqlalchemy
**Established patterns:** query_models_parallel, Discussion/Round/Message persistence, typing indicator
**Constraining decisions:**
- 05-02: Orchestrator pattern established
- 04-02: Typing indicator for AI calls
</context>

<tasks>

<task type="auto">
<name>Task 1: Add context building and sequential round execution to orchestrator</name>
<files>src/moai/core/orchestrator.py</files>
<action>Add to orchestrator.py:

- build_context(discussion: Discussion) -> list[dict]
  - Converts all rounds/messages to OpenAI message format
  - Returns list of {"role": "assistant"/"user", "content": "**Model:** response"}
  - Models see their own responses as assistant, others' as user (simplified: all prior as user context)
  - Include original question as first user message

- async run_discussion_round(discussion: Discussion, models: list[str], project_name: str) -> dict[str, str]
  - Builds context from all prior rounds
  - Calls each model SEQUENTIALLY (not parallel) so each sees previous in same round
  - Returns dict mapping model → response
  - Creates Round(type=SEQUENTIAL) and Messages via discussion service

Sequential means: Claude responds, then GPT sees Claude's response AND responds, then Gemini sees both.
Use an asyncio loop, not gather, to ensure sequential execution within the round.</action>
<verify>Import orchestrator, verify build_context and run_discussion_round exist</verify>
<done>orchestrator.py has build_context and run_discussion_round with sequential model calls</done>
</task>

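A minimal sketch of the sequential semantics Task 1 describes. The `Discussion`/`Message` dataclasses here are simplified stand-ins for the real SQLAlchemy models, and `query_model` is a hypothetical injected callable standing in for the async OpenAI client; only the context format and the plain-loop (not `gather`) ordering are taken from the plan:

```python
import asyncio
from dataclasses import dataclass, field

# Hypothetical simplified stand-ins for the Discussion/Round/Message models.
@dataclass
class Message:
    model: str
    content: str

@dataclass
class Discussion:
    question: str
    messages: list = field(default_factory=list)

def build_context(discussion: Discussion) -> list[dict]:
    """Convert the discussion so far into OpenAI chat-message format.

    Simplified variant from the plan: the original question is the first
    user message, and every prior response is passed as user context,
    labelled "**Model:** response".
    """
    context = [{"role": "user", "content": discussion.question}]
    for msg in discussion.messages:
        context.append({"role": "user", "content": f"**{msg.model}:** {msg.content}"})
    return context

async def run_discussion_round(discussion: Discussion, models: list[str],
                               query_model) -> dict[str, str]:
    """Query each model SEQUENTIALLY so later models see earlier responses."""
    responses: dict[str, str] = {}
    for model in models:  # plain await-in-loop, deliberately not asyncio.gather
        context = build_context(discussion)  # rebuilt each time: grows mid-round
        reply = await query_model(model, context)
        responses[model] = reply
        discussion.messages.append(Message(model, reply))  # visible to the next model
    return responses
```

Rebuilding the context inside the loop is what makes the round sequential: GPT's context already contains Claude's reply from the same round.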
<task type="auto">
<name>Task 2: Implement /discuss command with round limit</name>
<files>src/moai/bot/handlers/discussion.py, src/moai/bot/handlers/__init__.py</files>
<action>Add to discussion.py:

- discuss_command(update, context) handler for "/discuss [rounds]"
  - Requires selected project with models
  - Requires active discussion (from /open) or starts new one with inline question
  - Parses optional rounds argument (default: 3 from project settings or hardcoded)
  - Stores round_limit and current_round in context.user_data["discussion_state"]
  - Runs first round via run_discussion_round
  - Displays round results with "**Round N:**" header
  - Shows "Round 1/N complete. Use /next or /stop"

Register the /discuss handler. Store discussion_id in user_data for /next and /stop to reference.</action>
<verify>After /open, run /discuss 3, verify round 1 executes with sequential responses</verify>
<done>/discuss starts sequential discussion, stores state for continuation, displays formatted output</done>
</task>

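The argument parsing and state bookkeeping for Task 2 can be sketched without the Telegram layer. `parse_rounds` and `init_discussion_state` are hypothetical helper names; in the real handler, `args` would come from python-telegram-bot's `context.args` and `user_data` from `context.user_data`:

```python
DEFAULT_ROUNDS = 3  # fallback when the project defines no rounds setting

def parse_rounds(args: list[str], default: int = DEFAULT_ROUNDS) -> int:
    """Parse the optional [rounds] argument of "/discuss [rounds]"."""
    if not args:
        return default
    try:
        rounds = int(args[0])
    except ValueError:
        return default  # non-numeric argument: fall back silently
    return rounds if rounds > 0 else default

def init_discussion_state(user_data: dict, discussion_id: int,
                          round_limit: int) -> dict:
    """Store continuation state that /next and /stop will read back."""
    user_data["discussion_state"] = {
        "discussion_id": discussion_id,
        "round_limit": round_limit,
        "current_round": 1,  # /discuss itself runs round 1
    }
    return user_data["discussion_state"]
```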
<task type="auto">
<name>Task 3: Implement /next and /stop commands</name>
<files>src/moai/bot/handlers/discussion.py, src/moai/bot/handlers/__init__.py</files>
<action>Add to discussion.py:

- next_command(update, context) handler for "/next"
  - Reads discussion_state from user_data
  - Error if no active discussion or round_limit reached
  - Increments round counter, runs run_discussion_round
  - If final round: auto-complete discussion, show "Discussion complete (N rounds)"
  - Otherwise: show "Round N/M complete. Use /next or /stop"

- stop_command(update, context) handler for "/stop"
  - Reads discussion_state from user_data
  - Completes discussion early via complete_discussion service
  - Clears discussion_state from user_data
  - Shows "Discussion stopped at round N. Use /consensus to summarize."

Register both handlers in __init__.py.</action>
<verify>Run full flow: /open → /discuss 2 → /next → verify round 2 runs → /stop or let it complete</verify>
<done>/next advances rounds with context, /stop ends early, both clear state appropriately</done>
</task>

</tasks>

<verification>
Before declaring plan complete:

- [ ] Full flow works: /open → /discuss 3 → /next → /next → auto-completes
- [ ] /stop works mid-discussion
- [ ] Each round shows sequential responses (Claude first, then GPT seeing Claude, etc.)
- [ ] Round counter displays correctly (Round 1/3, Round 2/3, etc.)
- [ ] Discussion marked COMPLETED when finished
- [ ] Error messages for: no discussion, round limit reached
</verification>

<success_criteria>
- /discuss starts sequential multi-round discussion
- /next advances with full context passed to models
- /stop ends discussion early
- Models see all prior responses in context
- M5 milestone (Discuss mode sequential) complete
</success_criteria>

<output>
After completion, create `.planning/phases/05-multi-model-discussions/05-03-SUMMARY.md`
</output>