docs(05): create phase plans for multi-model discussions
Phase 05: Multi-Model Discussions

- 4 plans created
- 11 total tasks defined
- Covers M4 (open/parallel), M5 (discuss/sequential), M8 (@mentions)
- Ready for execution

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Parent: 71eec42baf · Commit: 5afd8b6213
4 changed files with 455 additions and 0 deletions
.planning/phases/05-multi-model-discussions/05-01-PLAN.md (new file, 91 lines)
---
phase: 05-multi-model-discussions
plan: 01
type: execute
---

<objective>
Create discussion service layer with CRUD operations for Discussion, Round, and Message entities.

Purpose: Establish the data layer that all multi-model discussion commands depend on.
Output: Working discussion service with create/get/list operations for discussions, rounds, and messages.
</objective>

<execution_context>
~/.claude/get-shit-done/workflows/execute-phase.md
~/.claude/get-shit-done/templates/summary.md
</execution_context>

<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md

# Prior phase context:
@.planning/phases/04-single-model-qa/04-02-SUMMARY.md

# Key files:
@src/moai/core/models.py
@src/moai/core/services/project.py
@src/moai/core/database.py

**Tech stack available:** sqlalchemy, aiosqlite, python-telegram-bot
**Established patterns:** Service layer pattern (core/services/), async context manager for sessions, module-level singleton
**Constraining decisions:**
- 03-01: Service layer pattern for database operations
- 01-03: expire_on_commit=False for async session usability
</context>

<tasks>

<task type="auto">
<name>Task 1: Create discussion service with CRUD operations</name>
<files>src/moai/core/services/discussion.py, src/moai/core/services/__init__.py</files>
<action>Create discussion.py service following the project.py pattern. Include:
- create_discussion(project_id, question, discussion_type) - creates Discussion with DiscussionType enum
- get_discussion(discussion_id) - returns Discussion with eager-loaded rounds/messages
- get_active_discussion(project_id) - returns active discussion for project (status=ACTIVE), or None
- list_discussions(project_id) - returns all discussions for a project
- complete_discussion(discussion_id) - sets status to COMPLETED

Use selectinload for eager loading rounds→messages to avoid N+1 queries. Follow existing async context manager pattern from project.py.</action>
<verify>Import service in Python REPL, verify functions exist and type hints are correct</verify>
<done>discussion.py exists with 5 async functions, proper type hints, uses selectinload for relationships</done>
</task>
<task type="auto">
<name>Task 2: Add round and message operations to discussion service</name>
<files>src/moai/core/services/discussion.py</files>
<action>Add to discussion.py:
- create_round(discussion_id, round_number, round_type) - creates Round with RoundType enum
- get_current_round(discussion_id) - returns highest round_number Round for discussion
- create_message(round_id, model, content, is_direct=False) - creates Message
- get_round_messages(round_id) - returns messages for a round ordered by timestamp

All functions follow same async context manager pattern. Use proper enum imports from models.py.</action>
<verify>Import service, verify all 4 new functions exist with correct signatures</verify>
<done>discussion.py has 9 total functions (5 discussion + 4 round/message), all async with proper types</done>
</task>

</tasks>
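The "highest round_number" lookup in get_current_round can be sketched as a descending order-by with a limit. Shown here with SQLAlchemy Core on an in-memory SQLite table as an illustration only; the real service would run the same ordering against the Round model inside its async session:

```python
# Hypothetical sketch of the get_current_round query shape.
from sqlalchemy import (Column, Integer, MetaData, Table, create_engine,
                        insert, select)

metadata = MetaData()
rounds = Table(
    "rounds", metadata,
    Column("id", Integer, primary_key=True),
    Column("discussion_id", Integer),
    Column("round_number", Integer),
)

engine = create_engine("sqlite://")
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(insert(rounds),
                 [{"discussion_id": 1, "round_number": n} for n in (1, 2, 3)])
    # "current round" = the highest round_number: order descending, take one
    stmt = (select(rounds.c.round_number)
            .where(rounds.c.discussion_id == 1)
            .order_by(rounds.c.round_number.desc())
            .limit(1))
    current = conn.execute(stmt).scalar_one()

print(current)  # -> 3
```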
<verification>
Before declaring plan complete:
- [ ] `python -c "from moai.core.services.discussion import *"` succeeds
- [ ] All 9 functions have async def signatures
- [ ] Type hints include Discussion, Round, Message, DiscussionType, RoundType
- [ ] No import errors when running bot
</verification>

<success_criteria>
- Discussion service exists at src/moai/core/services/discussion.py
- 9 async functions for discussion/round/message CRUD
- Follows established service layer pattern
- Eager loading prevents N+1 queries
- No type or import errors
</success_criteria>

<output>
After completion, create `.planning/phases/05-multi-model-discussions/05-01-SUMMARY.md`
</output>
.planning/phases/05-multi-model-discussions/05-02-PLAN.md (new file, 116 lines)
---
phase: 05-multi-model-discussions
plan: 02
type: execute
---

<objective>
Implement /open command for parallel multi-model queries (M4 milestone).

Purpose: Allow users to get parallel responses from all project models on a question.
Output: Working /open command that queries all models simultaneously and displays responses.
</objective>

<execution_context>
~/.claude/get-shit-done/workflows/execute-phase.md
~/.claude/get-shit-done/templates/summary.md
</execution_context>

<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md

# Prior plan context:
@.planning/phases/05-multi-model-discussions/05-01-SUMMARY.md

# Key files:
@src/moai/core/ai_client.py
@src/moai/bot/handlers/discussion.py
@src/moai/bot/handlers/projects.py
@src/moai/core/services/discussion.py

**Tech stack available:** python-telegram-bot, openai (async), sqlalchemy
**Established patterns:** Typing indicator, command validation, service layer, AIClient.complete()
**Constraining decisions:**
- 04-02: Typing indicator shown while waiting for AI response
- 04-01: OpenAI SDK for router abstraction (async calls)
</context>

<tasks>

<task type="auto">
<name>Task 1: Create orchestrator module with parallel query function</name>
<files>src/moai/core/orchestrator.py</files>
<action>Create orchestrator.py with:
- SYSTEM_PROMPT constant for roundtable discussion (from SPEC.md)
- async query_models_parallel(models: list[str], question: str, project_name: str) -> dict[str, str]
- Uses asyncio.gather() to call AIClient.complete() for all models simultaneously
- Returns dict mapping model name → response
- Handles individual model failures gracefully (returns error message for that model)
- Builds system prompt with "Other participants: {models}" and "Topic: {project_name}"

Do NOT build full discussion context yet - that's for discuss mode in 05-03.</action>
<verify>Import orchestrator, verify query_models_parallel signature and SYSTEM_PROMPT exists</verify>
<done>orchestrator.py exists with SYSTEM_PROMPT and query_models_parallel function using asyncio.gather</done>
</task>
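The fan-out with per-model failure isolation can be sketched as below. This is a runnable illustration, not the real module: `fake_complete` stands in for `AIClient.complete()` (whose exact signature is an assumption here), and `SYSTEM_PROMPT` is a placeholder string.

```python
# Runnable sketch of the parallel fan-out with graceful per-model failure.
import asyncio

SYSTEM_PROMPT = "You are one participant in a roundtable discussion."  # stand-in

async def fake_complete(model: str, question: str) -> str:
    # Stand-in for AIClient.complete(); one model fails to show error handling.
    if model == "gemini":
        raise RuntimeError("provider timeout")
    return f"{model} answers: {question}"

async def query_models_parallel(models: list[str], question: str,
                                project_name: str) -> dict[str, str]:
    async def one(model: str) -> str:
        try:
            return await fake_complete(model, question)
        except Exception as exc:  # one failure must not sink the others
            return f"[error from {model}: {exc}]"
    # return_exceptions is unnecessary because `one` never raises
    responses = await asyncio.gather(*(one(m) for m in models))
    return dict(zip(models, responses))

result = asyncio.run(query_models_parallel(
    ["claude", "gpt", "gemini"], "What is Python?", "demo"))
print(result["gemini"])  # -> [error from gemini: provider timeout]
```

Wrapping each call in its own coroutine with a try/except keeps the gather simple and guarantees every model gets an entry in the result dict.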
<task type="auto">
<name>Task 2: Implement /open command handler with database persistence</name>
<files>src/moai/bot/handlers/discussion.py, src/moai/bot/handlers/__init__.py</files>
<action>Add to discussion.py:
- open_command(update, context) handler for "/open <question>"
- Requires selected project (error if none)
- Uses project's models list (error if empty)
- Creates Discussion(type=OPEN) and Round(type=PARALLEL, round_number=1) via discussion service
- Calls query_models_parallel() with project models
- Creates Message for each response
- Formats output: "**Model:**\n> response" for each model
- Shows typing indicator while waiting

Register /open handler in __init__.py with CommandHandler. Update HELP_TEXT in commands.py with /open usage.</action>
<verify>Run bot, use `/open What is Python?` with a selected project that has models configured</verify>
<done>/open queries all project models in parallel, persists to DB, displays formatted responses</done>
</task>

<task type="auto">
<name>Task 3: Update help text and status for multi-model support</name>
<files>src/moai/bot/handlers/commands.py</files>
<action>Update HELP_TEXT to add Discussion section:

```
**Discussion**
/open <question> - Ask all models (parallel)
/discuss [rounds] - Start discussion (default: 3)
/next - Next round manually
/stop - End discussion
@model <message> - Direct message to model
```

This documents commands for the full phase even though /discuss, /next, /stop, and @mentions are implemented in later plans.</action>
<verify>Run bot, /help shows Discussion section with all commands listed</verify>
<done>HELP_TEXT includes Discussion section with all multi-model commands</done>
</task>

</tasks>
<verification>
Before declaring plan complete:
- [ ] `python -c "from moai.core.orchestrator import query_models_parallel"` succeeds
- [ ] Bot responds to /open with parallel model responses
- [ ] Discussion/Round/Messages persisted to database after /open
- [ ] /help shows Discussion section
- [ ] Error handling for no project selected, no models configured
</verification>

<success_criteria>
- /open command works with parallel AI queries
- Responses persisted as Discussion → Round → Messages
- Typing indicator shown during queries
- Proper error messages for edge cases
- M4 milestone (Open mode parallel) complete
</success_criteria>

<output>
After completion, create `.planning/phases/05-multi-model-discussions/05-02-SUMMARY.md`
</output>
.planning/phases/05-multi-model-discussions/05-03-PLAN.md (new file, 127 lines)
---
phase: 05-multi-model-discussions
plan: 03
type: execute
---

<objective>
Implement /discuss mode with sequential rounds, context building, and /next, /stop commands (M5 milestone).

Purpose: Enable structured multi-round discussions where each model sees prior responses.
Output: Working /discuss, /next, /stop commands with full conversation context passed to each model.
</objective>

<execution_context>
~/.claude/get-shit-done/workflows/execute-phase.md
~/.claude/get-shit-done/templates/summary.md
</execution_context>

<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md

# Prior plan context:
@.planning/phases/05-multi-model-discussions/05-02-SUMMARY.md

# Key files:
@src/moai/core/orchestrator.py
@src/moai/core/services/discussion.py
@src/moai/bot/handlers/discussion.py
@SPEC.md (system prompts and discussion flow)

**Tech stack available:** python-telegram-bot, openai (async), sqlalchemy
**Established patterns:** query_models_parallel, Discussion/Round/Message persistence, typing indicator
**Constraining decisions:**
- 05-02: Orchestrator pattern established
- 04-02: Typing indicator for AI calls
</context>

<tasks>

<task type="auto">
<name>Task 1: Add context building and sequential round execution to orchestrator</name>
<files>src/moai/core/orchestrator.py</files>
<action>Add to orchestrator.py:
- build_context(discussion: Discussion) -> list[dict]
  - Converts all rounds/messages to OpenAI message format
  - Returns list of {"role": "assistant"/"user", "content": "**Model:** response"}
  - Models see their own responses as assistant, others' as user (simplified: all prior as user context)
  - Includes the original question as the first user message

- async run_discussion_round(discussion: Discussion, models: list[str], project_name: str) -> dict[str, str]
  - Builds context from all prior rounds
  - Calls each model SEQUENTIALLY (not parallel) so each sees previous responses in the same round
  - Returns dict mapping model → response
  - Creates Round(type=SEQUENTIAL) and Messages via discussion service

Sequential means: Claude responds, then GPT sees Claude's response AND responds, then Gemini sees both.
Use a plain async loop, not gather, to ensure sequential execution within the round.</action>
<verify>Import orchestrator, verify build_context and run_discussion_round exist</verify>
<done>orchestrator.py has build_context and run_discussion_round with sequential model calls</done>
</task>
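The sequential-round behavior described above can be sketched as follows. This is an illustration under assumptions: `fake_complete` stands in for `AIClient.complete()`, and the context format uses the simplified all-prior-as-user convention from the plan.

```python
# Runnable sketch: each model's prompt includes every response produced
# earlier, including earlier models in the SAME round.
import asyncio

async def fake_complete(model: str, messages: list[dict]) -> str:
    # Stand-in for AIClient.complete(); reports how much context it received.
    return f"{model} saw {len(messages)} prior messages"

def build_context(question: str, history: list[tuple[str, str]]) -> list[dict]:
    # Simplified per the plan: the question first, then every prior response
    # as user-role context labelled with its author.
    context = [{"role": "user", "content": question}]
    for model, response in history:
        context.append({"role": "user", "content": f"**{model}:** {response}"})
    return context

async def run_discussion_round(models: list[str], question: str,
                               history: list[tuple[str, str]]) -> dict[str, str]:
    responses = {}
    for model in models:  # plain loop, NOT asyncio.gather: order matters here
        reply = await fake_complete(model, build_context(question, history))
        history.append((model, reply))  # later models see this immediately
        responses[model] = reply
    return responses

history: list[tuple[str, str]] = []
round1 = asyncio.run(run_discussion_round(
    ["claude", "gpt", "gemini"], "What is Python?", history))
print(round1["gemini"])  # -> gemini saw 3 prior messages
```

Appending to `history` inside the loop is what makes GPT see Claude's answer within the same round; with `asyncio.gather` all three would see the identical pre-round context.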
<task type="auto">
<name>Task 2: Implement /discuss command with round limit</name>
<files>src/moai/bot/handlers/discussion.py, src/moai/bot/handlers/__init__.py</files>
<action>Add to discussion.py:
- discuss_command(update, context) handler for "/discuss [rounds]"
- Requires selected project with models
- Requires active discussion (from /open) or starts new one with inline question
- Parses optional rounds argument (default: 3 from project settings or hardcoded)
- Stores round_limit and current_round in context.user_data["discussion_state"]
- Runs first round via run_discussion_round
- Displays round results with "**Round N:**" header
- Shows "Round 1/N complete. Use /next or /stop"

Register /discuss handler. Store discussion_id in user_data for /next and /stop to reference.</action>
<verify>After /open, run /discuss 3, verify round 1 executes with sequential responses</verify>
<done>/discuss starts sequential discussion, stores state for continuation, displays formatted output</done>
</task>
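The round-tracking state in `user_data` can be sketched with plain dicts (python-telegram-bot's `context.user_data` behaves like a mutable mapping). Key names mirror the plan; the message strings are assumptions about the final wording:

```python
# Minimal sketch of discussion_state storage and the round-limit check.
user_data: dict = {}  # stands in for context.user_data

def start_discussion(discussion_id: int, round_limit: int = 3) -> str:
    user_data["discussion_state"] = {
        "discussion_id": discussion_id,
        "round_limit": round_limit,
        "current_round": 1,
    }
    return f"Round 1/{round_limit} complete. Use /next or /stop"

def next_round() -> str:
    state = user_data.get("discussion_state")
    if state is None:
        return "No active discussion. Start one with /discuss."
    if state["current_round"] >= state["round_limit"]:
        # defensive guard; normally the final round clears the state below
        return "Round limit reached. Use /stop or start a new discussion."
    state["current_round"] += 1
    if state["current_round"] == state["round_limit"]:
        user_data.pop("discussion_state")  # auto-complete on the final round
        return f"Discussion complete ({state['round_limit']} rounds)"
    return (f"Round {state['current_round']}/{state['round_limit']} complete. "
            f"Use /next or /stop")

first = start_discussion(42, round_limit=3)
second = next_round()   # -> Round 2/3 complete. Use /next or /stop
third = next_round()    # -> Discussion complete (3 rounds)
```

Clearing the state on the final round means a stray extra /next falls through to the "no active discussion" error rather than re-running a finished discussion.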
<task type="auto">
<name>Task 3: Implement /next and /stop commands</name>
<files>src/moai/bot/handlers/discussion.py, src/moai/bot/handlers/__init__.py</files>
<action>Add to discussion.py:
- next_command(update, context) handler for "/next"
  - Reads discussion_state from user_data
  - Error if no active discussion or round_limit reached
  - Increments round counter, runs run_discussion_round
  - If final round: auto-complete discussion, show "Discussion complete (N rounds)"
  - Otherwise: show "Round N/M complete. Use /next or /stop"

- stop_command(update, context) handler for "/stop"
  - Reads discussion_state from user_data
  - Completes discussion early via complete_discussion service
  - Clears discussion_state from user_data
  - Shows "Discussion stopped at round N. Use /consensus to summarize."

Register both handlers in __init__.py.</action>
<verify>Run full flow: /open → /discuss 2 → /next → verify round 2 runs → /stop or let it complete</verify>
<done>/next advances rounds with context, /stop ends early, both clear state appropriately</done>
</task>

</tasks>
<verification>
Before declaring plan complete:
- [ ] Full flow works: /open → /discuss 3 → /next → /next → auto-completes
- [ ] /stop works mid-discussion
- [ ] Each round shows sequential responses (Claude first, then GPT seeing Claude, etc.)
- [ ] Round counter displays correctly (Round 1/3, Round 2/3, etc.)
- [ ] Discussion marked COMPLETED when finished
- [ ] Error messages for: no discussion, round limit reached
</verification>

<success_criteria>
- /discuss starts sequential multi-round discussion
- /next advances with full context passed to models
- /stop ends discussion early
- Models see all prior responses in context
- M5 milestone (Discuss mode sequential) complete
</success_criteria>

<output>
After completion, create `.planning/phases/05-multi-model-discussions/05-03-SUMMARY.md`
</output>
.planning/phases/05-multi-model-discussions/05-04-PLAN.md (new file, 121 lines)
---
phase: 05-multi-model-discussions
plan: 04
type: execute
---

<objective>
Implement @mention direct messages to specific models (M8 milestone).

Purpose: Allow users to direct questions/comments to specific models during discussions.
Output: Working @claude, @gpt, @gemini message handlers that query specific models with context.
</objective>

<execution_context>
~/.claude/get-shit-done/workflows/execute-phase.md
~/.claude/get-shit-done/templates/summary.md
</execution_context>

<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md

# Prior plan context:
@.planning/phases/05-multi-model-discussions/05-03-SUMMARY.md

# Key files:
@src/moai/core/orchestrator.py
@src/moai/core/ai_client.py
@src/moai/core/services/discussion.py
@src/moai/bot/handlers/discussion.py

**Tech stack available:** python-telegram-bot (MessageHandler with filters), openai (async)
**Established patterns:** build_context, AIClient.complete, typing indicator, Message(is_direct=True)
**Constraining decisions:**
- 05-03: Context building for discussions established
- 04-02: Typing indicator pattern
</context>

<tasks>

<task type="auto">
<name>Task 1: Add direct message function to orchestrator</name>
<files>src/moai/core/orchestrator.py</files>
<action>Add to orchestrator.py:
- async query_model_direct(model: str, message: str, discussion: Discussion | None, project_name: str) -> str
  - Calls single model via AIClient.complete()
  - If discussion provided, includes full context via build_context()
  - System prompt includes "This is a direct message to you specifically"
  - Returns model response
  - Handles errors gracefully (returns error message string)

This is similar to /ask but with optional discussion context.</action>
<verify>Import orchestrator, verify query_model_direct signature exists</verify>
<done>orchestrator.py has query_model_direct function for single model with optional context</done>
</task>
<task type="auto">
<name>Task 2: Implement @mention message handler</name>
<files>src/moai/bot/handlers/discussion.py, src/moai/bot/handlers/__init__.py</files>
<action>Add to discussion.py:
- mention_handler(update, context) for messages starting with @model
- Use regex filter: MessageHandler(filters.Regex(r'^@(claude|gpt|gemini)\s'), mention_handler)
- Parse model name from first word (strip @)
- Rest of message is the content
- Get active discussion if exists (for context), otherwise just query with project context
- Call query_model_direct with discussion context
- If discussion active: create Message(is_direct=True) to persist
- Display: "**@Model (direct):**\n> response"
- Show typing indicator while waiting

Register MessageHandler in __init__.py AFTER CommandHandlers (order matters for python-telegram-bot).</action>
<verify>With active discussion, send "@claude What do you think?", verify response with context</verify>
<done>@mention messages route to specific model with full discussion context, marked is_direct=True</done>
</task>
<task type="auto">
<name>Task 3: Update status to show active discussion info</name>
<files>src/moai/bot/handlers/status.py</files>
<action>Update status_command to show:
- If discussion_state exists in user_data:
  - "Active discussion: Round N/M"
  - "Discussion ID: {short_id}"
- Show count of messages in current discussion
- Use get_active_discussion service if user_data cleared but DB has active discussion

This helps users know their current discussion state.</action>
<verify>/status shows active discussion info during a discussion session</verify>
<done>/status displays current discussion state (round progress, message count)</done>
</task>

</tasks>
<verification>
Before declaring plan complete:
- [ ] @claude, @gpt, @gemini messages work
- [ ] Direct messages include discussion context when active
- [ ] Messages marked is_direct=True in database
- [ ] /status shows active discussion info
- [ ] Works without active discussion (just project context)
- [ ] M8 milestone (@mention direct messages) complete
</verification>

<success_criteria>
- @mention syntax routes to specific models
- Full discussion context passed when available
- Direct messages persisted with is_direct flag
- /status shows discussion state
- M8 milestone complete
- Phase 5 complete (M4, M5, M8 all done)
</success_criteria>

<output>
After completion, create `.planning/phases/05-multi-model-discussions/05-04-SUMMARY.md`

Note: This is the final plan for Phase 5. Success criteria for Phase 5:
- M4: Open mode (parallel) ✓ (05-02)
- M5: Discuss mode (sequential rounds) ✓ (05-03)
- M8: @mention direct messages ✓ (05-04)
</output>