---
phase: 05-multi-model-discussions
plan: 02
type: execute
---

<objective>
Implement /open command for parallel multi-model queries (M4 milestone).

Purpose: Allow users to get parallel responses from all project models on a question.
Output: Working /open command that queries all models simultaneously and displays responses.
</objective>

<execution_context>
~/.claude/get-shit-done/workflows/execute-phase.md
~/.claude/get-shit-done/templates/summary.md
</execution_context>

<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md

# Prior plan context:
@.planning/phases/05-multi-model-discussions/05-01-SUMMARY.md

# Key files:
@src/moai/core/ai_client.py
@src/moai/bot/handlers/discussion.py
@src/moai/bot/handlers/projects.py
@src/moai/core/services/discussion.py

**Tech stack available:** python-telegram-bot, openai (async), sqlalchemy
**Established patterns:** Typing indicator, command validation, service layer, AIClient.complete()
**Constraining decisions:**
- 04-02: Typing indicator shown while waiting for AI response
- 04-01: OpenAI SDK for router abstraction (async calls)
</context>

<tasks>

<task type="auto">
<name>Task 1: Create orchestrator module with parallel query function</name>
<files>src/moai/core/orchestrator.py</files>
<action>Create orchestrator.py with:
- SYSTEM_PROMPT constant for roundtable discussion (from SPEC.md)
- async query_models_parallel(models: list[str], question: str, project_name: str) -> dict[str, str]
- Uses asyncio.gather() to call AIClient.complete() for all models simultaneously
- Returns dict mapping model name → response
- Handles individual model failures gracefully (returns error message for that model)
- Builds system prompt with "Other participants: {models}" and "Topic: {project_name}"

Do NOT build full discussion context yet - that's for discuss mode in 05-03.</action>
<verify>Import orchestrator, verify query_models_parallel signature and SYSTEM_PROMPT exists</verify>
<done>orchestrator.py exists with SYSTEM_PROMPT and query_models_parallel function using asyncio.gather</done>
</task>
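
The fan-out described in Task 1 can be sketched as below. This is a minimal illustration, not the real module: the `AIClient` stub and its `complete()` signature are placeholders for the actual class in src/moai/core/ai_client.py, and the prompt wording is assumed rather than taken from SPEC.md.

```python
import asyncio

# Hypothetical stand-in for the project's AIClient; the real class and its
# complete() signature live in src/moai/core/ai_client.py.
class AIClient:
    async def complete(self, model: str, system: str, user: str) -> str:
        return f"{model} says: {user[::-1]}"  # placeholder response

# Assumed prompt shape; the real SYSTEM_PROMPT comes from SPEC.md.
SYSTEM_PROMPT = (
    "You are {model} in a roundtable discussion.\n"
    "Other participants: {others}\n"
    "Topic: {project_name}"
)

async def query_models_parallel(
    models: list[str], question: str, project_name: str
) -> dict[str, str]:
    client = AIClient()

    async def ask(model: str) -> str:
        system = SYSTEM_PROMPT.format(
            model=model,
            others=", ".join(m for m in models if m != model),
            project_name=project_name,
        )
        try:
            return await client.complete(model, system, question)
        except Exception as exc:  # one model failing must not sink the rest
            return f"[error: {exc}]"

    # gather() preserves input order, so zip() pairs each model with its reply
    responses = await asyncio.gather(*(ask(m) for m in models))
    return dict(zip(models, responses))
```

Catching per-task exceptions inside `ask()` keeps one failed provider from cancelling the whole `gather()`, which is the "handles individual model failures gracefully" requirement.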

<task type="auto">
<name>Task 2: Implement /open command handler with database persistence</name>
<files>src/moai/bot/handlers/discussion.py, src/moai/bot/handlers/__init__.py</files>
<action>Add to discussion.py:
- open_command(update, context) handler for "/open <question>"
- Requires selected project (error if none)
- Uses project's models list (error if empty)
- Creates Discussion(type=OPEN) and Round(type=PARALLEL, round_number=1) via discussion service
- Calls query_models_parallel() with project models
- Creates Message for each response
- Formats output: "**Model:**\n> response" for each model
- Shows typing indicator while waiting

Register /open handler in __init__.py with CommandHandler. Update HELP_TEXT in commands.py with /open usage.</action>
<verify>Run bot, use `/open What is Python?` with a selected project that has models configured</verify>
<done>/open queries all project models in parallel, persists to DB, displays formatted responses</done>
</task>
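
The `"**Model:**\n> response"` formatting from Task 2 could be factored into a small pure helper, which keeps it unit-testable apart from the Telegram handler. `format_responses` is a hypothetical name, not an existing function in the codebase:

```python
def format_responses(responses: dict[str, str]) -> str:
    """Render each model's reply as a bolded name plus Markdown blockquote."""
    sections = []
    for model, text in responses.items():
        # Quote every line so multi-line answers stay inside the blockquote.
        quoted = "\n".join(f"> {line}" for line in text.splitlines() or [""])
        sections.append(f"**{model}:**\n{quoted}")
    return "\n\n".join(sections)
```

The per-line quoting matters because Telegram (and most Markdown renderers) only treats lines starting with `>` as quoted, so a bare `> {text}` would drop the quote style after the first newline.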

<task type="auto">
<name>Task 3: Update help text and status for multi-model support</name>
<files>src/moai/bot/handlers/commands.py</files>
<action>Update HELP_TEXT to add Discussion section:

```
**Discussion**
/open <question> - Ask all models (parallel)
/discuss [rounds] - Start discussion (default: 3)
/next - Next round manually
/stop - End discussion
@model <message> - Direct message to model
```

This documents commands for the full phase even though /discuss, /next, /stop, and @mentions are implemented in later plans.</action>
<verify>Run bot, /help shows Discussion section with all commands listed</verify>
<done>HELP_TEXT includes Discussion section with all multi-model commands</done>
</task>

</tasks>
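
Task 2's Discussion → Round → Message hierarchy can be pictured with plain dataclasses. This is only a structural sketch inferred from the action list — the real classes are SQLAlchemy models managed by the discussion service, with whatever columns and enums that layer defines:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    model: str
    content: str

@dataclass
class Round:
    type: str            # "PARALLEL" for /open
    round_number: int
    messages: list[Message] = field(default_factory=list)

@dataclass
class Discussion:
    type: str            # "OPEN" for /open
    rounds: list[Round] = field(default_factory=list)

# /open persists one OPEN discussion containing a single PARALLEL round,
# with one Message per model response.
discussion = Discussion(type="OPEN")
round_one = Round(type="PARALLEL", round_number=1)
round_one.messages.append(Message(model="gpt-4o", content="..."))
discussion.rounds.append(round_one)
```

Keeping the round as its own row (rather than attaching messages directly to the discussion) is what lets the sequential /discuss mode in 05-03 reuse the same schema with round_number > 1.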

<verification>
Before declaring plan complete:
- [ ] `python -c "from moai.core.orchestrator import query_models_parallel"` succeeds
- [ ] Bot responds to /open with parallel model responses
- [ ] Discussion/Round/Messages persisted to database after /open
- [ ] /help shows Discussion section
- [ ] Error handling for no project selected, no models configured
</verification>

<success_criteria>
- /open command works with parallel AI queries
- Responses persisted as Discussion → Round → Messages
- Typing indicator shown during queries
- Proper error messages for edge cases
- M4 milestone (Open mode parallel) complete
</success_criteria>

<output>
After completion, create `.planning/phases/05-multi-model-discussions/05-02-SUMMARY.md`
</output>