docs(04): create phase plans for single model Q&A
Phase 04: Single Model Q&A

- 2 plans created
- 5 total tasks defined
- Ready for execution

Plans:
- 04-01: AI client abstraction (openai dep, config, AIClient class)
- 04-02: /ask handler and bot integration (M3 milestone)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
parent 6ec3d069d8, commit 4ea13efe8f
2 changed files with 330 additions and 0 deletions
141 .planning/phases/04-single-model-qa/04-01-PLAN.md (normal file)
@@ -0,0 +1,141 @@
---
phase: 04-single-model-qa
plan: 01
type: execute
---

<objective>
Create an AI client abstraction layer supporting Requesty and OpenRouter as model routers.

Purpose: Establish the foundation for all AI model interactions - single queries, multi-model discussions, and consensus generation all flow through this client.
Output: A working ai_client.py that can send prompts to any model via Requesty or OpenRouter.
</objective>

<execution_context>
~/.claude/get-shit-done/workflows/execute-phase.md
~/.claude/get-shit-done/templates/summary.md
</execution_context>

<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md
@.planning/phases/03-project-crud/03-03-SUMMARY.md

# Key files:
@src/moai/bot/config.py
@src/moai/core/models.py

# From discovery (no DISCOVERY.md needed - Level 1):
# Both Requesty and OpenRouter are OpenAI SDK compatible:
# - Requesty: base_url="https://router.requesty.ai/v1", model format "provider/model-name"
# - OpenRouter: base_url="https://openrouter.ai/api/v1", needs HTTP-Referer header
# Can use the `openai` package with a different base_url/headers

**Tech available:**
- python-telegram-bot, sqlalchemy, httpx, aiosqlite
- pytest, pytest-asyncio

**Established patterns:**
- Service layer in core/services/
- Config loading from environment in bot/config.py
- Async functions throughout

**Constraining decisions:**
- AI client as abstraction layer (PROJECT.md)
- httpx for API calls (SPEC.md)
</context>
<tasks>

<task type="auto">
<name>Task 1: Add openai dependency and extend config</name>
<files>pyproject.toml, src/moai/bot/config.py</files>
<action>
1. Add `openai` to dependencies in pyproject.toml (unpinned per project standards)
2. Extend the Config class in bot/config.py with:
   - AI_ROUTER: str (env var, default "requesty") - which router to use
   - AI_API_KEY: str (env var) - API key for the router
   - AI_REFERER: str | None (env var, optional) - for OpenRouter's HTTP-Referer requirement

Note: Use the existing pattern of loading from env with os.getenv(). No need for pydantic or complex validation - keep it simple like the existing Config class.
</action>
<verify>python -c "from moai.bot.config import Config; c = Config(); print(c.AI_ROUTER)"</verify>
<done>Config has AI_ROUTER, AI_API_KEY, AI_REFERER attributes; openai in dependencies</done>
</task>
<task type="auto">
<name>Task 2: Create AI client abstraction</name>
<files>src/moai/core/ai_client.py</files>
<action>
Create ai_client.py with:

1. An AIClient class that wraps the OpenAI AsyncOpenAI client:
```python
class AIClient:
    def __init__(self, router: str, api_key: str, referer: str | None = None):
        # Set base_url based on router ("requesty" or "openrouter")
        # Store referer for OpenRouter
        # Create AsyncOpenAI client with base_url and api_key
```

2. An async method for a single completion:
```python
async def complete(self, model: str, messages: list[dict], system_prompt: str | None = None) -> str:
    # Build messages list with optional system prompt
    # Call client.chat.completions.create()
    # Add extra_headers with HTTP-Referer if OpenRouter and referer set
    # Return response.choices[0].message.content
```

3. Model name normalization:
   - For Requesty: model names need a provider prefix (e.g., "claude" -> "anthropic/claude-sonnet-4-20250514")
   - For OpenRouter: similar format
   - Create a MODEL_MAP dict mapping our short names -> full model identifiers
   - MODEL_MAP = {"claude": "anthropic/claude-sonnet-4-20250514", "gpt": "openai/gpt-4o", "gemini": "google/gemini-2.0-flash"}

4. Module-level convenience functions:
```python
_client: AIClient | None = None


def init_ai_client(config: Config) -> AIClient:
    global _client
    _client = AIClient(config.AI_ROUTER, config.AI_API_KEY, config.AI_REFERER)
    return _client


def get_ai_client() -> AIClient:
    if _client is None:
        raise RuntimeError("AI client not initialized")
    return _client
```

Keep it minimal - no retry logic, no streaming (yet), no complex error handling. This is the foundation; complexity comes later as needed.
</action>
<verify>python -c "from moai.core.ai_client import AIClient, MODEL_MAP; print(MODEL_MAP)"</verify>
<done>AIClient class exists with a complete() method; MODEL_MAP has claude/gpt/gemini mappings</done>
</task>
</tasks>

<verification>
Before declaring the plan complete:
- [ ] `uv sync` installs the openai package
- [ ] Config loads AI settings from the environment
- [ ] AIClient can be instantiated with router/key
- [ ] MODEL_MAP contains claude, gpt, gemini mappings
- [ ] `ruff check src` passes
</verification>

<success_criteria>
- openai package in dependencies
- Config extended with AI_ROUTER, AI_API_KEY, AI_REFERER
- AIClient class with complete() method
- MODEL_MAP with short name -> full model mappings
- Module-level init_ai_client/get_ai_client functions
- All code follows project conventions (type hints, docstrings)
</success_criteria>

<output>
After completion, create `.planning/phases/04-single-model-qa/04-01-SUMMARY.md`
</output>
189 .planning/phases/04-single-model-qa/04-02-PLAN.md (normal file)
@@ -0,0 +1,189 @@
---
phase: 04-single-model-qa
plan: 02
type: execute
---

<objective>
Implement the /ask command for single model Q&A and integrate the AI client into the bot lifecycle.

Purpose: Complete the M3 milestone - users can ask questions to individual AI models through Telegram.
Output: A working /ask command that sends questions to AI models and returns responses.
</objective>

<execution_context>
~/.claude/get-shit-done/workflows/execute-phase.md
~/.claude/get-shit-done/templates/summary.md
</execution_context>

<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md
@.planning/phases/04-single-model-qa/04-01-SUMMARY.md (will exist after 04-01)

# Key files:
@src/moai/bot/main.py
@src/moai/bot/config.py
@src/moai/bot/handlers/__init__.py
@src/moai/bot/handlers/projects.py
@src/moai/core/ai_client.py (will exist after 04-01)

**Tech available:**
- python-telegram-bot, sqlalchemy, httpx, aiosqlite, openai (added in 04-01)
- AI client abstraction (04-01)

**Established patterns:**
- Handler registration in handlers/__init__.py
- get_selected_project() helper in projects.py
- Module-level init pattern (database.py, ai_client.py)
- Async command handlers

**Constraining decisions:**
- Thin handlers delegating to core (CLAUDE.md)
- Service layer for business logic
</context>

<tasks>

<task type="auto">
<name>Task 1: Integrate AI client into bot lifecycle</name>
<files>src/moai/bot/main.py</files>
<action>
Modify main.py to initialize the AI client during bot startup:

1. Import init_ai_client from moai.core.ai_client
2. In the post_init callback (or create one if needed), after database init:
   - Call init_ai_client(config)
   - Log "AI client initialized with {config.AI_ROUTER}"

Follow the existing pattern - the database is initialized in post_init; the AI client goes right after.
Keep error handling minimal - if AI_API_KEY is missing, let it fail at first use rather than at startup (the user may just want to test bot commands first).
</action>
<verify>python -c "import moai.bot.main; print('main imports ok')"</verify>
<done>main.py imports and initializes AI client alongside database in post_init</done>
</task>
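The startup ordering described in Task 1 can be sketched as follows. Everything here is a stand-in - `init_database`, the Config shape, and the `post_init` signature are hypothetical stubs; the real callback receives a python-telegram-bot Application. Only the sequence (database first, AI client right after, then a log line) reflects the task.

```python
import asyncio
import logging


class Config:
    AI_ROUTER = "requesty"  # stand-in for the real bot/config.py Config


config = Config()
initialized: list[str] = []  # records startup order for illustration


async def init_database(config: Config) -> None:
    initialized.append("database")  # stub for the existing database init


def init_ai_client(config: Config) -> None:
    initialized.append("ai_client")  # stub for the 04-01 module-level init


async def post_init(application: object) -> None:
    # Existing pattern: database first, then the AI client right after.
    await init_database(config)
    init_ai_client(config)
    logging.info("AI client initialized with %s", config.AI_ROUTER)


asyncio.run(post_init(None))
```

Note that no API key is validated here, matching the task's decision to fail at first use rather than at startup.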
<task type="auto">
<name>Task 2: Create /ask handler for single model queries</name>
<files>src/moai/bot/handlers/discussion.py, src/moai/bot/handlers/__init__.py</files>
<action>
Create discussion.py with the /ask handler:

1. Create src/moai/bot/handlers/discussion.py:
```python
"""Discussion handlers for MoAI bot."""
from telegram import Update
from telegram.ext import ContextTypes

from moai.bot.handlers.projects import get_selected_project
from moai.core.ai_client import get_ai_client, MODEL_MAP


async def ask_command(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    """Handle /ask <model> <question> command.

    Examples:
        /ask claude What is Python?
        /ask gpt Explain async/await
    """
    args = context.args or []

    if len(args) < 2:
        available = ", ".join(MODEL_MAP.keys())
        await update.message.reply_text(
            f"Usage: /ask <model> <question>\n"
            f"Available models: {available}\n\n"
            f"Example: /ask claude What is Python?"
        )
        return

    model_name = args[0].lower()
    question = " ".join(args[1:])

    # Validate model
    if model_name not in MODEL_MAP:
        available = ", ".join(MODEL_MAP.keys())
        await update.message.reply_text(
            f"Unknown model: {model_name}\n"
            f"Available: {available}"
        )
        return

    # Get project context if available (optional for /ask)
    project = await get_selected_project(context)
    project_context = f"Project: {project.name}\n" if project else ""

    # Send "typing" indicator while waiting for the AI
    await update.message.chat.send_action("typing")

    try:
        client = get_ai_client()
        response = await client.complete(
            model=model_name,
            messages=[{"role": "user", "content": question}],
            system_prompt=f"{project_context}You are a helpful AI assistant.",
        )

        # Format the response with the model name
        await update.message.reply_text(
            f"*{model_name.title()}:*\n\n{response}",
            parse_mode="Markdown",
        )
    except Exception as e:
        await update.message.reply_text(f"Error: {e}")
```

2. Update handlers/__init__.py:
   - Import ask_command from discussion
   - Add CommandHandler("ask", ask_command) to register_handlers
</action>
<verify>python -c "from moai.bot.handlers.discussion import ask_command; print('handler ok')"</verify>
<done>/ask handler registered, validates model name, sends typing indicator, calls AI client</done>
</task>

<task type="auto">
<name>Task 3: Update help text and status</name>
<files>src/moai/bot/handlers/commands.py, src/moai/bot/handlers/status.py</files>
<action>
1. Update HELP_TEXT in commands.py to include:
```
*Questions*
/ask <model> <question> - Ask a single model
```
Add it after the Project Management section.

2. Update status.py to show AI client status:
   - Import get_ai_client (wrapped in try/except)
   - In status_command, add a line showing the configured AI router
   - Example: "AI Router: requesty ✓" or "AI Router: not configured"
</action>
<verify>python -c "from moai.bot.handlers.commands import HELP_TEXT; print('/ask' in HELP_TEXT)"</verify>
<done>Help shows /ask command, status shows AI router status</done>
</task>

</tasks>

<verification>
Before declaring the plan complete:
- [ ] `ruff check src` passes
- [ ] Bot starts without errors (with a valid AI_API_KEY in env)
- [ ] /help shows the /ask command
- [ ] /status shows AI router status
- [ ] /ask without args shows usage
- [ ] /ask with an invalid model shows available models
</verification>

<success_criteria>
- AI client initialized in bot lifecycle
- /ask command works with model validation
- Help text updated with /ask
- Status shows AI configuration
- M3 milestone: single model Q&A working
</success_criteria>

<output>
After completion, create `.planning/phases/04-single-model-qa/04-02-SUMMARY.md`

Note: This completes Phase 4. The summary should note the M3 milestone as complete.
</output>