diff --git a/.planning/phases/04-single-model-qa/04-01-PLAN.md b/.planning/phases/04-single-model-qa/04-01-PLAN.md
new file mode 100644
index 0000000..1425a46
--- /dev/null
+++ b/.planning/phases/04-single-model-qa/04-01-PLAN.md
@@ -0,0 +1,141 @@
+---
+phase: 04-single-model-qa
+plan: 01
+type: execute
+---
+
+
+Create AI client abstraction layer supporting Requesty and OpenRouter as model routers.
+
+Purpose: Establish the foundation for all AI model interactions - single queries, multi-model discussions, and consensus generation all flow through this client.
+Output: Working ai_client.py that can send prompts to any model via Requesty or OpenRouter.
+
+
+
+~/.claude/get-shit-done/workflows/execute-phase.md
+~/.claude/get-shit-done/templates/summary.md
+
+
+
+@.planning/PROJECT.md
+@.planning/ROADMAP.md
+@.planning/STATE.md
+@.planning/phases/03-project-crud/03-03-SUMMARY.md
+
+# Key files:
+@src/moai/bot/config.py
+@src/moai/core/models.py
+
+# From discovery (no DISCOVERY.md needed - Level 1):
+# Both Requesty and OpenRouter are OpenAI SDK compatible:
+# - Requesty: base_url="https://router.requesty.ai/v1", model format "provider/model-name"
# - OpenRouter: base_url="https://openrouter.ai/api/v1", HTTP-Referer header recommended for app attribution
+# Can use `openai` package with different base_url/headers
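The router switch described in the discovery notes can be sketched as a small pure helper (the name `router_settings` is illustrative, not from the codebase; the URLs are the ones listed above):

```python
from __future__ import annotations


def router_settings(router: str, referer: str | None = None) -> tuple[str, dict[str, str]]:
    """Return (base_url, extra_headers) for a supported router."""
    if router == "requesty":
        return "https://router.requesty.ai/v1", {}
    if router == "openrouter":
        # OpenRouter reads HTTP-Referer for app attribution
        headers = {"HTTP-Referer": referer} if referer else {}
        return "https://openrouter.ai/api/v1", headers
    raise ValueError(f"Unknown router: {router}")
```

The tuple maps directly onto `AsyncOpenAI(base_url=...)` at construction time and per-request `extra_headers`.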
+
+**Tech available:**
+- python-telegram-bot, sqlalchemy, httpx, aiosqlite
+- pytest, pytest-asyncio
+
+**Established patterns:**
+- Service layer in core/services/
+- Config loading from environment in bot/config.py
+- Async functions throughout
+
+**Constraining decisions:**
+- AI client as abstraction layer (PROJECT.md)
+- httpx for API calls (SPEC.md)
+
+
+
+
+
+ Task 1: Add openai dependency and extend config
+ pyproject.toml, src/moai/bot/config.py
+
+1. Add `openai` to dependencies in pyproject.toml (unpinned per project standards)
+2. Extend Config class in bot/config.py with:
+ - AI_ROUTER: str (env var, default "requesty") - which router to use
+ - AI_API_KEY: str (env var) - API key for the router
  - AI_REFERER: str | None (env var, optional) - HTTP-Referer header for OpenRouter app attribution
+
+Note: Use existing pattern of loading from env with os.getenv(). No need for pydantic or complex validation - keep it simple like existing Config class.
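The env-loading pattern above might look like this (a sketch, assuming the existing Config class reads values in `__init__`; any default other than `"requesty"` is an assumption):

```python
from __future__ import annotations

import os


class Config:
    """Illustrative subset showing only the new AI attributes."""

    def __init__(self) -> None:
        self.AI_ROUTER: str = os.getenv("AI_ROUTER", "requesty")
        self.AI_API_KEY: str = os.getenv("AI_API_KEY", "")
        self.AI_REFERER: str | None = os.getenv("AI_REFERER")
```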
+
+ python -c "from moai.bot.config import Config; c = Config(); print(c.AI_ROUTER)"
+ Config has AI_ROUTER, AI_API_KEY, AI_REFERER attributes; openai in dependencies
+
+
+
+ Task 2: Create AI client abstraction
+ src/moai/core/ai_client.py
+
+Create ai_client.py with:
+
+1. AIClient class that wraps OpenAI AsyncOpenAI client:
+ ```python
+ class AIClient:
+ def __init__(self, router: str, api_key: str, referer: str | None = None):
+ # Set base_url based on router ("requesty" or "openrouter")
+ # Store referer for OpenRouter
+ # Create AsyncOpenAI client with base_url and api_key
+ ```
+
+2. Async method for single completion:
+ ```python
+ async def complete(self, model: str, messages: list[dict], system_prompt: str | None = None) -> str:
+ # Build messages list with optional system prompt
+ # Call client.chat.completions.create()
+ # Add extra_headers with HTTP-Referer if OpenRouter and referer set
+ # Return response.choices[0].message.content
+ ```
+
+3. Model name normalization:
+ - For Requesty: model names need provider prefix (e.g., "claude" -> "anthropic/claude-sonnet-4-20250514")
+ - For OpenRouter: similar format
+ - Create MODEL_MAP dict with our short names -> full model identifiers
+ - MODEL_MAP = {"claude": "anthropic/claude-sonnet-4-20250514", "gpt": "openai/gpt-4o", "gemini": "google/gemini-2.0-flash"}
+
+4. Module-level convenience function:
+ ```python
+ _client: AIClient | None = None
+
+ def init_ai_client(config: Config) -> AIClient:
+ global _client
+ _client = AIClient(config.AI_ROUTER, config.AI_API_KEY, config.AI_REFERER)
+ return _client
+
+ def get_ai_client() -> AIClient:
+ if _client is None:
+ raise RuntimeError("AI client not initialized")
+ return _client
+ ```
+
+Keep it minimal - no retry logic, no streaming (yet), no complex error handling. This is the foundation; complexity comes later as needed.
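The normalization in point 3 and the header handling in point 2 can be sketched as two pure helpers (the names `resolve_model` and `extra_headers` are illustrative; the MODEL_MAP values are copied from above):

```python
from __future__ import annotations

MODEL_MAP = {
    "claude": "anthropic/claude-sonnet-4-20250514",
    "gpt": "openai/gpt-4o",
    "gemini": "google/gemini-2.0-flash",
}


def resolve_model(name: str) -> str:
    """Map a short name to its full router identifier.

    Unknown names pass through unchanged, so callers may also
    supply full identifiers directly.
    """
    return MODEL_MAP.get(name.lower(), name)


def extra_headers(router: str, referer: str | None) -> dict[str, str]:
    """Headers to attach per request; only OpenRouter uses any."""
    if router == "openrouter" and referer:
        return {"HTTP-Referer": referer}
    return {}
```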
+
+ python -c "from moai.core.ai_client import AIClient, MODEL_MAP; print(MODEL_MAP)"
+ AIClient class exists with complete() method, MODEL_MAP has claude/gpt/gemini mappings
+
+
+
+
+
+Before declaring plan complete:
+- [ ] `uv sync` installs openai package
+- [ ] Config loads AI settings from environment
+- [ ] AIClient can be instantiated with router/key
+- [ ] MODEL_MAP contains claude, gpt, gemini mappings
+- [ ] `ruff check src` passes
+
+
+
+
+- openai package in dependencies
+- Config extended with AI_ROUTER, AI_API_KEY, AI_REFERER
+- AIClient class with complete() method
+- MODEL_MAP with short name -> full model mappings
+- Module-level init_ai_client/get_ai_client functions
+- All code follows project conventions (type hints, docstrings)
+
+
+
diff --git a/.planning/phases/04-single-model-qa/04-02-PLAN.md b/.planning/phases/04-single-model-qa/04-02-PLAN.md
new file mode 100644
index 0000000..5ebd07f
--- /dev/null
+++ b/.planning/phases/04-single-model-qa/04-02-PLAN.md
@@ -0,0 +1,189 @@
+---
+phase: 04-single-model-qa
+plan: 02
+type: execute
+---
+
+
+Implement /ask command for single model Q&A and integrate AI client into bot lifecycle.
+
+Purpose: Complete M3 milestone - users can ask questions to individual AI models through Telegram.
+Output: Working /ask command that sends questions to AI models and returns responses.
+
+
+
+~/.claude/get-shit-done/workflows/execute-phase.md
+~/.claude/get-shit-done/templates/summary.md
+
+
+
+@.planning/PROJECT.md
+@.planning/ROADMAP.md
+@.planning/STATE.md
+@.planning/phases/04-single-model-qa/04-01-SUMMARY.md (will exist after 04-01)
+
+# Key files:
+@src/moai/bot/main.py
+@src/moai/bot/config.py
+@src/moai/bot/handlers/__init__.py
+@src/moai/bot/handlers/projects.py
+@src/moai/core/ai_client.py (will exist after 04-01)
+
+**Tech available:**
+- python-telegram-bot, sqlalchemy, httpx, aiosqlite, openai (added in 04-01)
+- AI client abstraction (04-01)
+
+**Established patterns:**
+- Handler registration in handlers/__init__.py
+- get_selected_project() helper in projects.py
+- Module-level init pattern (database.py, ai_client.py)
+- Async command handlers
+
+**Constraining decisions:**
+- Thin handlers delegating to core (CLAUDE.md)
+- Service layer for business logic
+
+
+
+
+
+ Task 1: Integrate AI client into bot lifecycle
+ src/moai/bot/main.py
+
+Modify main.py to initialize AI client during bot startup:
+
+1. Import init_ai_client from moai.core.ai_client
+2. In post_init callback (or create one if needed), after database init:
+ - Call init_ai_client(config)
+ - Log "AI client initialized with {config.AI_ROUTER}"
+
+Follow existing pattern - database is initialized in post_init, AI client goes right after.
+Keep error handling minimal - if AI_API_KEY is missing, let it fail at first use rather than at startup (user may just want to test bot commands first).
+
+ python -c "import moai.bot.main; print('main imports ok')"
+ main.py imports and initializes AI client alongside database in post_init
+
+
+
+ Task 2: Create /ask handler for single model queries
+ src/moai/bot/handlers/discussion.py, src/moai/bot/handlers/__init__.py
+
+Create discussion.py with /ask handler:
+
+1. Create src/moai/bot/handlers/discussion.py:
+ ```python
+ """Discussion handlers for MoAI bot."""
+ from telegram import Update
+ from telegram.ext import ContextTypes
+
+ from moai.bot.handlers.projects import get_selected_project
+ from moai.core.ai_client import get_ai_client, MODEL_MAP
+
+ async def ask_command(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
+ """Handle /ask command.
+
+ Examples:
+ /ask claude What is Python?
+ /ask gpt Explain async/await
+ """
+ args = context.args or []
+
    if len(args) < 2:
        available = ", ".join(MODEL_MAP.keys())
        await update.message.reply_text(
            f"Usage: /ask <model> <question>\n"
            f"Available models: {available}\n\n"
            f"Example: /ask claude What is Python?"
        )
        return
+
+ model_name = args[0].lower()
+ question = " ".join(args[1:])
+
+ # Validate model
+ if model_name not in MODEL_MAP:
+ available = ", ".join(MODEL_MAP.keys())
+ await update.message.reply_text(
+ f"Unknown model: {model_name}\n"
+ f"Available: {available}"
+ )
+ return
+
+ # Get project context if available (optional for /ask)
+ project = await get_selected_project(context)
+ project_context = f"Project: {project.name}\n" if project else ""
+
+ # Send "typing" indicator while waiting for AI
+ await update.message.chat.send_action("typing")
+
+ try:
+ client = get_ai_client()
        response = await client.complete(
            model=MODEL_MAP[model_name],  # resolve short name to full identifier
            messages=[{"role": "user", "content": question}],
            system_prompt=f"{project_context}You are a helpful AI assistant.",
        )
+
        # Format response, prefixed with the model name.
        # Note: raw model output can contain characters that break Telegram's
        # Markdown parsing; accepted for now per the keep-it-minimal scope.
        await update.message.reply_text(
            f"*{model_name.title()}:*\n\n{response}",
            parse_mode="Markdown",
        )
+ except Exception as e:
+ await update.message.reply_text(f"Error: {e}")
+ ```
+
+2. Update handlers/__init__.py:
+ - Import ask_command from discussion
+ - Add CommandHandler("ask", ask_command) to register_handlers
+
+ python -c "from moai.bot.handlers.discussion import ask_command; print('handler ok')"
+ /ask handler registered, validates model name, sends typing indicator, calls AI client
+
+
+
+ Task 3: Update help text and status
+ src/moai/bot/handlers/commands.py, src/moai/bot/handlers/status.py
+
+1. Update HELP_TEXT in commands.py to include:
+ ```
+ *Questions*
   /ask <model> <question> - Ask a single model
+ ```
+ Add after the Project Management section.
+
+2. Update status.py to show AI client status:
+ - Import get_ai_client (wrapped in try/except)
+ - In status_command, add a line showing AI router configured
+ - Example: "AI Router: requesty ✓" or "AI Router: not configured"
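The status line in item 2 could come from a small pure helper, which keeps status_command thin (the name `ai_status_line` is illustrative, not part of the plan):

```python
from __future__ import annotations


def ai_status_line(router: str | None, api_key: str | None) -> str:
    """Render the AI router line for /status."""
    if router and api_key:
        return f"AI Router: {router} ✓"
    return "AI Router: not configured"
```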
+
+ python -c "from moai.bot.handlers.commands import HELP_TEXT; print('/ask' in HELP_TEXT)"
+ Help shows /ask command, status shows AI router status
+
+
+
+
+
+Before declaring plan complete:
+- [ ] `ruff check src` passes
+- [ ] Bot starts without errors (with valid AI_API_KEY in env)
+- [ ] /help shows /ask command
+- [ ] /status shows AI router status
+- [ ] /ask without args shows usage
+- [ ] /ask with invalid model shows available models
+
+
+
+
+- AI client initialized in bot lifecycle
+- /ask command works with model validation
+- Help text updated with /ask
+- Status shows AI configuration
+- M3 milestone: Single model Q&A working
+
+
+