# AI Roundtable

A multi-AI collaborative brainstorming platform where multiple AI models (Claude, GPT, Gemini, etc.) discuss topics together, see each other's responses, and work toward consensus.

## Concept

Instead of asking one AI a question, ask a "team" of AIs. They each provide initial thoughts, then engage in a structured discussion where each can see and respond to the others' contributions. The result is a richer, more diverse exploration of ideas that gets summarized into actionable markdown documents.

## Core Features

### Projects/Context

- **Projects** are containers for related discussions
- Each project maintains its own context and conversation history
- Projects have configurable model teams (e.g., Claude + GPT + Gemini)
- CRUD operations for project management
- Projects persist and can be revisited/continued

### Two Question Modes

#### 1. Open Question (`/open`)

- Question sent to ALL models in parallel
- Each model responds independently without seeing others
- Fast way to get diverse initial perspectives
- Results displayed together for comparison
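
A minimal sketch of the parallel fan-out with `asyncio`; the `ask_model` stub is hypothetical, standing in for the real router call:

```python
import asyncio

async def ask_model(model: str, question: str) -> str:
    """Stub for one upstream call; a real version would hit the router API."""
    return f"[{model}] thoughts on: {question}"

async def open_question(models: list[str], question: str) -> dict[str, str]:
    """Send the same question to every model at once and collect the replies."""
    replies = await asyncio.gather(*(ask_model(m, question) for m in models))
    return dict(zip(models, replies))

results = asyncio.run(open_question(["claude", "gpt", "gemini"], "Which DB for the POC?"))
```

Because the calls run concurrently, total latency is roughly that of the slowest model rather than the sum of all three.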

#### 2. Discussion Mode (`/discuss [rounds]`)

- Sequential turn-based discussion
- Each model sees the full conversation history including other models' responses
- Models can agree, disagree, build upon, or challenge each other
- Continues until:
  - Consensus is reached (detected by system)
  - Round limit is hit (user-defined, e.g., `/discuss 5`)
  - User manually stops (`/stop`)
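
The sequential loop can be sketched like this; the `ask` callable is a stand-in for the real model client, injected so the loop stays testable:

```python
def run_discussion(models: list[str], question: str, rounds: int, ask) -> list[tuple[str, str]]:
    """Run `rounds` sequential rounds; each model answers with the full transcript so far."""
    history = [("user", question)]
    for _ in range(rounds):
        for model in models:
            transcript = "\n".join(f"{who}: {text}" for who, text in history)
            reply = ask(model, transcript)  # one API call per model per turn
            history.append((model, reply))
    return history

# With a fake `ask`, 2 models x 3 rounds yields 1 question + 6 replies.
log = run_discussion(["claude", "gpt"], "Topic?", 3, lambda m, t: f"{m} says ok")
```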

### Iteration Flow

```
┌─────────────────────────────────────────────────────────────┐
│ /open "Initial question"                                    │
│                              ↓                              │
│ [Parallel responses from all models]                        │
│                              ↓                              │
│ /discuss 5                                                  │
│                              ↓                              │
│ [5 rounds of sequential discussion]                         │
│                              ↓                              │
│ /consensus                                                  │
│                              ↓                              │
│ [AI-generated summary of agreements/disagreements]          │
│                              ↓                              │
│ /open "Follow-up question" (includes consensus as context)  │
│                              ↓                              │
│ [Cycle continues...]                                        │
│                              ↓                              │
│ /export → Final markdown document                           │
└─────────────────────────────────────────────────────────────┘
```

### Direct Model Interaction

- `@claude <message>` - Direct a question/comment to a specific model
- Other models see this exchange as part of the context
- Useful for drilling down on a specific model's point
## Architecture

```
┌────────────────────────────────────────────────────────────────┐
│                        Web UI (Phase 2)                        │
│  - Project CRUD, settings, model config                        │
│  - Full discussion timeline view                               │
│  - Export/download markdown                                    │
│  - Rich text, syntax highlighting                              │
│  - Drag-drop, edit, reorder                                    │
└────────────────────────────────────────────────────────────────┘
                                │
                                ▼
┌────────────────────────────────────────────────────────────────┐
│                    Python Backend (FastAPI)                    │
│                                                                │
│  ┌──────────────────────────────────────────────────────────┐  │
│  │                        API Layer                         │  │
│  │       /projects, /discussions, /messages, /export        │  │
│  └──────────────────────────────────────────────────────────┘  │
│                               │                                │
│  ┌─────────────┐  ┌─────────────┐  ┌──────────────────────┐    │
│  │  Projects   │  │ Discussions │  │   AI Orchestrator    │    │
│  │  Service    │  │  Service    │  │                      │    │
│  │             │  │             │  │  - Round management  │    │
│  │  - CRUD     │  │ - Open mode │  │  - Context building  │    │
│  │  - Settings │  │ - Discuss   │  │  - Consensus detect  │    │
│  │  - Export   │  │ - History   │  │  - Model routing     │    │
│  └─────────────┘  └─────────────┘  └──────────────────────┘    │
│                               │                                │
│              ┌───────────────┴───────────────┐                 │
│              │       Database (SQLite)       │                 │
│              │    projects, discussions,     │                 │
│              │     messages, consensus       │                 │
│              └───────────────────────────────┘                 │
└────────────────────────────────────────────────────────────────┘
        │                                      │
        ▼                                      ▼
┌──────────────────┐              ┌─────────────────────────┐
│   Telegram Bot   │              │  Requesty / OpenRouter  │
│  (thin client)   │              │                         │
│                  │              │  - Claude (Anthropic)   │
│  - Commands only │              │  - GPT (OpenAI)         │
│  - No state      │              │  - Gemini (Google)      │
│  - Calls API     │              │  - Llama, Mistral, etc  │
└──────────────────┘              └─────────────────────────┘
```
## Data Model

```
Project
├── id: uuid
├── name: string
├── created_at: timestamp
├── updated_at: timestamp
├── models: ["claude-opus-4", "gpt-4o", "gemini-2-pro"]
├── settings:
│   ├── default_rounds: int (default: 3)
│   ├── consensus_threshold: float (optional, for auto-detect)
│   └── system_prompt_override: string (optional)
└── discussions: Discussion[]

Discussion
├── id: uuid
├── project_id: uuid (FK)
├── question: string
├── type: "open" | "discuss"
├── status: "active" | "completed"
├── created_at: timestamp
├── rounds: Round[]
└── consensus: Consensus (nullable)

Round
├── id: uuid
├── discussion_id: uuid (FK)
├── round_number: int
├── type: "parallel" | "sequential"
└── messages: Message[]

Message
├── id: uuid
├── round_id: uuid (FK)
├── model: string ("claude", "gpt", "gemini")
├── content: text
├── timestamp: timestamp
└── is_direct: boolean (true if @mentioned)

Consensus
├── id: uuid
├── discussion_id: uuid (FK)
├── agreements: string[] (bullet points)
├── disagreements: [{topic, positions: {model: position}}]
├── generated_at: timestamp
└── generated_by: string (which model summarized)
```
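
As a rough illustration, the tree above maps onto plain Python dataclasses; the Phase 1 backend would use SQLAlchemy models instead, but the field names mirror the schema:

```python
from dataclasses import dataclass, field
from datetime import datetime
from uuid import uuid4

@dataclass
class Message:
    model: str                    # "claude", "gpt", "gemini"
    content: str
    is_direct: bool = False       # True if the message came via an @mention
    id: str = field(default_factory=lambda: str(uuid4()))
    timestamp: datetime = field(default_factory=datetime.utcnow)

@dataclass
class Round:
    round_number: int
    type: str                     # "parallel" | "sequential"
    messages: list = field(default_factory=list)

@dataclass
class Discussion:
    question: str
    type: str = "open"            # "open" | "discuss"
    status: str = "active"        # "active" | "completed"
    rounds: list = field(default_factory=list)

@dataclass
class Project:
    name: str
    models: list = field(default_factory=list)
    discussions: list = field(default_factory=list)
```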

## Telegram Commands (Phase 1)

### Project Management

```
/projects                    - List all projects
/project new "Name"          - Create new project
/project select <id|name>    - Switch to project
/project delete <id>         - Delete project
/project models claude,gpt   - Set models for current project
/project info                - Show current project details
```

### Discussion

```
/open <question>    - Ask all models (parallel)
/discuss [rounds]   - Start sequential discussion (default: 3 rounds)
/next               - Trigger next round manually
/stop               - Stop current discussion
@claude <message>   - Direct message to specific model
@gpt <message>
@gemini <message>
```
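
Routing these messages is mostly string parsing. A small, illustrative dispatcher (command names taken from the tables above; the 3-round default is assumed from the `/discuss` description):

```python
import re

def parse_command(text: str):
    """Classify an incoming message into a (kind, payload) pair."""
    text = text.strip()
    if m := re.match(r"^/discuss(?:\s+(\d+))?$", text):
        return ("discuss", int(m.group(1) or 3))   # default: 3 rounds
    if m := re.match(r"^/open\s+(.+)$", text, re.S):
        return ("open", m.group(1))
    if m := re.match(r"^@(claude|gpt|gemini)\s+(.+)$", text, re.S):
        return ("direct", (m.group(1), m.group(2)))
    return ("unknown", text)
```

The bot handler can then switch on the first element of the tuple and forward the payload to the backend API.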

### Output

```
/consensus           - Generate consensus summary
/summarize           - Summarize current discussion
/export              - Export full project as markdown
/export discussion   - Export current discussion only
```

### Utility

```
/status        - Show current project/discussion state
/history [n]   - Show last n messages
/clear         - Clear current discussion (confirm required)
/help          - Show commands
```

## System Prompts

### Base System Prompt (all models)

```
You are participating in a roundtable discussion with other AI models.
Other participants: {list of models}
Current topic: {project name}
Question: {current question}

Guidelines:
- Be concise but substantive
- You can agree, disagree, or build upon others' points
- Reference other models by name when responding to their points
- Focus on practical, actionable insights
- If you reach agreement with others, state it clearly
```
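
Filling the template could look like this; `build_system_prompt` is a hypothetical helper that drops the prompted model itself from the participant list:

```python
BASE_PROMPT = """\
You are participating in a roundtable discussion with other AI models.
Other participants: {others}
Current topic: {project}
Question: {question}"""

def build_system_prompt(model: str, team: list[str], project: str, question: str) -> str:
    """Fill the template, listing every participant except the model being prompted."""
    others = ", ".join(m for m in team if m != model)
    return BASE_PROMPT.format(others=others, project=project, question=question)

prompt = build_system_prompt("claude", ["claude", "gpt", "gemini"], "AI Roundtable", "Scope?")
```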

### Consensus Detection Prompt

```
Review the discussion above. Identify:
1. Points where all models agree
2. Points where models disagree (list each position)
3. Open questions that weren't resolved

Format as structured JSON.
```
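
The summarizer's reply then needs to be parsed. A defensive sketch, assuming the JSON shape implied by the Consensus data model (`agreements`, `disagreements`, `open_questions` keys):

```python
import json

def parse_consensus(reply: str) -> dict:
    """Parse the summarizer's JSON reply, filling in any missing keys."""
    data = json.loads(reply)
    for key in ("agreements", "disagreements", "open_questions"):
        data.setdefault(key, [])
    return data

# Hypothetical reply in the shape the prompt asks for.
raw = """{
  "agreements": ["SQLite is enough for the POC"],
  "disagreements": [
    {"topic": "frontend", "positions": {"claude": "Vue 3", "gpt": "Next.js"}}
  ]
}"""
consensus = parse_consensus(raw)
```

In practice the model may wrap the JSON in prose or code fences, so a production parser would first extract the JSON substring before calling `json.loads`.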

## Export Format

```markdown
# {Project Name}

**Date:** {date}
**Models:** Claude Opus 4, GPT-4o, Gemini 2 Pro
**Discussions:** {count}

---

## Discussion 1: {Question}

### Initial Responses (Open)

**Claude:**
> {response}

**GPT:**
> {response}

**Gemini:**
> {response}

### Discussion (5 rounds)

**Round 1:**

**Claude:** {response}

**GPT:** {response}

**Gemini:** {response}

**Round 2:**
...

### Consensus

**Agreements:**
- {point 1}
- {point 2}

**Disagreements:**
- **Topic:** {topic}
  - Claude: {position}
  - GPT: {position}

**Open Questions:**
- {question}

---

## Discussion 2: {Follow-up Question}
...

---

## Final Summary

{AI-generated executive summary of all discussions}
```
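
A small slice of the exporter could look like this; the function name and round representation are illustrative, not the final `exporter.py` API:

```python
def render_discussion(question: str, rounds: list[dict]) -> str:
    """Render one discussion's rounds into the markdown layout sketched above."""
    lines = [f"## Discussion: {question}", ""]
    for number, replies in enumerate(rounds, start=1):
        lines.append(f"**Round {number}:**")
        lines.append("")
        for model, reply in replies.items():
            lines.append(f"**{model.capitalize()}:** {reply}")
            lines.append("")
    return "\n".join(lines)

doc = render_discussion("Which DB?", [{"claude": "SQLite", "gpt": "Postgres"}])
```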

## Phase 1: Telegram POC

### Scope

- Single-user (your Telegram only)
- SQLite database
- Basic project CRUD via commands
- Open and discuss modes working
- Export to markdown (sent as document)
- Requesty/OpenRouter integration

### Tech Stack

- Python 3.11+
- python-telegram-bot (async)
- FastAPI (internal API, optional for phase 1)
- SQLite + SQLAlchemy
- httpx for API calls
- Requesty or OpenRouter for model routing

### File Structure

```
ai-roundtable/
├── bot/
│   ├── __init__.py
│   ├── main.py            # Bot entry point
│   ├── handlers/
│   │   ├── projects.py    # /project commands
│   │   ├── discussion.py  # /open, /discuss, @mentions
│   │   └── export.py      # /export, /consensus
│   └── middleware.py      # Auth, logging
├── core/
│   ├── __init__.py
│   ├── models.py          # SQLAlchemy models
│   ├── database.py        # DB connection
│   ├── orchestrator.py    # AI round management
│   ├── ai_client.py       # Requesty/OpenRouter wrapper
│   └── exporter.py        # Markdown generation
├── config.py              # Settings, API keys
├── requirements.txt
└── README.md
```

### MVP Milestones

1. **M1:** Bot responds to /help, /status
2. **M2:** Project CRUD working
3. **M3:** Single model Q&A working (no discussion yet)
4. **M4:** Open mode (parallel) working with multiple models
5. **M5:** Discuss mode (sequential) working
6. **M6:** Consensus generation working
7. **M7:** Export to markdown working
8. **M8:** @mention direct messages working

## Phase 2: Web UI

### Additional Features

- Visual timeline of discussions
- Project dashboard
- Model performance/cost tracking
- Edit/delete individual messages
- Branching discussions (explore alternatives)
- Share/collaborate (multi-user)
- Templates for common brainstorm types

### Tech Stack Options

- **Frontend:** Vue 3 + Vite or React + Next.js
- **Backend:** FastAPI (expand from Phase 1)
- **Database:** PostgreSQL (upgrade from SQLite)
- **Auth:** Simple token or OAuth
- **Hosting:** Dockge/Docker on homelab

## Cost Considerations

Each discussion round involves multiple API calls:

- Open question: N calls (one per model)
- Discussion round: N calls
- Consensus: 1 call

Example: 3 models, 1 open + 5 discuss rounds + consensus = 3 + 15 + 1 = 19 API calls
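
The arithmetic above generalizes to a one-liner worth keeping around for cost estimates (hypothetical helper):

```python
def api_calls(models: int, open_questions: int = 1, discuss_rounds: int = 0,
              consensus: bool = False) -> int:
    """Upstream calls per cycle: N per open question, N per discussion round, 1 for consensus."""
    return models * (open_questions + discuss_rounds) + (1 if consensus else 0)

calls = api_calls(models=3, open_questions=1, discuss_rounds=5, consensus=True)  # 19
```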

Use Requesty/OpenRouter for:

- Single API key for all models
- Cost tracking
- Fallback routing
- Rate limit handling

## Future Ideas

- **Personas:** Configure models with different perspectives (optimist, critic, pragmatist)
- **Voting:** Models vote on proposals, user sees tally
- **Memory:** Cross-project context (reference past conclusions)
- **Triggers:** Automated discussions on schedule or webhook
- **Voice:** Voice memos transcribed and fed to discussion

---

*Created: 2026-01-16*

*Status: Planning*

*Phase: 1 (Telegram POC)*