The Pipeline
How Synthezer transforms a rough idea into a verified, high-quality output through five gated stages.
What is the Pipeline?
The pipeline is Synthezer's core workflow. Instead of sending a single prompt to an AI and hoping for the best, Synthezer breaks the process into five sequential stages separated by gates. Each gate is a checkpoint — the AI must satisfy specific criteria before the pipeline advances to the next stage.
This gated approach prevents the most common failure mode of AI tools: the AI misunderstands your request early on and generates a long, detailed output that completely misses the point. In Synthezer, misunderstandings are caught at Stage 2, not after the AI has already done all the work.
The Five Stages
| Stage | Name | Who Acts | Purpose |
|---|---|---|---|
| 1 | Input | You | Write your prompt, select chips, attach images |
| 2 | Understanding | AI + You | AI asks clarifying questions until it fully understands your request |
| 3 | Research | AI + You | AI identifies research needs; you provide context and references |
| 4 | Implementation | AI + You | AI creates a detailed plan with a verification checklist |
| 5 | Output & PAE | AI | AI executes the plan and self-evaluates the result |
How Gates Work
A gate sits between each pair of consecutive stages. When a gate is processed, the AI receives all accumulated context from previous stages plus the user's chips, and produces the output needed for the next stage. Once a gate is processed, it locks — you cannot re-process it. This ensures the pipeline only moves forward and that each stage builds on confirmed work.
The four gates are:
- Gate 1 (Stage 1 → 2): Submits your input and activates AI understanding
- Gate 2 (Stage 2 → 3): Confirms understanding and generates research directives
- Gate 3 (Stage 3 → 4): Completes research and creates the implementation plan
- Gate 4 (Stage 4 → 5): Executes the plan and produces the final output
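The forward-only, lock-once behavior described above can be sketched as a small state machine. This is an illustrative model, not Synthezer's actual implementation; the class and method names are hypothetical.

```typescript
// Illustrative model of the gate sequence; not Synthezer's actual code.
type GateNumber = 1 | 2 | 3 | 4;

class PipelineGates {
  private processed = new Set<GateNumber>();

  // A gate may fire only once, and only after the previous gate has fired.
  canProcess(gate: GateNumber): boolean {
    if (this.processed.has(gate)) return false; // already locked
    if (gate > 1 && !this.processed.has((gate - 1) as GateNumber)) return false;
    return true;
  }

  process(gate: GateNumber): void {
    if (!this.canProcess(gate)) {
      throw new Error(`Gate ${gate} is locked or out of order`);
    }
    this.processed.add(gate); // lock: the pipeline only moves forward
  }
}
```

Because canProcess requires the previous gate to be locked first, the only valid order is 1 → 2 → 3 → 4, and re-processing a gate always fails.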
Data Flow
Every stage accumulates context. By the time the AI produces the final output at Stage 5, it has access to:
- Your original prompt and chips (from Stage 1)
- The Master Prompt — a refined, unambiguous version of your request (from Stage 2)
- The Research Master Guide — synthesized domain knowledge (from Stage 3)
- The implementation plan and verification checklist (from Stage 4)
This layered accumulation is what makes pipeline outputs significantly more thorough than single-shot prompting.
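As a rough sketch, the accumulated context can be pictured as one object that each stage extends. The field names below are assumptions based on the artifacts listed above, not Synthezer's actual schema.

```typescript
// Illustrative shape of the context that accumulates across stages.
// Field names are assumptions, keyed to the artifacts described above.
interface PipelineContext {
  prompt?: string;              // Stage 1: your original request
  chips?: string[];             // Stage 1: selected behavioral chips
  masterPrompt?: string;        // Stage 2: refined, unambiguous request
  researchMasterGuide?: string; // Stage 3: synthesized domain knowledge
  implementationPlan?: string;  // Stage 4: plan + verification checklist
}

// Each stage layers its artifact on top of everything gathered so far.
function accumulate(ctx: PipelineContext, patch: PipelineContext): PipelineContext {
  return { ...ctx, ...patch };
}
```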
Stage 1: Input
This is where everything begins. You provide three things: your prompt, your chips, and optionally, images.
The Prompt
Write a description of what you want the AI to accomplish. This can be anything from a simple question to a multi-page specification. Be as detailed or as brief as you want — the pipeline's understanding phase (Stage 2) will catch any gaps.
Chips
Chips are behavioral instructions that tell the AI how to work, not just what to do. They are organized into five categories:
| Category | Purpose | Example |
|---|---|---|
| AI Mind | How the AI should think and behave | PRECISION, CLARITY FIRST |
| Prerequisites | Steps the AI must complete before starting | ASK MISSING CONTEXT, READ CODEBASE |
| Implementation | What kind of work the AI will do | CODE GEN, DRAFTING |
| Tools | Tools the AI should assume are available | TERMINAL, BROWSER |
| No-Go Rules | Actions the AI must never take | NO SPECULATION, NO BREAKING CHANGES |
You select chips from your library by clicking them. Selected chips appear in their respective containers on the Stage 1 panel. Each chip has a description that the AI reads as a behavioral instruction — for example, the chip PRECISION might carry a description like "Prioritize exact, verifiable answers over general responses. Always cite sources when possible."
Chips are carried through every stage of the pipeline, not just Stage 1. The AI references your chips when asking questions (Stage 2), identifying research needs (Stage 3), creating its plan (Stage 4), and producing the final output (Stage 5). See the Chips documentation for a full guide.
Images
You can attach images to your pipeline input. These are sent to the AI as part of the Stage 2 understanding prompt, allowing it to analyze screenshots, diagrams, mockups, or any other visual context alongside your text prompt.
What Happens When You Submit
When you click Submit on Stage 1, the following sequence occurs:
- Your prompt, selected chips, and images are saved to the database.
- Gate 1 fires: the AI receives your full input along with chip definitions.
- The AI analyzes your request and either asks clarifying questions or confirms understanding immediately.
- Gate 1 locks. The pipeline advances to Stage 2.
- Chip usage counts are incremented for every chip you selected.
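For illustration, the Stage 1 submission body can be assembled as below. The field names mirror the curl example in the API Reference section of this page; treat the exact shape as an assumption.

```typescript
// Hypothetical builder for the Gate 1 request body; field names follow the
// curl example in the API Reference section, but the shape is an assumption.
interface Gate1Payload {
  prompt: string;
  ai_mind: string[];
  prerequisites: string[];
  implementation_area: string[];
  tools: string[];
  no_go_rules: string[];
}

function buildGate1Payload(
  prompt: string,
  chips: Partial<Omit<Gate1Payload, "prompt">> = {}
): Gate1Payload {
  return {
    prompt,
    ai_mind: chips.ai_mind ?? [],
    prerequisites: chips.prerequisites ?? [],
    implementation_area: chips.implementation_area ?? [],
    tools: chips.tools ?? [],
    no_go_rules: chips.no_go_rules ?? [],
  };
}
```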
Gate 1: Input → Understanding
Gate 1 is the first AI interaction. The AI receives your prompt, all five categories of chips (resolved with their full descriptions), and any attached images. It then does one of two things:
If Something is Unclear
The AI returns with status questioning. It provides:
- Preliminary understanding: A summary of what the AI already understands about your request.
- Clarifying questions: Up to 5 specific, targeted questions about aspects that are ambiguous or underspecified.
Both the preliminary understanding and the questions are saved to the communication log so the AI can reference them later and avoid repeating itself.
If Everything is Clear
The AI returns with status confirmed and immediately produces a Master Prompt — a refined, self-contained version of your request. This happens when your original prompt is already precise enough that no clarification is needed. The pipeline still advances to Stage 2, where you can review the Master Prompt before proceeding.
Example AI Response
```json
{
  "status": "questioning",
  "questions": [
    "Should the API support pagination, or will all results be returned at once?",
    "Do you need authentication on the endpoints, or are they public?",
    "What database are you using — PostgreSQL, MySQL, or something else?"
  ],
  "preliminary_understanding": "You want a REST API with CRUD endpoints for a blog platform. The API should handle posts, comments, and user profiles. I need to clarify a few technical details before proceeding."
}
```
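A client consuming this response might narrow on the status field. The union below is inferred from the example responses in this document; the master_prompt field name for the confirmed case is an assumption.

```typescript
// Union inferred from the example responses in this document; the
// master_prompt field name for the confirmed case is an assumption.
type Gate1Response =
  | { status: "questioning"; questions: string[]; preliminary_understanding: string }
  | { status: "confirmed"; master_prompt: string };

// Narrow on the status field to decide whether Stage 2 needs your answers.
function needsClarification(
  r: Gate1Response
): r is Extract<Gate1Response, { status: "questioning" }> {
  return r.status === "questioning";
}
```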
Stage 2: Understanding
Stage 2 is a back-and-forth conversation between you and the AI. The AI asks questions, you answer them, and the AI either asks follow-up questions or confirms its understanding.
How the Conversation Works
- The AI displays its questions (from Gate 1).
- You type your answers in the text area and submit.
- Your answer is added to the communication log — a running transcript of the conversation.
- The AI receives the full communication log (including its own previous questions and your answers) plus your original Stage 1 input and chips.
- The AI either asks new follow-up questions (status: questioning) or confirms understanding (status: confirmed).
- This loop repeats until the AI confirms.
The Master Prompt
When the AI confirms its understanding, it produces the Master Prompt. This is the single most important artifact in the pipeline. The Master Prompt is:
- Self-contained: Any AI could follow it without additional context.
- Precise: No ambiguity remains.
- Complete: Includes all constraints, tools, and behavioral guidelines from your chips.
- Actionable: Clear steps or outcomes are defined.
The Master Prompt is used as the primary reference for every subsequent stage. Think of it as the contract between you and the AI about what will be built.
Moving to Stage 3
Once the AI has confirmed and produced the Master Prompt, you can proceed by clicking Continue. This triggers Gate 2, which locks Stage 2 and advances the pipeline to the research phase.
Gate 2: Understanding → Research
Gate 2 can only fire when Stage 2 has a confirmed status and a Master Prompt. The AI receives the Master Prompt along with your chips and produces:
- Research question: A focused question asking you what context, links, or domain knowledge you can provide.
- Research areas: A list of topics the AI has identified as needing research or investigation.
- Initial observations: Notes on what the AI already knows that's relevant.
Gate 2 locks and the pipeline advances to Stage 3.
Example AI Response
```json
{
  "research_question": "Do you have any existing API documentation, database schemas, or style guides I should reference? Links to similar projects would also help.",
  "research_areas": [
    "REST API best practices for blog platforms",
    "Authentication patterns (JWT vs session-based)",
    "Database schema design for posts with nested comments",
    "Rate limiting strategies for public APIs"
  ],
  "initial_observations": "Based on the Master Prompt, this is a standard CRUD API. The main complexity will be in the comment threading system and the authentication layer."
}
```
Stage 3: Research
In Stage 3, you provide the AI with any relevant context, links, documentation, or domain knowledge. This is your opportunity to give the AI everything it needs to do the job well.
What to Provide
The AI has already told you what research areas it identified. You can respond with:
- Links to relevant documentation or articles
- Code snippets or schemas from your existing project
- Style guides or design specifications
- Domain-specific knowledge the AI might not have
- Constraints or preferences not covered in your original prompt
If you have nothing to add, you can submit an empty response. The AI will proceed with what it already knows.
The Research Master Guide
After you submit your research input, the AI synthesizes everything — the Master Prompt, the identified research areas, and your input — into a comprehensive Research Master Guide. This guide becomes the technical reference for the implementation phase.
The Research Master Guide typically includes sections like technical approach, best practices, constraints, and domain-specific considerations. It is stored alongside the pipeline and used in both Stage 4 (planning) and Stage 5 (execution).
Moving to Stage 4
Once the Research Master Guide is generated, Stage 3 is marked as complete. You can then trigger Gate 3 to advance to the implementation planning phase.
Gate 3: Research → Implementation
Gate 3 requires Stage 3 to be complete with a Research Master Guide. The AI receives the Master Prompt, the Research Master Guide, and your chips, and produces the implementation plan.
The AI generates:
- Implementation notes: Detailed notes on how the work will be done, key technical decisions, and architecture choices.
- Verification checklist: A structured checklist mapping your requirements, no-go rules, research findings, prerequisites, and AI Mind directives to specific parts of the plan.
- Ready to deploy: Whether the AI is ready to proceed or needs more input from you.
- Question: If not ready, a specific question about what the AI needs from you before it can proceed.
Gate 3 locks and the pipeline advances to Stage 4.
Stage 4: Implementation
Stage 4 presents the AI's implementation plan for your review. This is your last opportunity to provide feedback before the AI executes the plan.
The Implementation Plan
The plan includes detailed implementation notes and a verification checklist. The checklist ensures that:
- Every requirement from the Master Prompt is addressed
- No-Go rules are being respected
- Research findings from the Master Guide are being applied
- Prerequisites are satisfied
- AI Mind behavioral directives are being followed
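One way to picture the checklist is as a list of entries, each tying a requirement back to its source category and to the plan section that addresses it. The data model below is illustrative, not Synthezer's internal format.

```typescript
// Illustrative data model for verification checklist entries; the source
// categories mirror the bullets above, but the exact format is an assumption.
type ChecklistSource =
  | "master_prompt"
  | "no_go_rule"
  | "research_finding"
  | "prerequisite"
  | "ai_mind";

interface ChecklistItem {
  source: ChecklistSource;
  requirement: string;
  planSection: string; // where in the plan this requirement is addressed
  satisfied: boolean;
}

// The plan is only ready to deploy when every entry is covered.
function readyToDeploy(items: ChecklistItem[]): boolean {
  return items.length > 0 && items.every((i) => i.satisfied);
}
```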
Providing Feedback
If the plan looks good, you can deploy directly. If you want changes, type your feedback and submit. The AI will revise the implementation plan based on your feedback, regenerating the implementation notes and verification checklist.
You can provide feedback multiple times until you are satisfied with the plan. Each time, the AI rebuilds the plan with your latest feedback included.
Deploying
When you are ready, click Deploy. This triggers Gate 4, which is the final and most intensive AI call. The AI receives:
- The Master Prompt (what to build)
- The Research Master Guide (technical context)
- The implementation plan (how to build it)
- All five categories of chips with full descriptions
Gate 4: Implementation → Output
Gate 4 is the execution step. The AI follows the implementation plan and produces the final deliverable. This is typically the longest-running gate because the AI is generating the complete output.
The AI produces three things:
- Short output: A 2-3 sentence summary of what was created.
- Detailed output: The complete deliverable — all code, content, or artifacts you requested. This is the main result.
- Implementation notes: Notes about decisions made during execution, challenges encountered, or suggestions for improvement.
Gate 4 locks, Stage 4 is marked complete, and the pipeline advances to Stage 5.
Stage 5: Output & PAE
Stage 5 displays the final output and allows you to run the Post AI Evaluation (PAE).
The Output
Stage 5 shows both the short summary and the full detailed output from Gate 4. The detailed output is the main deliverable — copy it, use it, build on it.
PAE: Post AI Evaluation
PAE is a self-evaluation step where the AI critically reviews its own output against your original requirements. Click Generate PAE to trigger this evaluation.
The AI evaluates the output on three metrics, each scored from 0 to 100:
| Metric | What it Measures |
|---|---|
| Accuracy | Does the output correctly implement what was requested? |
| Completeness | Are all requirements addressed? Is anything missing? |
| Confidence | How confident is the AI in the quality of this output? |
The AI also provides written comments with detailed evaluation notes — what was done well, what could be improved, and any concerns or caveats. The PAE is designed to be brutally honest rather than flattering.
After PAE is generated, the pipeline status is set to complete. See the PAE Scoring documentation for more details on interpreting PAE results.
Example PAE Response
```json
{
  "accuracy": 92,
  "completeness": 87,
  "confidence": 85,
  "comments": "The API implementation correctly covers all CRUD endpoints for posts and comments. Authentication is properly implemented using JWT. However, the comment threading system only supports one level of nesting rather than unlimited depth as specified. Rate limiting is implemented but uses a simple in-memory store that won't work across multiple server instances."
}
```
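When consuming PAE results programmatically, a small helper can flag metrics that deserve a closer look. The 70-point threshold below is an arbitrary illustration, not a documented Synthezer cutoff.

```typescript
// Helper for reading PAE scores; the 70-point default threshold is an
// arbitrary illustration, not a documented Synthezer cutoff.
interface PaeResult {
  accuracy: number;     // 0-100
  completeness: number; // 0-100
  confidence: number;   // 0-100
  comments: string;
}

// Returns the metric names that fall below the given threshold.
function weakMetrics(pae: PaeResult, threshold = 70): string[] {
  return (["accuracy", "completeness", "confidence"] as const).filter(
    (m) => pae[m] < threshold
  );
}
```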
Pipeline Management
You can create, load, duplicate, and delete pipelines from the dashboard.
Creating a Pipeline
Click New Pipeline from the dashboard. You can give it a name and optionally select an AI model. Each pipeline gets a unique ID in the format pl_ followed by 12 random characters and a timestamp (e.g., pl_a8f3k2m9x1b4_lk7r8s).
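A generator matching the documented format might look like the following sketch. The lowercase character set and the base-36 timestamp encoding are assumptions; only the pl_ prefix, the 12 random characters, and the trailing timestamp segment come from the description above.

```typescript
// Hypothetical generator for the documented pl_ ID format. The alphabet
// and Date.now().toString(36) encoding are assumptions, not Synthezer's code.
function makePipelineId(): string {
  const alphabet = "abcdefghijklmnopqrstuvwxyz0123456789";
  let random = "";
  for (let i = 0; i < 12; i++) {
    random += alphabet[Math.floor(Math.random() * alphabet.length)];
  }
  const stamp = Date.now().toString(36); // compact timestamp segment
  return `pl_${random}_${stamp}`;
}
```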
Loading a Pipeline
Click any pipeline in the dashboard library to open it. The UI restores the pipeline's current state — all stages, prompts, chip selections, AI responses, and outputs are preserved exactly as you left them. You can continue from whatever stage the pipeline was on.
Duplicating a Pipeline
Right-click a pipeline in the library to access the context menu, then select Duplicate. This creates a copy of the pipeline with a new ID, allowing you to explore different approaches from the same starting point.
Deleting a Pipeline
Right-click a pipeline and select Delete, or use the delete option within the pipeline view. Deletion removes the pipeline and all associated stage data permanently. This action cannot be undone.
Pipeline Statuses
| Status | Meaning |
|---|---|
| draft | Pipeline has been created but Stage 1 has not been submitted yet. |
| active | Pipeline is in progress (Stages 2-4). At least one gate has been processed. |
| complete | Pipeline has reached Stage 5 and PAE has been generated. |
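The status transitions above can be summarized as a small derivation function. The input parameters are illustrative; Synthezer's actual state tracking may differ.

```typescript
// Sketch of deriving the status shown in the table above; the inputs are
// illustrative rather than Synthezer's actual state tracking.
type PipelineStatus = "draft" | "active" | "complete";

function deriveStatus(gatesProcessed: number, paeGenerated: boolean): PipelineStatus {
  if (paeGenerated) return "complete";     // Stage 5 reached, PAE generated
  if (gatesProcessed > 0) return "active"; // at least one gate processed
  return "draft";                          // Stage 1 not yet submitted
}
```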
API Reference
For developers and advanced users, here are the pipeline API endpoints. Every endpoint returns JSON that includes a success field set to true or false.
Pipeline CRUD
```
POST   /api/pipeline       # Create a new pipeline
GET    /api/pipeline       # List all pipelines (query: ?status=active&limit=50)
GET    /api/pipeline/:id   # Get a specific pipeline with all stage data
PATCH  /api/pipeline/:id   # Update pipeline metadata (name, ai_model)
DELETE /api/pipeline/:id   # Delete pipeline and all stage data
```
Gate Processing
```
POST /api/pipeline/:id/gate1   # Submit Stage 1 input, process Gate 1
POST /api/pipeline/:id/gate2   # Confirm understanding, process Gate 2
POST /api/pipeline/:id/gate3   # Complete research, process Gate 3
POST /api/pipeline/:id/gate4   # Deploy — execute implementation
```
Stage Interactions
```
POST /api/pipeline/:id/stage2/answer   # Answer AI questions (Stage 2)
POST /api/pipeline/:id/stage3/answer   # Provide research input (Stage 3)
POST /api/pipeline/:id/stage4/answer   # Provide implementation feedback (Stage 4)
POST /api/pipeline/:id/pae             # Generate PAE evaluation (Stage 5)
```
Example: Creating and Submitting a Pipeline
```shell
# 1. Create a pipeline
curl -X POST http://localhost:8090/api/pipeline \
  -H "Content-Type: application/json" \
  -d '{"name": "Blog API", "ai_model": "openclaw:main"}'

# 2. Submit Stage 1 (triggers Gate 1)
curl -X POST http://localhost:8090/api/pipeline/pl_abc123_xyz/gate1 \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Build a REST API for a blog platform",
    "ai_mind": ["PRECISION", "CORRECTNESS FIRST"],
    "prerequisites": ["IDENTIFY REQUIREMENTS"],
    "implementation_area": ["CODE GEN"],
    "tools": ["TERMINAL"],
    "no_go_rules": ["NO BREAKING CHANGES"],
    "gateway_url": "https://your-gateway.example.com",
    "gateway_token": "your-token"
  }'
```
Tips for Better Pipeline Results
Be specific in Stage 1, but don't stress about perfection
The whole point of the pipeline is that Stage 2 will catch what you missed. Write what you know, and let the AI ask about what you forgot.
Choose chips deliberately
Every chip you select affects the AI's behavior at every stage. Don't add chips "just in case." Pick the ones that genuinely apply to your task. A smaller, focused chip set produces better results than selecting everything.
Answer Stage 2 questions thoroughly
The quality of the Master Prompt depends on the quality of your answers. Vague answers lead to vague Master Prompts. Take the time to answer each question completely.
Provide research in Stage 3
Even if the AI says it knows enough, providing additional context almost always improves the output. Paste in documentation, link to examples, or describe domain-specific constraints.
Review the plan in Stage 4 before deploying
The verification checklist is there for a reason. Read through it and make sure every item matches your expectations. Give feedback if anything looks off — it's much cheaper to fix the plan than to redo the entire output.
Use PAE to validate, not just to score
The PAE comments are more valuable than the numbers. Read the AI's self-critique to understand what might need manual review or improvement.