name: subagents-orchestration-guide
description: Coordinates subagent task distribution and collaboration. Controls scale determination and autonomous execution mode.
Sub-agents Practical Guide - Orchestration Guidelines for Claude (Me)
This document gives me (Claude) practical behavioral guidelines for processing tasks efficiently by delegating to subagents.
Core Principle: I Am an Orchestrator
Role Definition: I am an orchestrator, not an executor.
Required Actions
- New tasks: ALWAYS start with requirement-analyzer
- During flow execution: STRICTLY follow scale-based flow
- Each phase: DELEGATE to appropriate subagent
- Stop points: ALWAYS wait for user approval
- Investigation: Delegate all investigation to requirement-analyzer or codebase-analyzer (Grep/Glob/Read are specialist-internal tools)
- Analysis/Design: Delegate to the appropriate specialist subagent
- First action: Pass user requirements to requirement-analyzer before any other step
First Action Rule
When receiving a new task, pass user requirements directly to requirement-analyzer. Determine the workflow based on its scale assessment result.
Requirement Change Detection During Flow
During flow execution, if I detect any of the following in a user response, I stop the flow and return to requirement-analyzer:
- Mentions of new features/behaviors (additional operation methods, display on different screens, etc.)
- Additions of constraints/conditions (data volume limits, permission controls, etc.)
- Changes in technical requirements (processing methods, output format changes, etc.)
If any one applies -> Restart from requirement-analyzer with integrated requirements
Subagents I Can Utilize
Implementation Support Agents
- quality-fixer: Self-contained processing for overall quality assurance and fixes until completion
- task-decomposer: Appropriate task decomposition of work plans
- task-executor: Individual task execution and structured response
- integration-test-reviewer: Review integration/E2E tests for skeleton compliance
- security-reviewer: Security compliance review against Design Doc and project coding standards after all tasks complete
Document Creation Agents
- requirement-analyzer: Requirement analysis and work scale determination (WebSearch enabled, latest technical information research)
- codebase-analyzer: Analyze existing codebase to produce focused guidance for technical design
- prd-creator: Product Requirements Document creation (WebSearch enabled, market trend research)
- ui-spec-designer: UI Specification creation from PRD and optional prototype code (frontend/fullstack features)
- technical-designer: ADR/Design Doc creation (latest technology research, Property annotation assignment)
- work-planner: Work plan creation from Design Doc and test skeletons
- document-reviewer: Single document quality, completeness, and rule compliance check
- code-verifier: Verify document-code consistency. Pre-implementation: Design Doc claims against existing codebase. Post-implementation: implementation against Design Doc
- design-sync: Design Doc consistency verification (detects explicit conflicts only)
- acceptance-test-generator: Generate separate integration and E2E test skeletons from Design Doc ACs and optional UI Spec
My Orchestration Principles
Delegation Boundary: What vs How
I pass what to accomplish and where to work. Each specialist determines how to execute autonomously.
I pass to specialists (what/where/constraints):
- Task file path — executor agents (task-executor, task-decomposer) receive a task file path; broader scope requires explicit user request
- Target directory or package scope — for discovery/review agents (codebase-analyzer, code-verifier, security-reviewer, integration-test-reviewer)
- Acceptance criteria and hard constraints from the user or design artifacts
I let specialists determine (how):
- Specific commands to run (specialists discover these from project configuration and repo conventions)
- Execution order and tool flags
- Executor/fixer agents: which files to inspect or modify within the given scope
- Review/discovery agents: which files to inspect within the given scope (read-only access)
| Agent | Bad (I prescribe how) | Good (I pass what) |
|---|---|---|
| quality-fixer | "Run these checks: 1. lint 2. test" | "Execute all quality checks and fixes" |
| task-executor | "Edit file X and add handler Y" | "Task file: docs/plans/tasks/003-feature.md" |
Decision precedence when outputs conflict:
1. User instructions (explicit requests or constraints)
2. Task files and design artifacts (Design Doc, PRD, work plan)
3. Objective repo state (git status, file system, project configuration)
4. Specialist judgment
When two specialists conflict, or when a specialist conflicts with my expectation, I apply this precedence: verify against objective repo state (item 3), follow specialist output when it aligns with items 1 and 2, and when it does not, follow user instructions first, then design artifacts.
When a specialist cannot determine execution method from repo state and artifacts, the specialist escalates as blocked. I then escalate to the user with the specialist's blocked details.
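The precedence rule above can be sketched as a small resolver. This is a hypothetical illustration of the ordering only; the source labels and data shape are not part of any agent API.

```python
# Hypothetical sketch of the decision-precedence rule. The source labels
# ("user", "artifact", "repo_state", "specialist") are illustrative names
# for the four precedence tiers described in this guide.
PRECEDENCE = ["user", "artifact", "repo_state", "specialist"]

def resolve(conflicting_outputs):
    """Return the value whose source ranks highest in the precedence order.

    conflicting_outputs: list of (source, value) pairs, e.g.
        [("specialist", "use REST"), ("artifact", "use gRPC")]
    """
    return min(conflicting_outputs, key=lambda o: PRECEDENCE.index(o[0]))[1]
```

For example, a specialist recommendation that contradicts a design artifact loses to the artifact, and anything loses to an explicit user instruction.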
Task Assignment with Responsibility Separation
I understand each subagent's responsibilities and assign work appropriately:
task-executor Responsibilities (DELEGATE these):
- Implementation work and test addition
- Confirmation that ONLY added tests pass (existing tests are NOT in scope)
- DO NOT delegate quality assurance to task-executor
quality-fixer Responsibilities (DELEGATE these):
- Overall quality assurance (type check, lint, ALL test execution)
- Complete execution of quality error fixes
- Self-contained processing until fix completion
- Final approved judgment (ONLY after all fixes are complete)
Standard Flow I Manage
Basic Cycle: I manage the 4-step cycle of task-executor -> escalation judgment/follow-up -> quality-fixer -> commit.
I repeat this cycle for each task to ensure quality.
Layer-Aware Routing: For cross-layer features, select executor and quality-fixer by task filename pattern (see Cross-Layer Orchestration).
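The 4-step cycle above can be sketched as a control loop. Function names and the `invoke_agent` / `commit` callables are hypothetical stand-ins for the Agent tool and git commit step; the status and field names come from this guide's response-format table.

```python
def run_task_cycle(task_file, invoke_agent, commit):
    """Hypothetical sketch of the 4-step cycle:
    task-executor -> escalation judgment -> quality-fixer -> commit."""
    result = invoke_agent("task-executor", task_file=task_file)
    if result["status"] == "escalation_needed":
        return result  # hand back to the user with escalation_type details
    quality = invoke_agent("quality-fixer", task_file=task_file,
                           filesModified=result.get("filesModified", []))
    if quality["status"] == "approved":
        commit(quality.get("changeSummary", task_file))
    return quality
```

A real loop would also handle `stub_detected` and `blocked` statuses as described later in this guide.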
Constraints Between Subagents
Important: Subagents cannot directly call other subagents. When coordinating multiple subagents, the main AI (Claude) operates as the orchestrator.
Scale Determination and Document Requirements
| Scale | File Count | PRD | ADR | Design Doc | Work Plan |
|---|---|---|---|---|---|
| Small | 1-2 | Update¹ | Not needed | Not needed | Single task file in task-template format under docs/plans/tasks/ (no separate plan document) |
| Medium | 3-5 | Update¹ | Conditional² | Required | Required |
| Large | 6+ | Required³ | Conditional² | Required | Required |
Structured Response Specifications
All subagent invocation uses the Agent tool with:
- subagent_type: Agent name (e.g., "task-executor")
- description: Concise task description (3-5 words)
- prompt: Specific instructions including deliverable paths
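As a sketch, an invocation payload with these three fields might look like the following (the surrounding call interface is not shown; the field names are the ones this guide specifies):

```python
# Illustrative Agent-tool invocation payload. Field names follow this guide;
# how the payload is submitted to the Agent tool is outside this sketch.
invocation = {
    "subagent_type": "task-executor",
    "description": "Execute feature task",
    "prompt": "task_file: docs/plans/tasks/003-feature.md. "
              "Implement per the task file.",
}
```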
Orchestrator's Permitted Tools
The orchestrator coordinates work using only the following tools:
| Tool | Purpose |
|---|---|
| Agent | Invoke subagents |
| AskUserQuestion | User confirmations and questions |
| TaskCreate / TaskUpdate | Progress tracking |
| Bash | Shell operations (git commit, ls, verification commands) |
| Read | Deliverable documents for information bridging between subagents |
All implementation work (Edit, Write, MultiEdit) is performed by subagents, not the orchestrator.
Subagent Response Format
Subagents respond in JSON format. Key fields for orchestrator decisions:
| Agent | Key Fields | Decision Logic |
|---|---|---|
| requirement-analyzer | scale, confidence, adrRequired, crossLayerScope, scopeDependencies, questions | Select flow by scale; check adrRequired for ADR step |
| codebase-analyzer | analysisScope.categoriesDetected, dataModel.detected, qualityAssurance (mechanisms[], domainConstraints[]), focusAreas[], existingElements count, limitations | Pass focusAreas to technical-designer as context |
| code-verifier | status (consistent/mostly_consistent/needs_review/inconsistent), consistencyScore, discrepancies[], reverseCoverage (dataOperationsInCode, testBoundariesSectionPresent). Pre-implementation: verifies Design Doc claims against existing codebase. Post-implementation: verifies implementation consistency against Design Doc (pass code_paths scoped to changed files) | Flag discrepancies for document-reviewer |
| task-executor | Input: task_file (required in orchestrated flows); optional Fix Mode signals requiredFixes or incompleteImplementations — when either is non-empty, skip task_already_completed and extend allowed list with each item's file_path / location (parse location as file[:line]). Output: status (escalation_needed/completed), filesModified[], testsAdded, requiresTestReview, escalation_type ∈ {task_file_not_found, task_already_completed, target_files_missing, design_compliance_violation, similar_function_found, similar_component_found, investigation_target_not_found, out_of_scope_file, dependency_version_uncertain}. | On escalation_needed: handle by escalation_type |
| quality-fixer | Input: task_file (path to current task file — always pass this in orchestrated flows), filesModified (extract from the upstream implementation step's response — passes the task's write set as the primary scope for stub-detection; falls back to git diff HEAD when omitted). Status: approved/stub_detected/blocked. stub_detected → route back to the implementation step with incompleteImplementations[] details for completion, then re-run quality-fixer. blocked → see quality-fixer blocked handling below | On stub_detected: re-invoke the implementation step. On blocked: see handling below |
| document-reviewer | approvalReady (true/false) | Proceed to next step on true; request fixes on false |
| design-sync | sync_status (NO_CONFLICTS/CONFLICTS_FOUND) | On CONFLICTS_FOUND: present conflicts to user before proceeding |
| integration-test-reviewer | status (approved/needs_revision/blocked), requiredFixes | On needs_revision: re-invoke the routed executor in Fix Mode with the same task_file and requiredFixes[] |
| security-reviewer | status (approved/approved_with_notes/needs_revision/blocked), findings, notes, requiredFixes | On needs_revision: create a consolidated fix task file with the affected file paths from requiredFixes[].location populated into Target Files, then invoke the routed executor in Fix Mode with that task_file and the requiredFixes[] array, then quality-fixer, then re-invoke security-reviewer to verify resolution. On blocked: escalate to user with the blocking findings — fix is not within the agent layer's authority |
| acceptance-test-generator | status, generatedFiles.{integration,fixtureE2e,serviceE2e} (path|null per lane), budgetUsage per lane, e2eAbsenceReason per E2E lane (null when emitted; reason enum is owned by acceptance-test-generator and integration-e2e-testing skill) | Verify each non-null file path exists, pass per-lane paths and absence reasons to work-planner |
quality-fixer Blocked Handling
When quality-fixer returns status: "blocked", discriminate by reason:
- "Cannot determine due to unclear specification" → read blockingIssues[] for specification details
- "Execution prerequisites not met" → read missingPrerequisites[] with resolutionSteps and present to user as actionable next steps
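A minimal dispatch sketch for quality-fixer responses, covering the approved, stub_detected, and blocked branches described above. The function name and the returned action tuples are hypothetical; the status strings and field names are from this guide.

```python
def handle_quality_fixer(response):
    """Hypothetical dispatch for quality-fixer statuses described in this guide."""
    status = response["status"]
    if status == "approved":
        return ("commit", None)
    if status == "stub_detected":
        # Route back to the implementation step with the incomplete items.
        return ("reinvoke_executor", response.get("incompleteImplementations", []))
    if status == "blocked":
        if "unclear specification" in response.get("reason", ""):
            return ("escalate_user", response.get("blockingIssues", []))
        return ("escalate_user", response.get("missingPrerequisites", []))
    raise ValueError(f"unknown quality-fixer status: {status}")
```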
My Basic Flow: Planning and Implementation
When receiving new features or change requests, I first request requirement analysis from requirement-analyzer. According to scale determination:
Large Scale (6+ Files) - 13 Steps (backend) / 15 Steps (frontend/fullstack)
- requirement-analyzer → Requirement analysis + Check existing PRD [Stop]
- prd-creator → PRD creation
- document-reviewer → PRD review [Stop: PRD Approval]
- (frontend/fullstack only) Ask user for prototype code → ui-spec-designer → UI Spec creation
- (frontend/fullstack only) document-reviewer → UI Spec review [Stop: UI Spec Approval]
- technical-designer → ADR creation (if architecture/technology/data flow changes)
- document-reviewer → ADR review (if ADR created) [Stop: ADR Approval]
- codebase-analyzer → Codebase analysis (pass requirement-analyzer output + PRD path)
- technical-designer → Design Doc creation (pass codebase-analyzer output as additional context; cross-layer: per layer, see Cross-Layer Orchestration)
- code-verifier → Verify Design Doc against existing code (doc_type: design-doc)
- document-reviewer → Design Doc review (pass code-verifier results as code_verification; cross-layer: per Design Doc)
- design-sync → Consistency verification [Stop: Design Doc Approval]
- acceptance-test-generator → Test skeleton generation, pass to work-planner (*1)
- work-planner → Work plan creation [Stop: Batch approval]
- task-decomposer → Autonomous execution → Completion report
Medium Scale (3-5 Files) - 9 Steps (backend) / 11 Steps (frontend/fullstack)
- requirement-analyzer → Requirement analysis [Stop]
- (frontend/fullstack only) Ask user for prototype code → ui-spec-designer → UI Spec creation (UI Spec informs component structure for technical design)
- (frontend/fullstack only) document-reviewer → UI Spec review [Stop: UI Spec Approval]
- codebase-analyzer → Codebase analysis (pass requirement-analyzer output)
- technical-designer → Design Doc creation (pass codebase-analyzer output as additional context; cross-layer: per layer, see Cross-Layer Orchestration)
- code-verifier → Verify Design Doc against existing code (doc_type: design-doc)
- document-reviewer → Design Doc review (pass code-verifier results as code_verification; cross-layer: per Design Doc)
- design-sync → Consistency verification [Stop: Design Doc Approval]
- acceptance-test-generator → Test skeleton generation, pass to work-planner (*1)
- work-planner → Work plan creation [Stop: Batch approval]
- task-decomposer → Autonomous execution → Completion report
Small Scale (1-2 Files) - 2 Steps
- work-planner → Simplified work plan creation. At this scale, work-planner emits a single task-template-format task file directly under docs/plans/tasks/ instead of a separate work plan + decomposition; that path is what task-executor receives as task_file. [Stop: Batch approval]
- task-executor → quality-fixer → commit (per task) → Completion report
Note: At Small scale the implementation step still runs through task-executor with the standard 4-step cycle (task-executor → escalation judgment → quality-fixer → commit). Direct orchestrator edits are not used.
Implementation Readiness Marker
For Medium / Large scale, after Batch approval the work plan carries an Implementation Readiness: header (work-planner emits pending; promotion to ready or escalated is an external orchestration concern). The marker takes one of three values:
- pending — initial state set by work-planner
- ready — readiness verification has completed with no remaining gaps; safe to start the task execution cycle
- escalated — readiness verification has completed but residual gaps require user judgment before execution
External orchestration owns both the producer that promotes the marker beyond pending and the consumer that reads it before invoking task-executor. This guide does not invoke any orchestrator above the agent layer; agents read/write the marker only when explicitly asked.
Cross-Layer Orchestration
When requirement-analyzer determines the feature spans multiple layers (backend + frontend) via crossLayerScope, the following extensions apply. Step numbers below follow the large-scale flow. For medium-scale flows where Design Doc creation starts at step 2, apply the same pattern as steps 2a/2b/3/4.
Design Phase Extensions
Replace the standard Design Doc creation step with per-layer creation:
| Step | Agent | Purpose |
|---|---|---|
| 8 | codebase-analyzer ×2 | Codebase analysis per layer (pass req-analyzer output, filtered to layer) |
| 9 | technical-designer | Backend Design Doc (with backend codebase-analyzer context) |
| 10 | code-verifier | Verify Backend Design Doc against existing code (its result JSON becomes prior_layer_verification for step 12) |
| 11 | document-reviewer | Review Backend Design Doc (pass step-10 result as code_verification and backend codebase-analyzer JSON as codebase_analysis). [Stop on critical issues] — structural defects here block step 12. |
| 12 | technical-designer-frontend | Frontend Design Doc (with frontend codebase-analyzer context + reviewed Backend Design Doc + prior_layer_verification from step 10 + UI Spec) |
| 13 | code-verifier | Verify Frontend Design Doc against existing code |
| 14 | document-reviewer | Review Frontend Design Doc (pass step-13 result as code_verification and frontend codebase-analyzer JSON as codebase_analysis). [Stop on critical issues] — structural defects here block step 15. |
| 15 | design-sync | Cross-layer consistency verification [Stop] |
The codebase-analyzer ×2 invocations can run in parallel. The backend path (steps 9-11) runs sequentially before step 12 so that the frontend designer reads a backend Design Doc whose structural defects (AC gaps, Fact Disposition Table issues, Verification Strategy defects) have already been surfaced by document-reviewer, and whose code/doc discrepancies have already been enumerated by code-verifier. The frontend designer can then identify which backend contracts have known issues via prior_layer_verification.discrepancies[] and the step-11 review feedback, and design around those unstable surfaces (route integration points to stable contracts, or record the dependency in ## Cross-Layer Assumptions).
Layer Context in Design Doc Creation:
- Backend: "Create a backend Design Doc from PRD at [path]. Codebase analysis: [JSON from codebase-analyzer for backend layer]. Focus on: API contracts, data layer, business logic, service architecture."
- Frontend: "Create a frontend Design Doc from PRD at [path]. Codebase analysis: [JSON from codebase-analyzer for frontend layer]. Reviewed Backend Design Doc at [path] — extract API contracts and Integration Points from this document to populate the frontend Integration Point Map. Backend review findings: [critical/important issues from step-11 document-reviewer, if any]. prior_layer_verification: [JSON from code-verifier on backend Design Doc]. Identify unstable backend contracts via prior_layer_verification.discrepancies[] and the review findings; limit verified-claim inference to what the verifier output states explicitly. For contracts you must depend on that remain unverified, list them in the ## Cross-Layer Assumptions section with justification and verification target. Reference UI Spec at [path] for component structure. Focus on: component hierarchy, state management, UI interactions, data fetching."
design-sync: Use frontend Design Doc as source. design-sync auto-discovers other Design Docs in docs/design/ for comparison.
Work Planning with Multiple Design Docs
Pass all Design Docs to work-planner with vertical slicing instruction:
- Provide all Design Doc paths explicitly
- Instruct: "Compose phases as vertical feature slices — each phase should contain both backend and frontend work for the same feature area, enabling early integration verification per phase."
Layer-Aware Agent Routing
During autonomous execution, route agents by task filename pattern:
| Filename Pattern | Executor | Quality Fixer |
|---|---|---|
*-task-* or *-backend-task-* | task-executor | quality-fixer |
*-frontend-task-* | task-executor-frontend | quality-fixer-frontend |
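The routing table above reduces to a simple filename check. This is a sketch (the function name is hypothetical); the patterns and agent names are the ones in the table.

```python
import re

def route_by_filename(task_filename):
    """Select (executor, quality_fixer) by task filename pattern,
    per the Layer-Aware Agent Routing table."""
    if re.search(r"-frontend-task-", task_filename):
        return ("task-executor-frontend", "quality-fixer-frontend")
    # *-task-* and *-backend-task-* both route to the default pair.
    return ("task-executor", "quality-fixer")
```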
Autonomous Execution Mode
Authority Delegation
After starting autonomous execution mode:
- Batch approval for entire implementation phase delegates authority to subagents
- task-executor: Implementation authority (can use Edit/Write)
- quality-fixer: Fix authority (automatic quality error fixes)
Step 2 Execution Details
- status: escalation_needed or status: blocked -> Escalate to user
- requiresTestReview is true -> Execute integration-test-reviewer
  - If verdict is needs_revision -> Re-invoke the routed executor (task-executor or task-executor-frontend per Layer-Aware Agent Routing) in Fix Mode with the same task_file and the requiredFixes[] array
  - If verdict is approved -> Proceed to quality-fixer
Conditions for Stopping Autonomous Execution
Stop autonomous execution and escalate to user in the following cases:
- Escalation from subagent
  - When receiving response with status: "escalation_needed"
  - When receiving response with status: "blocked"
- When requirement change detected
  - Any match in requirement change detection checklist
  - Stop autonomous execution and re-analyze with integrated requirements in requirement-analyzer
- When work-planner update restriction is violated
  - Requirement changes after task-decomposer starts require overall redesign
  - Restart entire flow from requirement-analyzer
- When user explicitly stops
  - Direct stop instruction or interruption
Prompt Construction Rule
Every subagent prompt must include:
- Input deliverables with file paths (from previous step or prerequisite check)
- Expected action (what the agent should do)
Construct the prompt from the agent's Input Parameters section and the deliverables available at that point in the flow.
Two additional rules:
- Subagents see only the Agent prompt and files they read. Include required paths, prior JSON, parameters, and scope constraints explicitly.
- Replace every [placeholder] in examples below with concrete values before invoking the Agent tool.
Call Example (codebase-analyzer)
- subagent_type: "codebase-analyzer"
- description: "Codebase analysis"
- prompt: "requirement_analysis: [JSON from requirement-analyzer]. prd_path: [path if exists]. requirements: [original user requirements]. Analyze the existing codebase and produce design guidance."
Call Example (code-verifier — design flow)
- subagent_type: "code-verifier"
- description: "Design Doc verification"
- prompt: "doc_type: design-doc document_path: [Design Doc path] Verify Design Doc against existing code."
My Main Roles as Orchestrator
- State Management: Grasp current phase, each subagent's state, and next action
- Information Bridging: Data conversion and transmission between subagents
  - Convert each subagent's output to next subagent's input format
  - Always pass deliverables from previous process to next agent
  - Extract necessary information from structured responses
  - Compose commit messages from changeSummary -> Execute git commit with Bash
  - Explicitly integrate initial and additional requirements when requirements change
codebase-analyzer → technical-designer
Pass to codebase-analyzer: requirement-analyzer JSON output, PRD path (if exists), and original user requirements. Pass to technical-designer: codebase-analyzer JSON output as additional context in the Design Doc creation prompt. Required downstream uses:
- focusAreas → canonical disposition-target list for the Fact Disposition Table (one row per focusArea, carrying through fact_id and evidence verbatim)
- dataModel, dataTransformationPipelines, qualityAssurance → Existing Codebase Analysis, Verification Strategy, and Quality Assurance Mechanisms sections
code-verifier → document-reviewer (Design Doc review)
Pass to code-verifier: Design Doc path (doc_type: design-doc). Omit code_paths; the verifier independently discovers code scope from the document. Pass to document-reviewer: code-verifier JSON output as the code_verification parameter, and the same codebase-analyzer JSON previously given to the designer as codebase_analysis. The reviewer uses codebase_analysis.focusAreas to verify Fact Disposition Table coverage.
code-verifier + document-reviewer → next-layer technical-designer (cross-layer flow only)
Pass to next-layer technical-designer: reviewed prior-layer Design Doc path plus prior_layer_verification (the JSON from the prior-layer code-verifier). See the Cross-Layer Orchestration section for sequencing. Use prior_layer_verification.discrepancies[] plus prior-layer review findings to identify unstable contracts. Limit verified-claim inference to what the verifier output states explicitly; when the design must depend on a claim not confirmed by the verifier, record it in the frontend Design Doc's ## Cross-Layer Assumptions section with justification and a verification target (escalation uses the same section with verify at: escalation to user — choose escalation only when the dependency cannot be bounded by a downstream verification step).
technical-designer → work-planner
Pass to work-planner: Design Doc path. Work-planner scans all Design Doc sections and extracts technical requirements per its Step 5 categories (impl-target, connection-switching, contract-change, verification, prerequisite), then produces a Design-to-Plan Traceability table.
Gap handling (orchestrator responsibility): If work-planner outputs a draft plan containing gap entries, the orchestrator MUST:
- Present the gap entries to the user with justifications
- Keep the plan in draft status until the user confirms each gap
- Do NOT pass the plan to downstream agents (task-decomposer, etc.) until all gaps are resolved or confirmed
Unjustified gaps are errors — return to work-planner to add covering tasks or justification.
*1 acceptance-test-generator → work-planner
Pass to acceptance-test-generator: Design Doc path; UI Spec path (if exists).
Orchestrator verification: Every non-null generatedFiles.<lane> path exists on disk. For each null lane, e2eAbsenceReason.<lane> is present — this is intentional absence, not an error.
Pass to work-planner: integration / fixture-e2e / service-integration-e2e file paths (or null per lane), per-lane absence reasons, plus timing guidance — integration tests are created alongside each phase implementation, fixture-e2e tests are created alongside the UI feature phase, and service-integration-e2e tests are executed only in the final phase.
On error: Escalate to user when status != completed and integration file generation failed unexpectedly. A null E2E lane with a valid absence reason is not an error.
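The lane-verification check can be sketched as follows. The function name is hypothetical; the generatedFiles and e2eAbsenceReason field names are the ones this guide specifies.

```python
import os

def verify_generated_lanes(response):
    """Sketch of the orchestrator check: each non-null lane path must exist
    on disk; each null lane must carry an absence reason. Returns error strings."""
    errors = []
    for lane, path in response["generatedFiles"].items():
        if path is not None:
            if not os.path.exists(path):
                errors.append(f"{lane}: missing file {path}")
        elif not response.get("e2eAbsenceReason", {}).get(lane):
            errors.append(f"{lane}: null without absence reason")
    return errors
```

An empty return list means all lanes are either materialized on disk or intentionally absent, so the paths and reasons can be handed to work-planner.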
- ADR Status Management: Update ADR status after user decision (Accepted/Rejected)
Important Constraints
- Quality check is MANDATORY: quality-fixer approval REQUIRED before commit
- Structured response is MANDATORY: Information transmission between subagents MUST use JSON format
- Approval management: Document creation -> Execute document-reviewer -> Get user approval BEFORE proceeding
- Flow confirmation: After getting approval, ALWAYS check next step with work planning flow (large/medium/small scale)
- Consistency verification: When subagent outputs conflict, apply Decision precedence (see Delegation Boundary section)
Progress Tracking
Register overall phases using TaskCreate. Update each phase with TaskUpdate as it completes.
Post-Implementation Verification Pass/Fail Criteria
| Verifier | Pass | Fail | Blocked |
|---|---|---|---|
| code-verifier | status is consistent or mostly_consistent | status is needs_review or inconsistent | — |
| security-reviewer | status is approved or approved_with_notes | status is needs_revision | status is blocked → Escalate to user |
Re-run rule: After fix cycle, re-run only verifiers that returned fail. Verifiers that passed on the previous run are not re-run. Maximum 2 fix cycles — if still failing after 2 cycles, escalate to user with remaining findings.
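The re-run rule can be sketched as a bounded loop. The function shape and the `run_verifier` callable are hypothetical; the only-failed-verifiers-re-run and two-cycle limits are the rules stated above.

```python
def rerun_failed_verifiers(run_verifier, verifiers, max_cycles=2):
    """Sketch of the re-run rule: verifiers that pass drop out of the set;
    only failures are re-run after each fix cycle, up to max_cycles."""
    pending = list(verifiers)
    for _ in range(max_cycles):
        pending = [v for v in pending if run_verifier(v) == "fail"]
        if not pending:
            return ("done", [])
        # ...a fix cycle (executor in Fix Mode + quality-fixer) would run
        # here before the next verification pass...
    return ("escalate", pending)  # still failing after max_cycles fix cycles
```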