# Agent SDK Subagents

Orchestrate multiple specialized agents for complex workflows.
## Why Subagents?

- **Context isolation** - Separate contexts prevent information overload
- **Parallelization** - Multiple agents work concurrently
- **Specialization** - Each agent has focused instructions and tools
- **Tool restrictions** - Limit what each agent can do
## Defining Subagents (Programmatic)

```typescript
import { query } from "@anthropic-ai/claude-agent-sdk";

for await (const message of query({
  prompt: "Review this codebase for security issues and performance problems",
  options: {
    agents: {
      "security-reviewer": {
        description: "Expert security code reviewer",
        prompt: `You are a security specialist. Focus on:
- SQL injection vulnerabilities
- XSS attack vectors
- Authentication/authorization flaws
- Sensitive data exposure
- Input validation issues`,
        tools: ["Read", "Grep", "Glob"], // Read-only
        model: "sonnet"
      },
      "performance-analyst": {
        description: "Performance optimization expert",
        prompt: `You are a performance specialist. Focus on:
- N+1 query patterns
- Memory leaks
- Inefficient algorithms
- Caching opportunities
- Async/await bottlenecks`,
        tools: ["Read", "Grep", "Glob"],
        model: "sonnet"
      }
    }
  }
})) {
  // Main agent delegates to subagents automatically
}
```
## Subagent Configuration

```typescript
interface SubagentConfig {
  description: string;  // When to use this agent
  prompt: string;       // System instructions
  tools?: string[];     // Allowed tools (default: all)
  model?: string;       // Model override (sonnet, opus, haiku)
}
```
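In the Python SDK the same fields are passed as a plain dictionary. As a sketch of the constraints above, a hypothetical validator might look like the following — it is not part of the SDK, and the tool and model name sets are assumptions for illustration:

```python
# Hypothetical validator mirroring the SubagentConfig fields above.
# KNOWN_TOOLS and KNOWN_MODELS are assumed sets, not SDK constants.
KNOWN_TOOLS = {"Read", "Write", "Edit", "Grep", "Glob", "Bash", "WebFetch", "WebSearch"}
KNOWN_MODELS = {"sonnet", "opus", "haiku"}

def validate_subagent(config: dict) -> list[str]:
    """Return a list of problems; an empty list means the config looks well-formed."""
    problems = []
    if not config.get("description"):
        problems.append("description is required (tells the main agent when to delegate)")
    if not config.get("prompt"):
        problems.append("prompt is required (the subagent's system instructions)")
    for tool in config.get("tools", []):  # tools is optional; default is all tools
        if tool not in KNOWN_TOOLS:
            problems.append(f"unknown tool: {tool}")
    model = config.get("model")
    if model is not None and model not in KNOWN_MODELS:
        problems.append(f"unknown model: {model}")
    return problems
```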
## Filesystem-Based Agents

Create agents in `.claude/agents/`:

```
.claude/agents/
├── code-reviewer.md
├── test-writer.md
└── documentation/
    └── AGENT.md
```
Example: `.claude/agents/code-reviewer.md`

```markdown
---
name: code-reviewer
description: Reviews code for quality, security, and best practices
allowed-tools: Read, Grep, Glob
model: sonnet
---

# Code Review Agent

You are an expert code reviewer. When reviewing code:

## Focus Areas

1. **Code Quality**
   - Clear naming conventions
   - Proper error handling
   - DRY principles
2. **Security**
   - Input validation
   - Authentication checks
   - Data sanitization
3. **Performance**
   - Algorithmic efficiency
   - Resource management
   - Caching opportunities

## Output Format

For each issue found:
- File and line number
- Issue description
- Severity (critical/warning/info)
- Suggested fix
```
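An agent file is frontmatter plus a markdown body that becomes the system prompt. A minimal sketch of splitting such a file into fields and prompt, assuming only the simple `key: value` frontmatter shown above (no external YAML library):

```python
# Sketch: split an agent .md file into (frontmatter fields, body prompt).
# Assumes the simple "key: value" frontmatter form shown in the example above.
def parse_agent_file(text: str) -> tuple[dict, str]:
    lines = text.splitlines()
    assert lines[0] == "---", "agent files start with a frontmatter block"
    end = lines.index("---", 1)          # closing delimiter of the frontmatter
    fields = {}
    for line in lines[1:end]:
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    body = "\n".join(lines[end + 1:]).strip()
    return fields, body
```

Usage: `parse_agent_file(Path(".claude/agents/code-reviewer.md").read_text())` would yield `{"name": "code-reviewer", ...}` and the `# Code Review Agent` body.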
## Parallel Execution

Subagents run concurrently when delegated:

```typescript
// Main agent prompt triggers parallel execution
const prompt = `
Analyze this codebase:
1. Have the security-reviewer check for vulnerabilities
2. Have the performance-analyst identify bottlenecks
3. Summarize findings from both

Run the reviews in parallel for efficiency.
`;
```
## Tool Restrictions

Limit what subagents can do:

```typescript
agents: {
  // Read-only analyst
  "analyzer": {
    description: "Analyzes code without modifications",
    prompt: "...",
    tools: ["Read", "Grep", "Glob"] // No Write, Edit, Bash
  },
  // Full-access fixer
  "fixer": {
    description: "Fixes identified issues",
    prompt: "...",
    tools: ["Read", "Write", "Edit", "Bash"]
  }
}
```
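One way to sanity-check a restriction before wiring it up is a hypothetical read-only predicate — the set of mutating tools here is an assumption for illustration, not an SDK constant:

```python
# Hypothetical check: does this agent's tool list exclude every tool
# that can modify files or run commands? (MUTATING_TOOLS is an assumption.)
MUTATING_TOOLS = {"Write", "Edit", "Bash"}

def is_read_only(config: dict) -> bool:
    tools = config.get("tools")
    if tools is None:  # no restriction means all tools, including mutating ones
        return False
    return not (set(tools) & MUTATING_TOOLS)
```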
## Model Selection

Use different models for different tasks:

```typescript
agents: {
  // Complex reasoning
  "architect": {
    description: "Designs system architecture",
    prompt: "...",
    model: "opus" // Most capable
  },
  // Standard tasks
  "implementer": {
    description: "Implements features",
    prompt: "...",
    model: "sonnet" // Balanced
  },
  // Simple/fast tasks
  "formatter": {
    description: "Formats and lints code",
    prompt: "...",
    model: "haiku" // Fast and cheap
  }
}
```
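The tiering above can be captured in a small hypothetical helper — the complexity labels are illustrative, not an SDK concept:

```python
# Hypothetical helper mapping task complexity to the model tiers above.
def pick_model(complexity: str) -> str:
    return {
        "simple": "haiku",     # fast and cheap
        "standard": "sonnet",  # balanced
        "complex": "opus",     # most capable
    }[complexity]
```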
## Subagent Communication Pattern

The main agent orchestrates; subagents report back:

```typescript
const prompt = `
You are coordinating a code review. Use these agents:
1. **security-reviewer**: Check for security vulnerabilities
2. **test-writer**: Write missing unit tests
3. **documentation**: Update README and docstrings

Process:
1. First, have security-reviewer scan for issues
2. Have test-writer add tests for critical paths
3. Have documentation update based on changes
4. Compile a final summary report
`;
```
## Python Subagents

```python
import asyncio

from claude_agent_sdk import query, ClaudeAgentOptions

options = ClaudeAgentOptions(
    agents={
        "researcher": {
            "description": "Researches topics and gathers information",
            "prompt": "You are a research specialist...",
            "tools": ["Read", "WebFetch", "WebSearch"],
            "model": "sonnet"
        },
        "writer": {
            "description": "Writes content based on research",
            "prompt": "You are a technical writer...",
            "tools": ["Read", "Write"],
            "model": "sonnet"
        }
    }
)

async def main():
    async for message in query(
        prompt="Research and write a guide about async Python",
        options=options
    ):
        print(message)

asyncio.run(main())
```
## Best Practices

- **Clear descriptions** - Help the main agent know when to delegate
- **Focused prompts** - Each agent does one thing well
- **Minimal tools** - Grant only the permissions each agent needs
- **Model matching** - Use a model appropriate to task complexity
- **Parallel when possible** - Run independent tasks concurrently
- **Sequential when needed** - Chain dependent tasks
## Example: Full Review Pipeline

```typescript
const options = {
  agents: {
    "type-checker": {
      description: "Runs TypeScript type checking",
      prompt: "Run tsc and report type errors",
      tools: ["Bash", "Read"],
      model: "haiku"
    },
    "linter": {
      description: "Runs ESLint and reports issues",
      prompt: "Run eslint and categorize issues",
      tools: ["Bash", "Read"],
      model: "haiku"
    },
    "test-runner": {
      description: "Runs tests and analyzes failures",
      prompt: "Run tests, analyze failures, suggest fixes",
      tools: ["Bash", "Read"],
      model: "sonnet"
    },
    "security-scanner": {
      description: "Scans for security vulnerabilities",
      prompt: "Check for OWASP Top 10 vulnerabilities",
      tools: ["Read", "Grep"],
      model: "sonnet"
    }
  }
};

// Main agent coordinates all reviews
const prompt = `
Run a comprehensive code review:
1. Type check the codebase (type-checker)
2. Run linting (linter)
3. Execute tests (test-runner)
4. Security scan (security-scanner)

Run steps 1-2 in parallel (independent).
Then run 3-4 in parallel.
Compile all findings into a prioritized report.
`;
```