---
name: research
description: "Deep research skill for technical topics. Invoke when user asks to research a technology, library, architecture pattern, or solution approach. Uses iterative multi-source search: each round informs the next search query. Outputs structured report with findings, analysis, trade-offs, and recommendation. Modes: shallow (quick overview), standard (3-5 sources), deep (iterative until convergence)."
allowed-tools:
  - Bash
  - WebSearch
  - WebFetch
  - Read
  - Grep
metadata:
  token-cost-tier: instruction
  token-cost-estimate: 5000
---
# /research — Iterative Multi-Source Technical Research
## Usage
```
/research <topic> [--depth shallow|standard|deep] [--focus <angle>] [--output report|bullets|compare]
```
- `<topic>` — what to research (e.g., "FastAPI vs Django for async APIs", "Redis pub/sub patterns")
- `--depth shallow` — 1 round, 2-3 sources, ~1500 tok
- `--depth standard` — 2 rounds, 4-6 sources, ~5000 tok (default)
- `--depth deep` — iterative until convergence, 6-10+ sources, ~12000 tok
- `--focus <angle>` — lens to apply: performance | security | scalability | implementation | comparison
- `--output report` — full structured report (default)
- `--output bullets` — concise bullet list
- `--output compare` — comparison table
## Core Methodology (from iterative research principles)
Each round:
- Search → read top results
- Extract key facts, open questions, contradictions
- Identify what you still don't know
- Form next-round queries targeting the gaps
- Repeat until no new material information is found (deep mode) or round limit reached
Source hierarchy (prefer in order):
1. Official documentation (highest trust)
2. GitHub issues/PRs (real-world problems)
3. Engineering blogs from companies that use the tech at scale
4. Benchmarks with methodology disclosed
5. Community discussion (Stack Overflow, Reddit, Discord) — lowest trust, only for "gotchas"
## Step 0 — Announce Plan
```
Research Plan: <topic>
Depth: <shallow|standard|deep>
Round 1: Initial survey (~N sources)
Round 2: Gap-filling (~N sources) [if standard/deep]
...
Estimated: ~<N> tokens
```
## Step 1 — Local Scan First (free, fast)
Before searching the web, check if we already have relevant indexed knowledge:
```bash
# Check if library is already indexed
ls .claude/lib-index/ 2>/dev/null | grep -i "<topic-keyword>"

# Check project knowledge for relevant context
grep -rl "<topic-keyword>" .claude/knowledge/ 2>/dev/null | head -5
```
If found: load the index first, note what's already known, then search for gaps only.
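A minimal way to pull a hit into context before going to the web (a sketch; the filenames are illustrative, substitute whatever the scan actually returned):
```bash
# Skim the indexed summary so Round 1 queries only target what it doesn't cover.
# "fastapi.md" is a placeholder for the file found by the scan above.
head -80 .claude/lib-index/fastapi.md

# Pull surrounding context from project knowledge that mentions the topic.
grep -rhi -A 3 "fastapi" .claude/knowledge/ 2>/dev/null | head -40
```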
## Step 2 — Round 1: Initial Survey
Form 2-3 targeted search queries. Prefer specific queries over broad ones.
Query construction rules:
- Include version/year if relevant: "FastAPI 0.115 async patterns 2024" (a version-check sketch follows this list)
- Include context: "Redis pub/sub vs Kafka when to use"
- Include problem frame: "PostgreSQL connection pooling production gotchas"
For each source found:
- Fetch the page
- Extract: core concept, key facts, caveats, version requirements
- Note source type (official docs / engineering blog / benchmark / community)
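If WebFetch is unavailable or a page is too heavy to pull in whole, a rough command-line skim can stand in (a sketch; assumes `curl` is installed, and the URL is only an example):
```bash
# Strip tags, drop blank lines, keep only the top of the page;
# enough to pull key facts and caveats without loading the full document.
curl -sL "https://fastapi.tiangolo.com/async/" \
  | sed -e 's/<[^>]*>//g' \
  | grep -v '^[[:space:]]*$' \
  | head -120
```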
For code-related research, also check GitHub:
```bash
# Search for real-world usage patterns
gh search repos "<topic> language:typescript stars:>100" --limit 5 2>/dev/null | head -10

# Check for known issues
gh search issues "<topic> is:open label:bug" --limit 5 2>/dev/null | head -10
```
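A quick maintenance check on a candidate library is often worth one more command (a sketch, assuming `gh` is authenticated; the repo name is illustrative):
```bash
# Recent releases are a decent proxy for how actively the project is maintained.
gh release list --repo fastapi/fastapi --limit 3 2>/dev/null

# Stars and last-push date give a quick popularity / recency read.
gh repo view fastapi/fastapi --json stargazerCount,pushedAt 2>/dev/null
```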
Extract from each source:
- Main recommendation / approach
- Trade-offs mentioned
- Caveats / gotchas
- Contradictions with other sources
## Step 3 — Gap Analysis (between rounds)
After Round 1, identify:
```
## Gap Analysis after Round 1
Known:
- <fact 1 with source>
- <fact 2 with source>

Unknown / conflicting:
- <open question 1> → will search in Round 2
- <conflicting claim between source A and B> → needs resolution

Reliability notes:
- <source X> claims Y but no benchmark cited — unverified
```
At shallow depth, stop here. At standard or deep, continue to Round 2.
## Step 4 — Round 2+: Gap-Filling
Form new queries specifically targeting the gaps identified.
Focus on:
- Resolving contradictions (find the canonical source)
- Finding real-world scale examples for unverified claims
- Locating benchmarks or case studies
Repeat rounds until:
- No meaningful new information in last round (deep mode convergence)
- Round limit reached (standard: 2 rounds, deep: up to 4 rounds)
## Step 5 — Synthesis
If --output report (default):
```
# Research Report: <topic>
Date: <date> | Depth: <depth> | Sources: <N>

## TL;DR (3 sentences max)
<core finding>

## Key Findings

### Finding 1: <name>
<explanation>
Source: [<name>](<url>) — <credibility: official|blog|benchmark|community>

### Finding 2: <name>
...

## Trade-offs
| Aspect | Option A | Option B |
|--------|----------|----------|
| Performance | ... | ... |
| Scalability | ... | ... |
| Complexity | ... | ... |

## Gotchas & Caveats
- <gotcha 1> (Source: X)
- <gotcha 2> (Source: Y)

## Recommendation
For <your context>, use <X> because <reason>.
If <alternative condition>, consider <Y> instead because <reason>.

## What This Research Can't Answer
- <question that needs real-world testing>
- <version-specific behavior not found in docs>

## Sources
1. [<title>](<url>) — <type> — fetched <date>
...
```
If --output bullets:
```
## <topic> — Research Summary
- <key fact 1>
- <key fact 2>
- **Recommendation:** <one line>
- **Watch out:** <main gotcha>
```
If --output compare:
```
## <option A> vs <option B>
| Criterion | <A> | <B> | Winner |
|-----------|-----|-----|--------|
...
```
## Honesty Rules
- Never state something as fact without a source. If you infer, say "likely" or "based on X".
- Never invent API signatures or method names. If you're not sure an API exists, say so.
- Only contradict official docs when real-world evidence is stronger, and cite both.
- Flag version-specific information: "As of v2.3, this behavior changed"
- "What this research can't answer" section is mandatory — it prevents overconfidence.
## Example Queries
```
/research "FastAPI vs Django REST for high-throughput async API" --depth standard --focus performance
/research "Redis pub/sub patterns for real-time notifications" --depth deep --output compare
/research "PostgreSQL row-level security with multi-tenant SaaS" --depth standard --focus security
/research "React Server Components vs client-side fetching tradeoffs" --depth shallow --output bullets
```