---
name: feasibility-study
description: >
  Comprehensive product feasibility study with effort and cost estimation.
  Analyzes technical, economic, market, operational, and schedule feasibility
  using the TELOS framework. Produces a structured report with go/no-go
  recommendation and confidence levels. Use when asked to "feasibility study",
  "estimate cost to build", "is this worth building", "effort estimate",
  "cost-benefit analysis", "should we build this", or "how much would it cost
  to build". Proactively suggest when a user describes a product idea and asks
  whether it is viable, how long it would take, or how much it would cost.
user-invocable: true
argument-hint: <product-or-url> [--depth quick|standard|deep]
---
# Feasibility Study
Perform a comprehensive product feasibility study. Analyze the product across five dimensions (TELOS), estimate effort and cost, assess risks, and produce a structured report with a go/no-go recommendation.
## Depth Modes
Parse `$ARGUMENTS` for the `--depth` flag. Default to `standard` if not specified.
| Mode | Scope | Estimation Method |
|---|---|---|
| quick | Technical + Economic only, skip research | T-shirt sizing |
| standard | Full TELOS analysis with research | Function points |
| deep | Full TELOS + detailed TCO + market sizing | COCOMO II + full TCO |
## Phase 0: Parse Input and Configure

- Parse `$ARGUMENTS` to extract:
  - Product name or URL (first positional argument)
  - Depth mode (`--depth quick|standard|deep`, default: `standard`)
- If a URL is provided, fetch it with WebFetch to understand the product
- If a product description is provided, confirm your understanding
- Print: `Starting feasibility study: {product} | Depth: {mode}`
## Phase 1: Discovery
Ask the user 3-5 targeted questions to understand the opportunity. Use AskUserQuestion for each. Smart-skip: if the URL/description already answers a question, skip it and state what you inferred.
Required context (must have all before proceeding):
- Product: What is the product? What does it do? (pre-fill from URL analysis)
- Users: Who is the target user? What problem does it solve for them?
- Business model: How will it make money? (SaaS, marketplace, one-time, etc.)
- Constraints: Budget range, timeline expectations, team size/skills available?
- Success metrics: What does success look like? Scale targets?
After gathering answers, print a brief summary:

```
## Understanding
- Product: ...
- Target users: ...
- Business model: ...
- Constraints: ...
- Success metrics: ...
```
### Gate 1: Sufficient Context
Verify you have minimum viable context to proceed. Checklist:
- Product is clearly defined
- Target user is identified
- Business model is understood (or explicitly "to be determined")
- At least one constraint is known
If any critical gap exists, ask ONE targeted follow-up question. Do not proceed until the checklist passes.
## Phase 2: Research

Skip this phase for `--depth quick`.
Use WebSearch to research:
- Competitive landscape: Find 3-5 direct competitors or similar products
- Market signals: Market size indicators, growth trends, recent funding
- Technical precedents: Open source projects, published architectures, known challenges
For each competitor found, note: name, URL, pricing, key differentiators.
If WebSearch is unavailable, note the limitation and proceed with the information provided by the user. Do NOT block on research availability.
Print a brief research summary before proceeding.
## Phase 3: TELOS Analysis
Analyze each dimension. Read the corresponding reference file on demand. Score each dimension 1-5 and provide evidence for the score.
### 3a. Technical Feasibility

Read `references/technical-analysis.md` for the framework.
Analyze:
- Architecture complexity: Classify as Simple / Moderate / Complex / Extreme
- Core components: List major technical components needed
- Technology stack: Recommend stack, flag any unproven technologies
- Integration complexity: External APIs, data sources, third-party services
- Scalability path: What changes at 10x, 100x scale
- Technical unknowns: What needs prototyping or proof-of-concept
Output: Technical Feasibility Score (1-5) with one-paragraph rationale.
### 3b. Economic Feasibility

Read `references/cost-estimation.md` for frameworks and rate cards.
For quick mode: Use T-shirt sizing only.
For standard mode: Use function point estimation.
For deep mode: Run `scripts/estimate.py` with COCOMO parameters.
Estimate three separate dimensions:
- Effort (person-months): Total work regardless of who does it
- Calendar time (months): Wall-clock time based on team size
- Cash cost ($): Actual money spent — founder sweat equity = $0
Break down by:
- Development effort: By component/phase, with AI adjustment
- Team composition: Founders (sweat equity) vs. contractors (cash cost)
- Infrastructure cost: Monthly/annual hosting, third-party services
- Operating cost: Year 1, Year 2, Year 3 projections
- Hidden costs: Walk through the hidden costs checklist (mandatory)
- Revenue projection: Based on business model and market size
- ROI analysis: Payback period, 3-year ROI
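The payback and ROI figures above reduce to simple arithmetic; a minimal sketch with illustrative numbers (not output from the actual `estimate.py` script):

```python
def payback_months(cash_cost: float, monthly_net_revenue: float) -> float:
    """Months until cumulative net revenue covers the upfront cash cost."""
    if monthly_net_revenue <= 0:
        return float("inf")  # never pays back
    return cash_cost / monthly_net_revenue

def three_year_roi(cash_cost: float, yearly_net: list[float]) -> float:
    """3-year ROI: (total net gain - cost) / cost."""
    return (sum(yearly_net) - cash_cost) / cash_cost

# Hypothetical: $120K build cost, $10K/mo net revenue,
# yearly net of $60K / $120K / $180K
print(payback_months(120_000, 10_000))                       # 12.0 months
print(three_year_roi(120_000, [60_000, 120_000, 180_000]))   # 2.0 -> 200% ROI
```

Report these as ranges in the final output, per the "no false precision" rule.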
The script supports team configuration:

```bash
# Solo founder + Claude Code, no cash cost
echo '{"operation": "function_points", "unadjusted_fp": 200, "ai_level": "very_high", "contractor_count": 0}' | python3 ${CLAUDE_SKILL_DIR}/scripts/estimate.py

# Founder + 1 contractor at $10K/mo
echo '{"operation": "function_points", "unadjusted_fp": 200, "ai_level": "high", "contractor_count": 1, "contractor_rate": 10000}' | python3 ${CLAUDE_SKILL_DIR}/scripts/estimate.py
```
All estimation script operations support AI productivity adjustment. Always ask the user about their AI tooling and apply the appropriate multiplier:

```bash
# With AI assistance level (named)
echo '{"operation": "cocomo", "kloc": <kloc>, "mode": "semi-detached", "ai_level": "high"}' | python3 ${CLAUDE_SKILL_DIR}/scripts/estimate.py

# With custom AI multiplier (0.0-1.0)
echo '{"operation": "function_points", "unadjusted_fp": 320, "complexity": "complex", "ai_multiplier": 0.4}' | python3 ${CLAUDE_SKILL_DIR}/scripts/estimate.py
```

AI levels: `none` (0%), `low` (20%), `moderate` (35%), `high` (50%), `very_high` (65%)
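To make the adjustment concrete, here is how a named AI level could combine with a basic COCOMO estimate. The COCOMO coefficients below are the classic published ones; the actual `estimate.py` implementation may differ, so treat this as an illustration of the mechanics only:

```python
# Named AI-assistance levels -> effort reduction (from the table above)
AI_LEVELS = {"none": 0.0, "low": 0.20, "moderate": 0.35,
             "high": 0.50, "very_high": 0.65}

# Classic COCOMO (a, b) coefficients by development mode
COCOMO_MODES = {"organic": (2.4, 1.05),
                "semi-detached": (3.0, 1.12),
                "embedded": (3.6, 1.20)}

def cocomo_effort(kloc: float, mode: str = "semi-detached",
                  ai_level: str = "none") -> float:
    """Person-months = a * KLOC^b, reduced by the AI-assistance multiplier."""
    a, b = COCOMO_MODES[mode]
    return a * kloc ** b * (1 - AI_LEVELS[ai_level])

# 20 KLOC semi-detached project with high AI assistance: ~43 person-months
print(round(cocomo_effort(20, "semi-detached", "high"), 1))
```

A "high" level halves the raw estimate, which is why eliciting the user's actual AI tooling matters before quoting numbers.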
Output: Economic Feasibility Score (1-5) with cost summary table.
### 3c. Market Feasibility

For quick mode: Brief competitive check only, no deep market sizing.

Read `references/market-analysis.md` for the framework.
Analyze:
- Market size: TAM / SAM / SOM estimates
- Competitive landscape: Position vs. competitors found in Phase 2
- Differentiation: What makes this different? Is it defensible?
- Moat assessment: None / Shallow / Deep — with justification
- Timing: Too early / Right time / Too late
Output: Market Feasibility Score (1-5) with positioning summary.
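A top-down TAM / SAM / SOM estimate is just successive narrowing of the market; a sketch with entirely hypothetical shares (real percentages come from the Phase 2 research):

```python
def market_funnel(tam: float, serviceable_share: float,
                  obtainable_share: float) -> tuple[float, float]:
    """Top-down funnel: TAM -> SAM (reachable segment) -> SOM (realistic capture)."""
    sam = tam * serviceable_share
    som = sam * obtainable_share
    return sam, som

# Hypothetical: $5B TAM, 10% serviceable, 2% realistically obtainable
sam, som = market_funnel(5_000_000_000, 0.10, 0.02)
print(f"SAM: ${sam:,.0f}, SOM: ${som:,.0f}")  # SAM: $500,000,000, SOM: $10,000,000
```

Always state the shares and their sources alongside the result; the output is only as credible as the two multipliers.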
### 3d. Operational Feasibility
Analyze:
- Team requirements: Roles, skills, hiring difficulty
- Process requirements: Development methodology, release cadence
- Support model: Customer support needs, SLA expectations
- Organizational readiness: Does the team have the skills? What gaps exist?
Output: Operational Feasibility Score (1-5) with team summary.
### 3e. Schedule Feasibility
Analyze:
- Phase breakdown: Discovery → MVP → Beta → Launch → Scale
- Milestone timeline: Calendar estimates for each phase
- Critical path: What must happen sequentially vs. in parallel
- Dependencies: External dependencies, blockers, long-lead items
- Schedule risks: What could cause delays
Output: Schedule Feasibility Score (1-5) with timeline table.
### 3f. Automation Feasibility

Read `references/automation-analysis.md` for the framework.
Analyze each operational function of the product for automation potential:
- Component automation audit: For every major component/function, classify as Fully Automatable / Partially Automatable / Requires Human
- Data pipeline automation: Can data collection, processing, and delivery run unattended? What breaks require human intervention?
- Quality assurance automation: Can correctness be validated programmatically (self-validating loops) or does it need human judgment?
- Customer lifecycle automation: Onboarding, billing, support — can the full customer journey be self-serve?
- Maintenance automation: Can ongoing upkeep (updates, monitoring, fixes) be handled by scheduled jobs + AI, or does it need manual attention?
- Human intervention points: List every remaining point where a human must intervene. For each, assess if AI can eventually replace it.
- Solo operator viability: Can one person run this product at scale with AI assistance, or does growth require proportional headcount?
Output: Automation Feasibility Score (1-5) with component automation table.
Scoring guide:
- 5: Fully automatable — runs unattended, self-heals, scales without headcount
- 4: Near-full automation — occasional human check-ins (weekly), AI handles rest
- 3: Partially automated — regular human tasks (daily), but core ops are automated
- 2: Heavily manual — most operations require human involvement
- 1: Not automatable — human-intensive at every step
### Gate 2: Analysis Complete
Verify all dimensions are scored:
- Technical score assigned (1-5) with evidence
- Economic score assigned (1-5) with cost estimates
- Market score assigned (1-5) — skip detailed for quick mode
- Operational score assigned (1-5)
- Schedule score assigned (1-5) with timeline
- Automation score assigned (1-5) with component table
- Confidence level noted for each score (Low / Medium / High)
Do not proceed until all scores are assigned.
## Phase 4: Risk Assessment

Skip this phase for `--depth quick`.

Read `references/risk-assessment.md` for the framework.
- Identify the top 5-10 risks across all TELOS dimensions
- Score each risk: Likelihood (1-5) x Impact (1-5) = Risk Score
- Classify: Critical (20-25) / High (12-19) / Medium (6-11) / Low (1-5)
- For every Critical and High risk, propose a specific mitigation strategy
- Check against the Red Flags checklist
Output: Risk matrix table sorted by risk score (highest first).
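The likelihood-times-impact scoring and the classification bands above can be sketched as follows (the risk names are made up for illustration):

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Risk score on a 1-25 scale from 1-5 likelihood and 1-5 impact."""
    return likelihood * impact

def classify(score: int) -> str:
    """Map a score to the bands defined above."""
    if score >= 20:
        return "Critical"
    if score >= 12:
        return "High"
    if score >= 6:
        return "Medium"
    return "Low"

# Hypothetical risks: (name, likelihood, impact)
risks = [("Key API shuts down", 3, 5),
         ("Hiring delay", 4, 3),
         ("Cloud cost overrun", 2, 2)]

# Sort descending by score, as the output format requires
matrix = sorted(((name, risk_score(l, i)) for name, l, i in risks),
                key=lambda r: -r[1])
for name, score in matrix:
    print(f"{name}: {score} ({classify(score)})")
```

Note the bands are contiguous over 1-25, so every score classifies unambiguously.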
## Phase 5: Synthesis and Recommendation

1. Calculate weighted overall score:
   - Technical: 20% weight
   - Economic: 20% weight
   - Market: 15% weight
   - Operational: 10% weight
   - Schedule: 10% weight
   - Automation: 25% weight
2. Map to recommendation:
   - Score >= 3.5: GO — proceed with development
   - Score 2.5-3.49: CONDITIONAL GO — proceed if conditions are met
   - Score < 2.5: NO-GO — do not proceed as planned
3. State:
   - Overall score and recommendation
   - Confidence level (Low / Medium / High)
   - Top 3 key assumptions that could change the recommendation
   - 3-5 concrete recommended next steps
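The weighted-score calculation and recommendation mapping above can be sketched as (example scores are hypothetical):

```python
# Dimension weights from the synthesis step; they sum to 1.0
WEIGHTS = {"technical": 0.20, "economic": 0.20, "market": 0.15,
           "operational": 0.10, "schedule": 0.10, "automation": 0.25}

def overall_score(scores: dict[str, float]) -> float:
    """Weighted average of the six dimension scores (each 1-5)."""
    return sum(scores[dim] * w for dim, w in WEIGHTS.items())

def recommendation(score: float) -> str:
    if score >= 3.5:
        return "GO"
    if score >= 2.5:
        return "CONDITIONAL GO"
    return "NO-GO"

scores = {"technical": 4, "economic": 3, "market": 3,
          "operational": 4, "schedule": 3, "automation": 5}
total = overall_score(scores)
print(f"{total:.2f} -> {recommendation(total)}")  # 3.80 -> GO
```

Because automation carries the largest weight (25%), a product that scores well everywhere but cannot run without proportional headcount can still fall below the GO threshold.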
## Phase 6: Report Presentation and Discussion

Read `references/report-template.md` for the full output format.

### Step 1: Save the full report

Generate the complete feasibility report and save it to `feasibility-report-{product-name}-{YYYY-MM-DD}.md` in the current working directory.
### Step 2: Present inline summary with scorecard

Print the following formatted summary directly to the user:
---
## Feasibility Study: {Product Name}
### Verdict: {GO | NO-GO | CONDITIONAL GO}
| Dimension | Score | Confidence | Key Finding |
|--------------|-------|------------|------------------------------------|
| Technical | {X}/5 | {L/M/H} | {one-line finding} |
| Economic | {X}/5 | {L/M/H} | {one-line finding} |
| Market | {X}/5 | {L/M/H} | {one-line finding} |
| Operational | {X}/5 | {L/M/H} | {one-line finding} |
| Schedule | {X}/5 | {L/M/H} | {one-line finding} |
| Automation | {X}/5 | {L/M/H} | {one-line finding} |
| **Overall** | **{X.X}/5** | | |
### Key Numbers
- **Effort:** {X} person-months ({AI level}, {reduction}% AI reduction)
- **Calendar time:** {X} months (with {N} developer(s))
- **Cash cost:** ${X} (development labor + tools, excludes infra)
- **Annual operating cost:** ${range} (Year 1)
- **Payback period:** {range}
Note: Effort = total work. Calendar time = wall-clock months (depends on
team size). Cash cost = actual money spent (sweat equity = $0).
### Top 3 Risks
1. {risk} (Score: {X}) — {mitigation}
2. {risk} (Score: {X}) — {mitigation}
3. {risk} (Score: {X}) — {mitigation}
---
### Step 3: Walk through key discussion points
After presenting the summary, walk the user through each key point interactively. Use AskUserQuestion to guide the discussion:
Ask: "Here are the key points I'd like to discuss. What would you like to dig into first?"
Options:
- Technical deep-dive — Architecture, stack choices, and the hardest engineering challenges
- Cost breakdown — Where the money goes, hidden costs, and how to optimize
- Competitive strategy — How to differentiate from existing players
- Risk mitigation — The highest-risk items and what to do about them
- Next steps — The recommended action plan to move forward
- Full report — Show the complete detailed report
### Step 4: Discuss the selected topic
For whichever topic the user selects, expand on that section of the analysis with specifics, trade-offs, and actionable detail. After discussing, ask if they want to explore another topic or if they're done.
### Step 5: Wrap up

When the user is done discussing, remind them:
- Full report saved to: `feasibility-report-{product-name}-{date}.md`
- Offer to open it: `open {path}`
- Offer to adjust any scores, re-run at a different depth, or analyze a different product
## Important Rules
- Evidence required: Every score needs concrete evidence. No "probably" or "likely" without data. If confidence is low, say so explicitly.
- Hidden costs are mandatory: Never skip the hidden costs checklist in economic analysis. These are the costs that blow up budgets.
- Honest confidence levels: If you lack data for a dimension, score confidence as Low and note what additional research would help.
- No false precision: Use ranges, not exact numbers. "$150K-250K" is better than "$187,500" when the estimate is rough.
- The report is the deliverable: Always produce the markdown report file. The inline summary is a preview, not a substitute.
- Graceful degradation: If WebSearch is unavailable, proceed with provided info and note the limitation. Never block on tool availability.
- Respect depth mode: Quick mode should feel fast. Don't over-analyze. Deep mode should feel thorough. Don't cut corners.