---
name: tech-debt-detector
description: Detect and classify technical debt in AI-generated code — patterns specific to LLM outputs, shallow implementations, missing edge cases, and accumulation signals.
version: "1.0.0"
last-updated: "2026-04-22"
model_tested: "claude-sonnet-4-6"
category: eval
platforms: [claude-code, codex, gemini-cli, cursor, copilot, windsurf, cline]
language: en
geo_relevance: [global]
priority: high
dependencies:
  mcp: []
  skills: []
  apis: []
  data: []
update_sources:
  - url: "https://newsletter.pragmaticengineer.com/p/the-impact-of-ai-on-software-engineers-2026"
    check_frequency: "quarterly"
    last_checked: "2026-04-22"
license: MIT
---
# Technical Debt Detector (AI-Generated Code)
BCG research (2026) reports a 2500% increase in code defects linked to AI-generated code. This skill identifies the debt patterns specific to LLM outputs.
## When to Use
- After generating code with AI (review before merging)
- During code review of AI-assisted PRs
- When refactoring AI-generated modules
- When investigating production issues in AI-written code
- Periodic tech debt audits
## AI-Specific Debt Patterns
### Pattern 1: Shallow Implementation
LLMs produce code that works for the happy path but fails on edge cases.
Signals:
- No error handling beyond generic try/catch
- No input validation at system boundaries
- No null/undefined checks on external data
- No timeout on network calls
- Functions that work for sample data but fail at scale
Check: For each function, ask: "What happens with empty input? Null? Very large input? Concurrent access? Network failure?"
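The check above can be made concrete with a minimal sketch. The function names are illustrative, not from any real codebase; the "hardened" version is one reasonable way to answer the boundary questions, not the only one:

```typescript
// Shallow: works on sample data, but silently returns NaN for empty input
// and propagates NaN/Infinity without complaint.
function averageShallow(nums: number[]): number {
  return nums.reduce((a, b) => a + b, 0) / nums.length;
}

// Hardened: answers "empty input? null? bad values?" explicitly.
function averageHardened(nums: number[]): number {
  if (!Array.isArray(nums)) throw new TypeError("expected an array of numbers");
  if (nums.length === 0) throw new RangeError("cannot average an empty array");
  if (!nums.every(Number.isFinite)) throw new RangeError("non-finite value in input");
  return nums.reduce((a, b) => a + b, 0) / nums.length;
}
```

The shallow version is the more dangerous one precisely because it never throws: the NaN surfaces far from its origin.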
### Pattern 2: Over-Abstraction
LLMs tend to introduce abstractions even when they are unnecessary.
Signals:
- Wrapper classes with no added logic
- Factory patterns for single implementations
- Interface with exactly one implementation
- Helper functions called only once
- Generic framework for a specific problem
Check: "Can I delete this abstraction and use the concrete implementation directly?"
### Pattern 3: Stale Patterns
LLMs use patterns from training data that may be outdated.
Signals:
- Class components in React (should be hooks)
- Callbacks instead of async/await
- var instead of const/let
- jQuery patterns in modern codebase
- Deprecated API usage
Check: "Is this the current recommended pattern for this framework version?"
### Pattern 4: Copy-Paste Drift
LLMs generate similar code for similar tasks without deduplication.
Signals:
- 3+ functions with >70% similar logic
- Same validation logic repeated in multiple places
- Similar error messages with slight variations
- Duplicate type definitions
Check: "Are there 3+ places doing essentially the same thing?"
### Pattern 5: Missing Observability
LLMs rarely add logging, metrics, or monitoring.
Signals:
- No logging in error paths
- No metrics on key operations
- No health check endpoints
- No structured error codes
- No request tracing
Check: "If this fails in production at 3 AM, can I diagnose the problem from logs alone?"
### Pattern 6: Hardcoded Configuration
LLMs often hardcode values that should be configurable.
Signals:
- URLs, ports, timeouts in code (not config)
- Magic numbers without named constants
- Environment-specific values in source
- Feature flags as if/else in code
Check: "Can I deploy this to a different environment without changing code?"
## Debt Classification
| Severity | Impact | Fix Timeline |
|---|---|---|
| Critical | Security vulnerability, data loss risk | Immediately |
| High | Production failures under load or edge cases | This sprint |
| Medium | Maintainability issues, duplication | Next sprint |
| Low | Style, naming, minor abstractions | Backlog |
## Output Format
```
TECH DEBT SCAN: {file/module}
Generated by: {AI tool if known}
Patterns found: {count}

[HIGH] Pattern 1 (Shallow): No error handling in fetchUserData() (line 42)
[MED]  Pattern 4 (Copy-Paste): validateInput() duplicated in 3 controllers
[LOW]  Pattern 3 (Stale): Using deprecated fetch API options (line 78)

Estimated debt: {hours to fix}
Priority: Fix HIGH items before merging.
```
## What This Skill Does NOT Do
- Does not fix the debt (it identifies and classifies only)
- Does not replace SonarQube or ESLint (it complements them with AI-specific patterns)
- Does not measure test coverage (use coverage tools)
- Does not block PRs (advisory only)