Structured debugging methodology using hypothesis-driven investigation, log analysis, and bisection to isolate and resolve defects.
Skills (SKILL.md) are configuration files that add specific capabilities to AI agents such as Claude Code, Cursor, and Codex.
Test-first development practice where test specifications are written before production code, integrated into plan tasks as mandatory first sub-steps.
Hierarchical coordination and drift detection with frequent checkpoints, shared memory coherence validation, role specialization enforcement, and short task cycles.
Multi-agent swarm formation and coordinated execution with topology-aware agent deployment, consensus protocols, and anti-drift enforcement.
Perform cross-artifact consistency and coverage analysis across constitution, specification, plan, and task artifacts to detect gaps, conflicts, and misalignments before implementation.
Execute development tasks to build features, producing code, tests, and configuration artifacts that satisfy specification requirements and comply with constitution standards.
Design technical architecture, select technology stack, and define implementation strategy from specifications and constitution constraints.
Validate implementation quality through custom checklists, scoring against constitution standards, specification coverage, and producing remediation recommendations.
Write feature specifications as requirements and user stories with acceptance criteria, focusing on business value and testable conditions.
Convert technical plans into actionable development tasks with dependency graphs, effort estimates, and parallelization opportunities.
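The dependency-graph ordering such a skill produces can be sketched with Kahn's algorithm; the task names and the `deps` mapping below are illustrative, not part of any specific skill's format.

```python
from collections import deque

def order_tasks(deps):
    """Topologically order tasks given a mapping task -> set of prerequisites.

    Every prerequisite must also appear as a key in `deps`.
    Raises ValueError if the dependency graph contains a cycle.
    """
    indegree = {t: len(pre) for t, pre in deps.items()}
    dependents = {t: [] for t in deps}
    for t, pre in deps.items():
        for p in pre:
            dependents[p].append(t)
    ready = deque(sorted(t for t, d in indegree.items() if d == 0))
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for nxt in dependents[t]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(deps):
        raise ValueError("cycle detected in task dependencies")
    return order
```

Tasks whose indegree reaches zero at the same time are the parallelization opportunities: they have no remaining dependencies on each other.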
Use when facing 2+ independent tasks that can be worked on without shared state or sequential dependencies.
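A minimal sketch of dispatching independent tasks concurrently with the standard library; the `max_workers` value is an arbitrary choice for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def run_independent(tasks):
    """Run independent zero-argument callables concurrently.

    Returns their results in input order. Safe only when the callables
    share no mutable state, which is exactly the precondition above.
    """
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(lambda task: task(), tasks))
```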
Use when encountering any bug, test failure, or unexpected behavior, before proposing fixes. Requires root cause investigation first.
Use when implementing any feature or bugfix, before writing implementation code. Enforces RED-GREEN-REFACTOR cycle.
Use when you have a spec or requirements for a multi-step task, before touching code. Creates bite-sized TDD implementation plans with dependency tracking.
Constitutional AI and safety guardrail prompts for aligned LLM behavior
Content moderation API integration using OpenAI Moderation, Perspective API, and others
Few-shot example generation and optimization for improved LLM performance
OpenTelemetry instrumentation for LLM applications with distributed tracing
Query expansion, HyDE, and multi-query generation for improved retrieval
Rasa NLU pipeline configuration and training for intent and entity extraction
Interface with Codeforces API for contest data, problem sets, and submissions
Automated Big-O complexity analysis of code and algorithms. Performs static analysis of loop structures, recursive call trees, space complexity estimation, and amortized analysis with detailed derivation documents.
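The loop-structure part of such an analysis can be approximated with Python's `ast` module; this is a crude sketch that only measures nesting depth, assuming each loop iterates O(n) times.

```python
import ast

def max_loop_depth(source):
    """Return the deepest for/while nesting in Python source.

    A depth of d is a rough static proxy for O(n**d) when every loop
    runs O(n) iterations; real analyzers must also model loop bounds.
    """
    tree = ast.parse(source)

    def depth(node):
        inc = 1 if isinstance(node, (ast.For, ast.While)) else 0
        children = [depth(child) for child in ast.iter_child_nodes(node)]
        return inc + (max(children) if children else 0)

    return depth(tree)
```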
Implement computational geometry algorithms
Provide robust computational geometry primitives
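The canonical robust primitive is the orientation test; a minimal sketch that is exact for integer coordinates, sidestepping floating-point pitfalls.

```python
def orientation(p, q, r):
    """Sign of the cross product (q - p) x (r - p) for 2D points.

    Returns 1 for counter-clockwise, -1 for clockwise, 0 for collinear.
    With integer coordinates the result is exact.
    """
    cross = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (cross > 0) - (cross < 0)
```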
Select optimal graph algorithm based on problem constraints
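For shortest paths, the selection logic reduces to a small decision table; the function below is a heuristic sketch, not the skill's actual rule set.

```python
def choose_shortest_path_algorithm(weighted, has_negative_edges):
    """Pick a shortest-path algorithm from basic problem constraints.

    Standard textbook mapping: unweighted graphs need only BFS,
    non-negative weights allow Dijkstra, negative edges require
    Bellman-Ford.
    """
    if not weighted:
        return "BFS"
    if has_negative_edges:
        return "Bellman-Ford"
    return "Dijkstra"
```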
Curated bank of interview problems organized by company, pattern, and difficulty. Provides problem recommendations, coverage tracking, weak area identification, and premium problem alternatives for FAANG interview preparation.
Fetch and parse LeetCode problems with metadata, constraints, examples, hints, difficulty ratings, and related problems. Integrates with LeetCode API for comprehensive problem data retrieval.
Apply language-specific micro-optimizations
Generate optimized prime sieves and factorization routines
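The baseline such a generator would emit is the Sieve of Eratosthenes; a compact sketch using a `bytearray` for memory efficiency.

```python
def sieve(limit):
    """Sieve of Eratosthenes: return all primes <= limit."""
    if limit < 2:
        return []
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0] = is_prime[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Cross off multiples starting at p*p; smaller multiples
            # were already removed by smaller prime factors.
            is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
    return [i for i, flag in enumerate(is_prime) if flag]
```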
Compare multiple solutions for correctness and performance
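A minimal comparison harness: run every candidate on the same cases, check they agree, and time each one. The dictionary-of-callables shape is an assumption for illustration.

```python
import time

def compare_solutions(solutions, cases):
    """Check that candidate solutions agree on all cases and time each.

    `solutions` maps a name to a callable; `cases` is a list of
    argument tuples. Returns (all_agree, {name: seconds}).
    """
    timings = {}
    outputs = []
    for name, fn in solutions.items():
        start = time.perf_counter()
        outputs.append([fn(*args) for args in cases])
        timings[name] = time.perf_counter() - start
    all_agree = all(out == outputs[0] for out in outputs[1:])
    return all_agree, timings
```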
Generate comprehensive test cases including edge cases, stress tests, and counter-examples for algorithm correctness verification. Supports random generation, constraint-based generation, and brute force oracle comparison.
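The brute-force-oracle comparison can be sketched as a seeded stress loop; the generator signature below is an assumption, not a fixed interface.

```python
import random

def stress_test(fast, brute, gen, rounds=200, seed=0):
    """Compare a fast solution against a brute-force oracle on random inputs.

    `gen` takes a random.Random and returns one test case. Returns the
    first failing input, or None if all rounds agree. A fixed seed keeps
    failures reproducible.
    """
    rng = random.Random(seed)
    for _ in range(rounds):
        case = gen(rng)
        if fast(case) != brute(case):
            return case
    return None
```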
Generate BATS test structure and fixtures for shell script testing with setup/teardown, assertions, and mocking.
Generate Bubble Tea (Go) TUI application structure with models, commands, and views using the Elm architecture.
Generate Chocolatey package for Windows CLI distribution.
Set up E2E test harness for CLI applications with process spawning and assertions.
Create mock stdin utilities for interactive CLI testing.
Generate cross-platform path handling utilities for Windows, macOS, and Linux compatibility in CLI applications.
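One such utility can be sketched with `pathlib`'s pure path classes, which parse paths for a platform without touching the filesystem; the function name is illustrative.

```python
from pathlib import PureWindowsPath

def to_posix(path_str):
    """Normalize a path string, possibly using Windows separators,
    to forward slashes.

    PureWindowsPath accepts both '\\' and '/' as separators, so this
    is safe to apply to paths from either platform.
    """
    return PureWindowsPath(path_str).as_posix()
```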
Generate Ink (React for CLI) components for terminal UIs with hooks, state management, and layout components.
Set up MCP Inspector for debugging and testing MCP servers with request logging, response inspection, and protocol validation.
Create mock MCP client for server testing with request/response simulation.
Configure PyInstaller for Python binary builds with spec files and bundling options.
Generate .shellcheckrc configuration with appropriate rules, exclusions, and severity settings for shell script linting.
Generate Textual (Python) TUI application structure with widgets, screens, and CSS styling.
Set up testing utilities for TUI components with ink-testing-library and Bubble Tea testing.
Discover and document existing API endpoints from code, logs, and traffic analysis
Generate characterization tests to capture and verify existing behavior before migration
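The core of a characterization test is a golden snapshot of current behavior; a minimal sketch assuming JSON-serializable return values.

```python
import json

def characterize(fn, inputs):
    """Record current behavior of `fn` over `inputs` as a golden snapshot.

    Keys are the repr of the argument tuples; sort_keys makes the
    snapshot stable across runs.
    """
    return json.dumps({repr(args): fn(*args) for args in inputs},
                      sort_keys=True)

def assert_unchanged(fn, inputs, snapshot):
    """Fail if `fn`'s behavior on `inputs` differs from the snapshot."""
    current = characterize(fn, inputs)
    assert current == snapshot, "behavior drifted from characterization snapshot"
```

The snapshot is captured against the legacy implementation before migration, then replayed against the replacement.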
Generate contract tests for API migrations with consumer-driven contracts and provider verification
Generate OpenAPI specifications from code or legacy APIs with schema inference and documentation
Analyze test coverage and identify gaps before migration to ensure adequate safety nets