Perform cross-artifact consistency and coverage analysis across constitution, specification, plan, and task artifacts to detect gaps, conflicts, and misalignments before implementation.
Skills (SKILL.md) are configuration files that add specific capabilities to AI agents such as Claude Code, Cursor, and Codex.
Execute development tasks to build features, producing code, tests, and configuration artifacts that satisfy specification requirements and comply with constitution standards.
Design technical architecture, select technology stack, and define implementation strategy from specifications and constitution constraints.
Validate implementation quality through custom checklists, scoring against constitution standards, specification coverage, and producing remediation recommendations.
Write feature specifications as requirements and user stories with acceptance criteria, focusing on business value and testable conditions.
Convert technical plans into actionable development tasks with dependency graphs, effort estimates, and parallelization opportunities.
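The dependency graphs and parallelization opportunities mentioned above can be sketched with the standard library's `graphlib`; the task names here are hypothetical, and batching the "ready" set is one simple way to surface tasks that can run in parallel:

```python
from graphlib import TopologicalSorter

# Hypothetical task graph: each task maps to the set of tasks it depends on.
tasks = {
    "design-api": set(),
    "write-tests": {"design-api"},
    "implement": {"write-tests"},
    "write-docs": {"design-api"},
}

ts = TopologicalSorter(tasks)
ts.prepare()
batches = []
while ts.is_active():
    ready = sorted(ts.get_ready())  # no unmet dependencies: parallelizable batch
    batches.append(ready)
    ts.done(*ready)

print(batches)
# → [['design-api'], ['write-docs', 'write-tests'], ['implement']]
```

Each inner list is a batch of tasks with no mutual dependencies, so they are candidates for parallel execution.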
Use when facing 2+ independent tasks that can be worked on without shared state or sequential dependencies.
Use when you have a written implementation plan to execute in a separate session with review checkpoints between batches.
Use when implementation is complete, all tests pass, and you need to decide how to integrate the work.
Use when receiving code review feedback, before implementing suggestions. Requires technical rigor and verification, not blind implementation.
Use when completing tasks, implementing major features, or before merging to verify work meets requirements.
Use when executing implementation plans with independent tasks in the current session. Dispatches fresh subagent per task.
Use when encountering any bug, test failure, or unexpected behavior, before proposing fixes. Requires root cause investigation first.
Use when implementing any feature or bugfix, before writing implementation code. Enforces RED-GREEN-REFACTOR cycle.
Use when starting any conversation. Establishes how to find and use skills, requiring skill invocation before any response.
Use when you have a spec or requirements for a multi-step task, before touching code. Creates bite-sized TDD implementation plans with dependency tracking.
Chain-of-thought and step-by-step reasoning prompts for complex problem solving
Constitutional AI and safety guardrail prompts for aligned LLM behavior
Content moderation API integration using OpenAI Moderation, Perspective API, and others
CrewAI multi-agent orchestration setup for collaborative AI systems
Entity and fact extraction for user profiling and personalization
Few-shot example generation and optimization for improved LLM performance
Guardrails AI validation framework setup for LLM applications. Implement input/output validation, safety checks, and structured output enforcement.
LangChain memory integration including ConversationBufferMemory, ConversationSummaryMemory, and vector-based memory
LangChain retriever implementation with various retrieval strategies for RAG applications
LangGraph checkpoint and persistence configuration for stateful workflow management
LangSmith tracing and debugging setup for LLM applications. Configure observability, capture traces, and enable debugging for LangChain/LangGraph agents.
LlamaIndex agent and query engine setup for RAG-powered agents
Mem0 memory layer integration for AI agents. Implement persistent, semantic memory for long-term context retention and personalization.
Conversation summarization for memory compression and context management
OpenTelemetry instrumentation for LLM applications with distributed tracing
Arize Phoenix observability platform setup for LLM debugging and evaluation
PII detection and redaction utilities for privacy-compliant conversational AI
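A minimal sketch of regex-based redaction, assuming a pattern-per-label design; the patterns below are illustrative only and far from production-grade (real PII detection typically combines NER models with validated patterns):

```python
import re

# Illustrative patterns only — not exhaustive and not production-grade.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a bracketed type label."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach me at jane.doe@example.com or 555-123-4567."))
# → Reach me at [EMAIL] or [PHONE].
```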
Structured prompt template creation with variables, formatting, and version control
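One lightweight way to get named variables plus cheap validation is the standard library's `string.Template`; the template text and version label below are hypothetical:

```python
from string import Template

# Hypothetical versioned template; substitute() raises KeyError on a
# missing variable, which doubles as a cheap validation step.
PROMPT_V1 = Template(
    "You are a $role. Answer the question below in at most $max_words words.\n"
    "Question: $question"
)

prompt = PROMPT_V1.substitute(
    role="math tutor", max_words=50, question="What is a prime number?"
)
print(prompt)
```

Keeping templates as named, versioned constants (`PROMPT_V1`, `PROMPT_V2`, …) makes it easy to diff and roll back prompt changes in version control.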
Batch embedding generation with caching, rate limiting, and multiple provider support
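The caching and rate-limiting pieces can be sketched provider-agnostically; `embed_fn` below stands in for a real embedding API, and the hashing/pacing choices are assumptions, not a prescribed design:

```python
import hashlib
import time
from typing import Callable

def batch_embed(texts, embed_fn: Callable, cache: dict,
                batch_size: int = 16, min_interval: float = 0.0):
    """Embed texts in batches, skipping cache hits and pacing provider calls."""
    results, pending = {}, []
    for t in texts:
        key = hashlib.sha256(t.encode()).hexdigest()
        if key in cache:
            results[t] = cache[key]
        else:
            pending.append((key, t))
    for i in range(0, len(pending), batch_size):
        batch = pending[i:i + batch_size]
        vectors = embed_fn([t for _, t in batch])  # one provider call per batch
        for (key, t), vec in zip(batch, vectors):
            cache[key] = vec
            results[t] = vec
        if min_interval:
            time.sleep(min_interval)  # crude rate limiting between batches
    return results

# Toy "provider": hypothetical stand-in returning length-based vectors.
fake_embed = lambda batch: [[float(len(t))] for t in batch]
cache = {}
out = batch_embed(["hi", "world", "hi"], fake_embed, cache)
print(out)
```

A real implementation would persist the cache (e.g. on disk or in Redis) and honor the provider's actual rate limits rather than a fixed sleep.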
Hybrid search combining semantic and keyword retrieval for RAG pipelines. Implement BM25 + dense vector search with fusion strategies.
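One common fusion strategy for combining a BM25 ranking with a dense-vector ranking is reciprocal rank fusion (RRF); the document ids below are hypothetical:

```python
def reciprocal_rank_fusion(rankings, k: int = 60):
    """Fuse ranked result lists (e.g. BM25 and dense retrieval).

    rankings: list of lists of doc ids, best first.
    Each doc scores sum(1 / (k + rank)) over the lists it appears in.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["d1", "d2", "d3"]   # keyword ranking (hypothetical ids)
dense = ["d3", "d1", "d4"]  # semantic ranking
fused = reciprocal_rank_fusion([bm25, dense])
print(fused)
# → ['d1', 'd3', 'd2', 'd4']
```

RRF needs no score normalization across retrievers, which is why it is a popular default; weighted score fusion is the main alternative when retriever scores are comparable.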
Query expansion, HyDE, and multi-query generation for improved retrieval
Cross-encoder reranking and MMR diversity filtering for improved retrieval quality
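The MMR half of that skill can be sketched in pure Python; the toy 2-D vectors and the λ = 0.5 trade-off below are assumptions for illustration:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def mmr(query_vec, doc_vecs, lam=0.5, top_k=2):
    """Maximal Marginal Relevance: trade query relevance against redundancy."""
    selected, candidates = [], list(range(len(doc_vecs)))
    while candidates and len(selected) < top_k:
        def score(i):
            relevance = cosine(query_vec, doc_vecs[i])
            redundancy = max((cosine(doc_vecs[i], doc_vecs[j]) for j in selected),
                             default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy vectors: docs 0 and 1 are near-duplicates; doc 2 differs but is relevant.
docs = [[1.0, 0.0], [1.0, 0.05], [0.6, 0.8]]
query = [1.0, 0.1]
print(mmr(query, docs))  # picks one near-duplicate, then the diverse doc
```

With λ = 1 this degenerates to plain relevance ranking; lowering λ increasingly penalizes documents similar to what has already been selected.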
Rasa NLU pipeline configuration and training for intent and entity extraction
Redis backend for conversation state persistence and caching
Microsoft Semantic Kernel planner and plugin setup for orchestrated AI
Zep memory server integration for long-term conversation memory and user profiling
Generate visual representations of algorithm execution
Profile code performance and identify bottlenecks
Manage and generate competitive programming templates
Interface with Codeforces API for contest data, problem sets, and submissions
Automated Big-O complexity analysis of code and algorithms. Performs static analysis of loop structures, recursive call trees, space complexity estimation, and amortized analysis with detailed derivation documents.
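A heavily simplified sketch of the loop-structure part of such static analysis, using Python's `ast` module: maximum loop-nesting depth is a crude proxy for polynomial degree (depth 2 suggests O(n²)). Real analysis must also handle loop bounds, recursion, and amortized costs, which this ignores:

```python
import ast

def max_loop_depth(source: str) -> int:
    """Return the deepest for/while nesting in a Python source string."""
    def depth(node: ast.AST) -> int:
        here = 1 if isinstance(node, (ast.For, ast.While)) else 0
        return here + max((depth(c) for c in ast.iter_child_nodes(node)),
                          default=0)
    return depth(ast.parse(source))

snippet = """
for i in range(n):
    for j in range(n):
        total += grid[i][j]
"""
print(max_loop_depth(snippet))  # → 2, i.e. a doubly nested loop
```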
Apply advanced DP optimizations automatically
Implement computational geometry algorithms
Provide robust computational geometry primitives