Simplifies working code while preserving exact behavior. Use after tests pass, during review feedback, or when code is harder to read, maintain, or verify than it needs to be, without changing product behavior.
Skills (SKILL.md) are configuration files that add specific capabilities to AI agents such as Claude Code, Cursor, and Codex.
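As a minimal sketch of what such a file looks like (the skill name, description, and body below are illustrative, not from any listed skill):

```markdown
---
name: commit-helper
description: Drafts conventional-commit messages from staged changes. Use when the user asks for a commit message.
---

# Commit Helper

When invoked, inspect the staged diff and propose a one-line
conventional-commit subject plus an optional explanatory body.
```

The YAML frontmatter carries the trigger metadata (the agent matches on `description`), while the markdown body holds the instructions the agent follows once the skill activates.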
Reviews database schemas, queries, and migrations for correctness, performance, security, and best practices. Use when reviewing SQL migration files or when the user mentions database review, schema review, or query audit.
Reviews UI/UX designs, wireframes, and design systems for usability, accessibility, consistency, and implementation feasibility. Use when reviewing design specs or when the user mentions design review, UX review, or design feedback.
Provides Docker and Docker Compose patterns including multi-stage builds, networking, volumes, and production configurations. Use when working with Dockerfile or docker-compose.yml, or when the user mentions Docker, containers, or containerization.
Provides .NET and ASP.NET Core patterns for REST APIs, Entity Framework, dependency injection, and middleware. Use when working with C# files (*.cs, *.csproj) or when the user mentions .NET, ASP.NET Core, C#, or Entity Framework.
Validates a software product, service, or feature against readiness gates before advancing to the next delivery phase. Use when planning a phase transition or when the user mentions gate check, phase review, or readiness validation.
Conducts a structured milestone review analyzing delivered features, metrics, blockers, and readiness for the next phase. Use when completing a milestone or when the user mentions milestone review or phase gate.
Reviews mobile app code and design for platform guidelines compliance, performance, accessibility, and offline behavior. Use when reviewing a mobile app feature or when the user mentions mobile review, iOS guidelines, or app store compliance.
Provides PostgreSQL patterns for query optimization, schema design, indexing strategies, RLS, and security. Use when working with PostgreSQL SQL files or when the user mentions PostgreSQL, Postgres, pgvector, Supabase, or database optimization.
Writes blameless postmortems with root cause analysis, incident timelines, contributing factors, and action items. Use when conducting incident reviews or when the user mentions postmortem, root cause analysis, or blameless review.
Creates and formats pull request titles, descriptions, and linked issue references following conventional commit standards. Use when creating or updating a pull request or when the user mentions PR description, pull request, or opening a PR.
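As a sketch of the conventional-commit style such a skill targets (the scope, endpoint, and issue number here are hypothetical):

```text
feat(export): add CSV export for audit logs

- Streams rows instead of buffering the full result set
- Adds a format=csv query parameter to the audit-logs endpoint

Closes #482
```

The `type(scope): subject` first line is the conventional-commit core; the trailing `Closes #N` line is what produces the linked issue reference.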
Processes code review feedback systematically by classifying findings, deciding fix or reject with evidence, applying approved fixes, and re-verifying before marking comments resolved.
Generates a sprint or milestone retrospective analyzing completed work, velocity, blockers, and patterns to produce actionable insights. Use when ending a sprint or milestone, or when the user mentions retrospective, retro, or sprint review.
Reviews a product, technical, API, UI, or implementation spec for completeness, testability, architectural fit, and readiness before planning or implementation.
Conducts a comprehensive security audit covering web application vulnerabilities, API security, OWASP Top 10, and security hardening recommendations. Use when auditing a codebase for security or when the user mentions security audit, penetration testing, or vulnerability scan.
Reviews any business decision, plan, or strategy through the minimalist entrepreneur lens. Use when someone wants a gut-check on a business decision, wants to simplify their approach, or needs to decide between options.
Executes an approved implementation plan task-by-task with a fresh implementer subagent per task and two review gates. Use when a plan is approved, tasks are mostly sequential, and quality gates are needed without full orchestrate/fork-join overhead.
Orchestrates the backend team of technical-director, backend-developer, data-engineer, and security-engineer to design, implement, and review a backend system end-to-end. Use when a backend feature needs coordinated multi-specialist delivery.
Conducts comprehensive multi-perspective architecture reviews with all team members.
Conducts focused reviews from a specific specialist's perspective.
When the user runs `/plan-execute {plan-file-path}`, start the "orchestrated plan execution" workflow:
When the user runs `/plan-review {plan-file-path}`, start the "adversarial plan iteration" workflow:
Medical AI paper optimization for AI search engines (Perplexity, ChatGPT web, Elicit, Consensus, SciSpace) and RAG-based literature tools. Applies when drafting or reviewing titles, abstracts, structured summary boxes (Key Points / Research in Context / Plain-Language Summary), manuscripts for high-impact medical AI journals (Lancet Digital Health, Radiology, Radiology-AI, npj Digital Medicine, Nature Medicine), preprints (medRxiv/arXiv), GitHub README + CITATION.cff + Zenodo archives, and Hugging Face model/dataset cards. Integrates TRIPOD+AI, CLAIM 2024, STARD-AI, TRIPOD-LLM, DECIDE-AI reporting requirements with generative engine optimization (GEO) principles. Produces a visible pass/fail checklist.
Systematic review and meta-analysis pipeline for medical research. Covers protocol registration (PROSPERO), search strategy, screening, data extraction, risk of bias assessment (QUADAS-2/ROBINS-I), statistical synthesis (bivariate/HSROC for DTA, random-effects for intervention), and PRISMA-compliant reporting. Supports both DTA and intervention meta-analyses.
Peer review assistant for medical journals. Generates structured review drafts with journal-specific formatting. Constructive developmental tone with systematic manuscript analysis.
Parse peer reviewer comments and generate a structured Response to Reviewers document with tracked manuscript changes. Classifies comments as MAJOR/MINOR/REBUTTAL, coordinates new analyses with /analyze-stats and /make-figures, and produces a cover letter for the editor.
Pre-submission self-review for the user's own manuscripts, applying a reviewer perspective. Systematic check across 10 categories with research-type branching. Outputs Anticipated Major/Minor Comments with severity framing and optional R0 numbering for /revise pipeline integration.
Full-pipeline medical/scientific paper writing. 8-phase IMRAD workflow from outline to submission-ready manuscript. Supports original articles, case reports, meta-analyses, AI validation studies, animal studies, and technical notes. Do NOT trigger for self-checking (use self-review instead).
Answer questions about code, architecture, and technical decisions — no implementation. Trigger on questions asking 'why', 'what does this do', 'what is the purpose of', 'explain', 'what's the difference', 'compare', or 'what are the tradeoffs' — even when referencing specific files, code snippets, or inline code. The key signal is the user wants to UNDERSTAND something, not change it. Do NOT trigger for requests to build, fix, plan, review, research, or add/modify code.
Implement, build, create, or add any feature, endpoint, page, component, or functionality. Use this skill whenever the user asks you to write new code or make code changes — whether it's adding an API endpoint, building a UI page, creating an export feature, wiring up a webhook, implementing a search/filter, or any other hands-on coding task. This is the default skill for all 'build this', 'add this', 'create this', 'wire up', 'implement' requests. Covers the full cycle: clarify requirements, plan if needed, write code, verify, and review. Do NOT use for pure research, debugging, documentation, or explanation — only when the user wants working code delivered.
Use when the user wants a written, reviewable plan or spec produced before coding starts. Triggers on: mapping out changes without implementing, thinking through risks of upgrades or migrations, evaluating approaches before committing to one, writing specs for team review, phasing work into stages, or any request that explicitly defers coding ('don't implement yet', 'before we build'). The distinguishing signal is that the user wants a plan artifact — not implementation, not a conversational answer. MUST activate inside Claude's native plan mode for better planning behavior.
Specialized visual and multimedia processing tools. Use this skill whenever a task involves complex visual content — UI mockups, dense screenshots, design images, charts, artwork — where precise details like spacing, hex colors, font sizes, and component hierarchy need to be extracted accurately. Also use for: reviewing or auditing existing UI against designs, comparing screenshots for visual regressions, transcribing audio/video, extracting data from PDFs with complex layouts, and generating images. Trigger whenever the user wants to implement from a design, review or compare UI screenshots, analyze visual details precisely, describe artwork or aesthetic content, or process any media file (audio, video, PDF).
Generates a Claude Code configuration tailored to a specific project. Use whenever the user wants to prime a project, set up claude for a repo, bootstrap claude config, or re-prime/refresh an already-primed project (for example after `/prime-sync` pulled new starter content). Triggers on 'prime', 'prime this project', 'optimus-prime', 're-prime', 'refresh claude config', 'regenerate CLAUDE.md', 'set up claude for this repo'. Deeply analyzes the real codebase and builds project-specific skills, rules, and CLAUDE.md — not generic boilerplate. For ongoing config health checks and proposal review, use `self-evolve` instead.
Review code for quality, correctness, and fit. Use when the user wants judgment on code that already exists — their own changes, a teammate's patch, a PR, branch, commit, diff, staged changes, or one or more files to look over. Activate on requests like review, look over, sanity check, critique, code review, or 'is this good?' The key signal is that the user wants evaluation of existing code and its tradeoffs, not implementation, debugging, or explanation. This skill works independently, but when plans, specs, task artifacts, or prior discussion exist, use them to understand why the code exists before judging it.
Use when the user wants to work on a Claude Code skill file (SKILL.md): writing one from scratch, testing whether an existing one works well, running evals or benchmarks, improving its instructions, or fixing why it isn't triggering. Triggers on: 'make a skill for X', 'test this skill', 'run evals on my SKILL.md', 'touch-skill', sharing a SKILL.md and asking if it's ready to ship. The key signal is intent to create, validate, or improve a skill — not just mention one. Do NOT trigger for general Claude Code questions, hook debugging, or CLAUDE.md configuration.
Use when the user wants to know if something works — the answer requires running code, not analyzing it. Output is a verdict backed by evidence: passed, failed, or broken. Primary triggers: 'run the tests', 'does X still work after my change?', 'did the merge break anything?', 'verify the fix worked', 'check if the endpoint returns X', 'confirm nothing regressed', 'run tests/unit/test_foo.py', 'let me know the results', 'make sure my changes didn't break anything'. Hard stops — do NOT use for: reviewing test code for quality/coverage gaps, debugging why test infrastructure/databases/seed scripts are misbehaving, writing or fixing tests, diagnosing root causes of unexpected behavior. The deciding question: is the user asking for the result of executing something, or asking for help understanding/analyzing/improving something? If it's the latter, use diagnose or review-code instead.
Security audit, hardening, threat modeling (STRIDE/PASTA), Red/Blue Team, OWASP checks, code review, incident response, and infrastructure security for any project.
Use when a coding task should be driven end-to-end from issue intake through implementation, review, deployment, and acceptance verification with minimal human re-intervention.
Orchestrate autonomous AI development pipelines through your Kanban board (Asana, GitHub Projects, Linear). Manages multi-worker Claude Code dispatch, deterministic quality gates, adversarial review, per-task cost tracking, and crash-proof pipeline execution.
The AI-native file format: EXIF for AI. Stamps every file with trust scores, source provenance, and compliance metadata. Embeds into 20+ formats (DOCX, PDF, images, code). Supports EU AI Act, SOX, and HIPAA auditing.
Scrape reviews, ratings, and brand mentions from multiple platforms using Apify Actors.
Analyze competitor strategies, content, pricing, ads, and market positioning across Google Maps, Booking.com, Facebook, Instagram, YouTube, and TikTok.
Extract product data, prices, reviews, and seller information from any e-commerce platform using Apify's E-commerce Scraping Tool.
This skill ensures all code follows security best practices and identifies potential vulnerabilities. Use when implementing authentication or authorization, handling user input or file uploads, or creating new API endpoints.
Use when a coding task must be completed against explicit acceptance criteria with minimal user re-intervention across implementation, review feedback, deployment, and runtime verification.
Transform code reviews from gatekeeping to knowledge sharing through constructive feedback, systematic analysis, and collaborative improvement.
Deep audit before GitHub push: removes junk files, dead code, security holes, and optimization issues. Checks every file line-by-line for production readiness.
This skill name is kept for compatibility.
Run autonomous research tasks that plan, search, read, and synthesize information into comprehensive reports.