Skills (SKILL.md) are configuration files that add specific capabilities to AI agents such as Claude Code, Cursor, and Codex.
You are a senior engineer conducting PR reviews with zero tolerance for mediocrity and laziness. Your mission is to ruthlessly identify every flaw, inefficiency, and bad practice in the submitted code.
Writing and authoring Quarto documents (.qmd), including code cell options, figure and table captions, cross-references, callout blocks (notes, warnings, tips), citations and bibliography, page layout and columns, Mermaid diagrams, YAML metadata configuration, and Quarto extensions. Also covers converting and migrating R Markdown (.Rmd), bookdown, blogdown, xaringan, and distill projects to Quarto, and creating Quarto websites, books, presentations, and reports.
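As a minimal illustration of the features this skill covers, a `.qmd` document combining YAML metadata, a labeled figure cell, a cross-reference, and a callout might look like the following sketch (the title, label, and cell contents are hypothetical):

````markdown
---
title: "Example Report"
format: html
---

Results are shown in @fig-trend.

```{r}
#| label: fig-trend
#| fig-cap: "Monthly trend"
#| echo: false
plot(1:10)
```

::: {.callout-note}
Cross-references require a cell `label` with a recognized prefix such as `fig-` or `tbl-`.
:::
````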
Best practices for writing R package tests using testthat version 3+. Use when writing, organizing, or improving tests for R packages. Covers test structure, expectations, fixtures, snapshots, mocking, and modern testthat 3 patterns including self-sufficient tests, proper cleanup with withr, and snapshot testing.
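A sketch of the self-sufficient test style this skill describes, using testthat 3 with `withr` for automatic cleanup (the file contents are hypothetical):

```r
library(testthat)

test_that("config file is read correctly", {
  # Setup is local to the test; withr deletes the file on exit.
  path <- withr::local_tempfile(fileext = ".txt")
  writeLines("value = 42", path)

  expect_true(file.exists(path))
  expect_equal(readLines(path), "value = 42")
})
```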
Guide plugin development workflow — editing skills, agents, hooks, or eval framework in this repo. Use when modifying files in plugins/elixir-phoenix/, lab/eval/, or lab/autoresearch/. Ensures changes pass eval, lint, and tests before committing.
Generate X/Twitter release promotion posts with ASCII tables and CodeSnap rendering. Use when writing release posts, promotion tweets, plugin announcements, or preparing social media content for new versions.
Deep qualitative analysis of high-signal sessions. Spawns subagents with v2 template, synthesizes patterns, compares against known findings. Use after /session-scan.
Analyze trends across session metrics. Computes windowed aggregates, deltas, and compares against MEMORY.md findings. Use periodically for progress tracking.
Analyze skill effectiveness across sessions. Computes per-skill metrics (action rate, friction, outcomes), identifies degrading skills, and generates improvement recommendations. Requires session-scan data in metrics.jsonl.
Analyze Phoenix context boundaries and module coupling via mix xref. Use when checking cross-context calls, validating dependencies, before splitting modules, or reviewing architecture.
Challenge mode reviews - rigorous questioning before approving changes. Use when you want thorough scrutiny of Ecto changes, LiveView events, or PR readiness.
Searchable Elixir/Phoenix/Ecto solution documentation system with YAML frontmatter. Builds institutional knowledge from solved problems. Use when consulting past solutions before investigating new issues.
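A solution entry in such a system might carry YAML frontmatter like the fragment below. The field names here are illustrative assumptions, not the skill's actual schema:

```yaml
---
# Hypothetical frontmatter schema for a solved-problem entry.
title: "Ecto.StaleEntryError on concurrent updates"
tags: [ecto, optimistic-locking]
modules: [MyApp.Accounts]
solved: 2024-05-01
---
```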
Use for large features spanning multiple contexts, new domain modules, or when the user wants autonomous end-to-end implementation. Runs the full plan-implement-review-compound cycle with specialist agents and Iron Laws enforcement.
Route ambiguous Phoenix/LiveView/Ecto work requests to the correct /phx: workflow. Use when intent is unclear, mixed (bug fix vs. refactor), or scope is ambiguous.
Investigate bugs and errors in Elixir/Phoenix — root-cause analysis for crashes, exceptions, stack traces, test failures. Use --parallel for deep 4-track investigation.
Scan Ecto code for N+1 anti-patterns — Repo calls in loops, missing preloads, unpreloaded associations. Use when excessive queries reported or wanting an N+1 audit.
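The anti-pattern this skill scans for, and its usual fix, can be sketched as follows (the `Post` schema and `Repo` module are hypothetical; `from` requires `import Ecto.Query`):

```elixir
import Ecto.Query

# N+1: one extra query per post to load its comments.
posts = Repo.all(Post)
posts = Enum.map(posts, fn post -> Repo.preload(post, :comments) end)

# Fix: preload the association so all comments load in a single extra query.
posts = Repo.all(from p in Post, preload: [:comments])
```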
Oban job processing — workers, perform/1 (OSS) and process/1 (Pro), queues, cron, retries, unique jobs, idempotency, Oban Pro (Workflow, Batch, Chunk, Smart Engine), and testing. Use when writing Oban workers, queue config, or debugging jobs.
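A minimal OSS Oban worker showing the queue, retry, and uniqueness options mentioned above (the `MyApp` module names are hypothetical):

```elixir
defmodule MyApp.Workers.SendEmail do
  # Dedupe identical jobs enqueued within 60 seconds.
  use Oban.Worker, queue: :mailers, max_attempts: 3, unique: [period: 60]

  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"user_id" => user_id}}) do
    # Keep the body idempotent: a retried job must be safe to re-run.
    MyApp.Mailer.deliver_welcome(user_id)
    :ok
  end
end

# Enqueue:
# %{user_id: 123} |> MyApp.Workers.SendEmail.new() |> Oban.insert()
```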
Analyze Elixir/Phoenix performance — N+1 queries, assign bloat, Ecto optimization, GenServer bottlenecks. Use when slowness, timeouts, or high memory reported.
Address PR review comments on Elixir/Phoenix code — fetch comments, draft responses, optionally fix code. Use when the user shares a PR URL or mentions reviewer feedback.
Implement small Phoenix changes without planning — add validations, update routes, fix components, create migrations. Use for single-file edits under 50 lines.
Review code with parallel agents — tests, security, Ecto, LiveView, Oban. Use after implementation to catch bugs and anti-patterns before committing.
Analyze Elixir/Phoenix technical debt — duplicates, refactoring opportunities, credo issues. Use when asked about code quality, cleanup, or what to improve.
Elixir testing patterns — ExUnit, Mox, factories, LiveView test helpers. Use when working on *_test.exs, test/support/, factory files, or fixing test failures.
Tidewave MCP runtime tools — debugging, smoke testing, live state inspection, SQL queries, hex docs. Use when evaluating code in a running Phoenix app.
Triage review findings interactively — approve, skip, or prioritize each issue. Use after /phx:review to filter findings before fixing.
Verify Elixir/Phoenix changes — compile, format, and test in one loop. Use after implementation, before PRs, or after fixing bugs.
Execute Elixir/Phoenix plan tasks with progress tracking. Use after /phx:plan to implement features with mix compile and mix test verification after each step, or --continue to resume interrupted work.
This skill should be used only when the user explicitly asks to use `$ralph-specum-feedback`, or explicitly asks Ralph Specum in Codex to draft or submit feedback.
This skill should be used only when the user explicitly asks to use `$ralph-specum-status`, or explicitly asks Ralph Specum in Codex for status or active spec progress.
This skill should be used when generating spec artifacts (research.md, requirements.md, design.md, tasks.md), formatting agent output, structuring phase results, or when any Ralph agent needs guidance on concise, scannable output formatting. Applies to all Ralph spec phase agents.
This skill should be used when running an interactive interview before a spec phase, gathering requirements through dialogue, asking the user clarifying questions before delegating to a subagent, or when any Ralph phase command (research, requirements, design, tasks) needs adaptive brainstorming dialogue. Covers the 3-phase algorithm (Understand, Propose Approaches, Confirm and Store).
Shared changelog conventions and formatting rules referenced by /create-changelog and /update-changelog. Not typically invoked directly.
Consult ChatGPT Pro via ChatGPT browser automation for problems that resist standard approaches.
Analyze what changed and generate a comprehensive test plan covering four escalating levels of testing depth.
Decompose a specification file into shells at `.turbo/shells/<spec-slug>-NN-<title>.md`. Each shell represents one unit of work for a separate Claude Code session.
Guide a collaborative discussion to explore a project idea, then synthesize the conversation into a comprehensive specification at `.turbo/specs/<slug>.md`.
Assess external feedback (code reviews, AI suggestions, PR comments) with adversarial verification. Triage findings into actionable verdicts. Do not apply fixes.
Execute multi-level exploratory testing that goes beyond smoke testing to actively find bugs through escalating test scenarios.
Fetch unresolved review comments, top-level review body comments, and PR conversation comments from a GitHub PR and present them in a readable summary. This is a read-only skill -- it does not evaluate the comments or change anything.
Post-implementation QA workflow: tests, code polishing, commit, and self-improvement.
Identify dead code in a codebase. **Core rule: code only used in tests is still dead code.** Only production usage counts.
Create distinctive, production-grade frontend interfaces with high design quality. Use when the user asks to build landing pages, websites, dashboards, web components, or any frontend UI. Generates creative, polished code that avoids generic AI aesthetics.
Shared writing style rules for GitHub-facing output (PR comments, PR descriptions, PR titles). Differentiates insider vs outsider voice based on author association. Not typically invoked directly — loaded by other skills before composing GitHub text.
Validate improvements from `.turbo/improvements.md` and run one lane per session: trivial, investigate, or standard. Mixing lanes in a single run tangles commits, so the skill processes exactly one lane per run.
Systematic methodology for finding the root cause of bugs, failures, and unexpected behavior. Cycle through characterize-isolate-hypothesize-test steps, with oracle escalation for hard problems.
Capture improvement opportunities discovered during work so they don't get silently dropped. Appends to a project-level `.turbo/improvements.md` file that serves as a backlog of actionable ideas.
Developer onboarding pipeline. Composes `/map-codebase`, `/review-tooling`, and `/review-agentic-setup` with inline agents, then synthesizes everything into `.turbo/onboarding.md` and `.turbo/onboardi`.
Pick the next shell from `.turbo/shells/` whose dependencies are satisfied, then carry it through the planning pipeline: expand → refine → self-improve → halt.