For each reviewer question thread, recall the implementer's reasoning and compose a raw answer. The answers are plain text and feed into a downstream reply-drafting skill that applies voice rules and posts the replies.
Skills (SKILL.md) are configuration files that add specific capabilities to AI agents such as Claude Code, Cursor, and Codex.
AI-powered code review via the codex CLI. Runs non-interactively.
Draft a concise and descriptive title and a short paragraph for a PR. Explain the purpose of the changes, the problem they solve, and the general approach taken. When the changes involve clear runtime impact, call it out.
Assess external feedback (code reviews, AI suggestions, PR comments) with adversarial verification. Triage findings into actionable verdicts. Do not apply fixes.
Fetch unresolved review comments, top-level review body comments, and PR conversation comments from a GitHub PR and present them in a readable summary. This is a read-only skill -- it does not evaluate the comments.
Developer onboarding pipeline. Composes `/map-codebase`, `/review-tooling`, and `/review-agentic-setup` with inline agents, then synthesizes everything into `.turbo/onboarding.md` and `.turbo/onboardi
Independent peer review via codex. Translates a natural-language review request into a codex-specific prompt so invocations stay implementation-agnostic.
At the start of every invocation (including re-runs from Step 7), use `TaskCreate` to create a task for each step:
Locate the Claude Code transcript that produced a given change and extract the implementer's reasoning. Useful for answering reviewer questions, writing post-hoc explanations, or recovering forgotten context.
Loop the review pipeline over a planning artifact until no new findings are accepted. Writes back to the artifact file(s) in place. Supports plans, shells, and specs.
Draft replies for a processed review-thread list, confirm with the user, and post the surviving drafts.
Fetch unresolved review comments from a GitHub PR (inline threads, review-body observations, and issue-comment observations from the PR conversation), evaluate each one, and fix or skip based on confidence.
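The filter-then-triage step this skill describes can be sketched in a few lines. A minimal sketch, assuming a response shaped like GitHub's GraphQL `reviewThreads` connection and an already-computed `confidence` field on each thread — both field names and the threshold are illustrative assumptions, not the skill's actual schema:

```python
# Sketch: keep unresolved review threads, then bucket them by confidence.
# The `isResolved` field mirrors GitHub's GraphQL reviewThreads connection;
# the `confidence` field and 0.8 threshold are assumptions for illustration.

def unresolved_threads(review_threads: list[dict]) -> list[dict]:
    """Keep only threads that no reviewer has marked resolved."""
    return [t for t in review_threads if not t.get("isResolved", False)]

def triage(threads: list[dict], min_confidence: float = 0.8) -> dict:
    """Split threads into fix-now vs. skip buckets by confidence score."""
    buckets = {"fix": [], "skip": []}
    for t in threads:
        key = "fix" if t.get("confidence", 0.0) >= min_confidence else "skip"
        buckets[key].append(t)
    return buckets

sample = [
    {"id": 1, "isResolved": True,  "confidence": 0.9},
    {"id": 2, "isResolved": False, "confidence": 0.95},
    {"id": 3, "isResolved": False, "confidence": 0.4},
]
open_threads = unresolved_threads(sample)
result = triage(open_threads)
```

In the real skill the triage decision would come from the model's own evaluation rather than a numeric field, but the shape of the control flow is the same.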
Detect agentic coding infrastructure and flag gaps across Claude Code and Codex CLI conventions. Analysis only. Does not install or configure anything.
Review code against type-specific criteria. Runs internal reviews and `/peer-review` in parallel by default. Returns combined structured findings.
Detect package managers and discover outdated or vulnerable dependencies. Analysis only. Does not upgrade.
Review a planning artifact against type-specific criteria. Runs internal review and `/peer-review` in parallel by default. Returns combined structured findings.
Fetch PR context, run a comprehensive code review, evaluate findings, and dispatch accepted findings to implementation.
Detect dev tooling infrastructure and flag gaps. Analysis only. Does not install or configure tools.
Review the current conversation to extract durable lessons and route each one to the right knowledge layer.
Review code for reuse, quality, efficiency, and clarity issues, then fix them.
Upgrade project dependencies, researching breaking changes for major version updates.
Navigate privacy regulations (GDPR, CCPA), review DPAs, and handle data subject requests. Use when reviewing data processing agreements, responding to data subject access or deletion requests, assessing cross-border data transfer requirements, or evaluating privacy compliance.
Review contracts against your organization's negotiation playbook, flagging deviations and generating redline suggestions. Use when reviewing vendor contracts, customer agreements, or any commercial agreement where you need clause-by-clause analysis against standard positions.
Programmatically edit Word documents (.docx) with live preview and track changes via SuperDoc VS Code extension. Use when editing DOCX files, making tracked changes, redlining, marking up contracts, or when the user wants to modify Word documents with insertions/deletions visible. Triggers on docx, Word, track changes, redline, markup.
Assess and classify legal risks using a severity-by-likelihood framework with escalation criteria. Use when evaluating contract risk, assessing deal exposure, classifying issues by severity, or determining whether a matter needs senior counsel or outside legal review.
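A severity-by-likelihood framework like the one above is commonly implemented as a scoring matrix with an escalation cutoff. A minimal sketch — the labels, scores, and thresholds below are illustrative assumptions, not this skill's actual rubric:

```python
# Illustrative severity-by-likelihood risk matrix with an escalation rule.
# Labels and thresholds are assumptions for the sketch, not the skill's rubric.

SEVERITY = {"low": 1, "medium": 2, "high": 3, "critical": 4}
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}

def risk_rating(severity: str, likelihood: str) -> str:
    """Multiply severity by likelihood and map the score to a handling tier."""
    score = SEVERITY[severity] * LIKELIHOOD[likelihood]
    if score >= 9:
        return "escalate"        # e.g. high/likely, critical/likely
    if score >= 4:
        return "senior review"   # mid-band exposure
    return "standard handling"
```

The escalation criteria in the skill itself are qualitative; the matrix just makes the severity-times-likelihood intuition concrete.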
Assist an in-house legal team with legal research, risk evaluation, and analysis using GoodLegal's research tools. Do not provide legal advice — flag that analyses must be reviewed by qualified legal counsel.
Use this skill whenever a lawyer or mediator needs help analyzing a dispute for mediation purposes. This includes: reviewing case materials (pleadings, contracts, correspondence, evidence) to identify issues in dispute, summarizing each party's position and interests, conducting legal analysis of the key issues, proposing mediation strategies or settlement directions, and preparing for mediation sessions. Trigger this skill when the user mentions 'mediation', 'dispute analysis', 'settlement', 'dispute resolution', 'identify issues in dispute', 'party positions', 'mediation brief', 'case analysis for mediation', 'ADR', 'mediation preparation', 'caucus strategy', 'settlement options', or any request to analyze a conflict between two or more parties with the goal of finding resolution. Also trigger when the user uploads case files and asks for a structured breakdown of who wants what, what the core disagreements are, or how the case might settle. Even if the user doesn't explicitly say 'mediation', trigger when the context involves analyzing opposing positions in a dispute with a resolution-oriented (rather than litigation-oriented) goal.
Guide to review incoming one-way (unilateral) commercial NDAs in a jurisdiction-agnostic way, from either a Recipient or Discloser perspective (user-selected), producing a clause-by-clause issue log with preferred redlines, fallbacks, rationales, owners, and deadlines.
Screen incoming NDAs and classify them as GREEN (standard), YELLOW (needs review), or RED (significant issues). Use when a new NDA comes in from sales or business development, when assessing NDA risk level, or when deciding whether an NDA needs full counsel review.
NIL (Name, Image, and Likeness) contract analysis for NCAA student-athletes from the athlete's perspective. Use when user says 'review this NIL contract', 'analyze this NIL deal', 'check this athlete agreement', 'review my NIL agreement', or uploads a PDF NIL contract for review. Identifies red flags, missing protections, and compliance issues. Produces a structured review memorandum with negotiation positions. Do NOT use for general contract review, employment agreements, non-NIL endorsements, or brand-side deal analysis.
Adversarial verification for AI-generated legal content with systematic fact-checking, source validation, and quality control. Use when User requests verification of legal documents, fact-checking of regulatory content, red team review, or quality assurance before distribution to clients/stakeholders. Provides structured verification reports with severity-categorized errors, verified sources, and distribution readiness assessment.
Guide to analyze multiple documents (PDF, DOCX) against user-defined columns and produce a structured Excel output with citations. Use when the user wants to: (1) Extract specific information from multiple documents into a table, (2) Compare clauses or provisions across contracts, (3) Create a document review matrix with source citations. Triggers on: 'tabular review', 'document matrix', 'extract from documents', 'compare across documents', 'review multiple contracts'.
Template-derived notes on the initialization steps. Not normally executed in `my-nook`, but kept as a reference for what the initial bootstrap was expected to set up.
Use when docs/solutions/ learnings may be stale — after refactors, migrations, or dependency upgrades, when a retrieved learning feels outdated or contradicts a recently solved problem, when pattern docs no longer reflect current code, or when reviewing docs/solutions/ for accuracy.
Analyze PRs authored by the current user across all tracked repos, extract human reviewer feedback, identify improvement patterns, and produce an HTML report.
Review code changes for correctness, performance, and consistency with project conventions.
Autonomous PR review — reads diff, cross-references knowledge base, posts inline comments, and leaves an overall verdict.
Stage, commit, push, and create GitHub PRs for the current branch (always targeting main), then automatically cherry-pick to release/stable.
Sync ADO work items and ICM incidents to the persistent knowledge log. Invoked automatically by /review, /workitem, and /pac-cli-update when knowledge is stale.
Commit changes, push to remote, and create a pull request. Use for completing features or fixes ready for review.
Search, summarize, and synthesize economics literature
Full autonomous research workflow — brainstorm, plan, implement, review, and document
Run multi-agent econometric review on estimation code, identification arguments, and research artifacts
Run holistic pedagogical review on lecture slides. Checks narrative arc, student prerequisites, worked examples, notation clarity, and deck pacing.
Multi-agent slide review (visual, pedagogy, proofreading). Use for comprehensive quality check before milestones.
Download, split, and deeply read academic PDFs. Use when asked to read, review, or summarize an academic paper. Splits PDFs into 4-page chunks, reads them in small batches, and produces structured reading notes — avoiding context window crashes and shallow comprehension.
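The 4-page chunking step above reduces to index arithmetic. A minimal sketch of just the range computation — actually writing each chunk out would use a PDF library (e.g. pypdf's `PdfReader`/`PdfWriter`), which is an assumption about tooling, not part of this sketch:

```python
# Compute consecutive 4-page index ranges for a PDF with `num_pages` pages.
# A PDF library (pypdf is one assumption) would then copy each range of
# pages into its own chunk file for batched reading.

def chunk_ranges(num_pages: int, chunk_size: int = 4) -> list[range]:
    """Return page-index ranges covering 0..num_pages-1 in order."""
    return [range(start, min(start + chunk_size, num_pages))
            for start in range(0, num_pages, chunk_size)]

chunks = chunk_ranges(10)  # a 10-page paper
```

Reading chunks in small batches keeps each read well under the context window while preserving page order.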
Structured literature search and synthesis with citation extraction and gap identification.
Run the proofreading protocol on manuscript files. Checks grammar, typos, overflow, consistency, and academic writing quality. Produces a report without editing files.
Run the Julia code review protocol on Julia scripts. Checks code quality, type stability, parallel computing patterns, and scientific computing standards. Produces a report without editing files.