Foundational Python best practices, idioms, and code quality fundamentals - Brought to you by microsoft/hve-core
Skills (SKILL.md) are configuration files that add specific capabilities to AI agents (Claude Code, Cursor, Codex, and others).
Run the mandatory pre-commit checks before committing code. Includes lint, type checking, and unit tests. MUST be run before every commit.
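A gate like this can be sketched as a small script that runs each check in order and stops at the first failure. The specific tool choices below (ruff, mypy, pytest) are assumptions for illustration; the actual skill may invoke different commands.

```python
import subprocess

# Hypothetical tool choices for the three mandatory checks;
# substitute whatever the project actually mandates.
CHECKS = [
    ("lint", ["ruff", "check", "."]),
    ("type check", ["mypy", "."]),
    ("unit tests", ["pytest", "-q"]),
]

def run_checks() -> int:
    """Run each pre-commit check; return the first non-zero exit code."""
    for name, cmd in CHECKS:
        print(f"Running {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"FAILED: {name} (exit {result.returncode})")
            return result.returncode
    print("All pre-commit checks passed.")
    return 0
```

Wiring this into a pre-commit hook (or calling it before `git commit`) enforces the "MUST be run before every commit" rule mechanically rather than by convention.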
Iterative review loop using any lens agent against any target (plan, code, design). Use when you want to run a focused review cycle: spawn a fresh reviewer subagent with a specific lens, present findings and remediation to the user, apply fixes with user guidance, and repeat until clean.
Review code changes for correctness, performance, and consistency with project conventions.
Framework code review checklist - correctness, performance, concurrency, design, and style.
Create a custom agent (.agent.md) for a specific job.
Squad branching model: dev-first workflow with insiders preview channel
Generate structured code review for staged files (git staged changes) using Claude Code agents. Provides feedback before committing to catch issues early.
Use when the /systems-design mode is active. 9-phase structured design methodology -- problem framing, system classification, constraints, candidate architectures, tradeoff analysis, risk review, refinement, migration planning, and documentation. Governs conversation flow, delegation patterns, and user validation checkpoints.
Use when the /systems-design-review mode is active. 7-step design review methodology -- understand the design, classify the system, evaluate against codebase, adversarial analysis, tradeoff validation, synthesis, and action items. Governs conversation flow, delegation patterns, and user validation checkpoints.
Adversarial review of a system design from 6 critical perspectives -- SRE, security, staff engineer, finance, operator, and developer advocate. Produces a unified risk assessment. Use for INTERACTIVE on-demand reviews during a design conversation (/adversarial-review). For RECIPE-DRIVEN reviews (where prior step context is needed), use the systems-design-critic agent instead.
Browse and discover SharePoint sites, lists, document libraries, and file contents — navigate your SharePoint world without leaving the CLI.
Create a pull request using the repository PR template. Use when asked to: create PR, open PR, push and create PR, submit PR, open pull request, send changes for review.
Perform a multi-perspective code review of rego-cpp changes. Use when: reviewing a release, auditing a branch diff, evaluating a PR, or performing a pre-merge code review. Launches four parallel constructive review subagents (Security, Performance, Usability, Conservative), synthesises findings, then runs a sequential adversarial gap-analysis pass. Verifies key findings, produces a unified report with severity-ranked findings and actionable remediation recommendations.
Generate an executive assessment report from GitHub Quick Review (ghqr) scan data. Produces an executive summary, a dedicated section per validated subject with all findings, and a prioritized 30/60/90-day remediation plan. Use when the user asks for a report, executive summary, best practices posture overview, or a remediation roadmap from ghqr scan results.
Complete workflow for creating and managing ADRs (Architecture Decision Records). Includes duplicate checking, status determination, Microsoft documentation lookup, Azure cost estimates, and completeness validation. Use when you need to create or update ADRs during planning. Do not use for decisions that don't need an ADR (use conventions in plan.md).
Socratic guide for engineering practice decisions (DevOps, SRE, CI/CD, branch strategy, observability, IaC). Guides the agent to ask questions based on project context, rather than prescribing solutions. Use during planning of initial features. Do not use for application architecture decisions (use devsquad.plan) or for pipeline/infra creation (use devsquad.implement).
Iterative review loop with subagent reviewer. Use when: reviewing code changes, reviewing plans, auditing implementations, validating fixes, running code review, performing quality checks, or when /review is invoked. Spawns a fresh subagent to review a user-specified target, reports severity-tagged findings, applies approved fixes, and repeats until the user is satisfied.
This document outlines the workflow for reviewing and managing dependency update Pull Requests in the `google/osv.dev` repository.
Capsem marketing website (capsem.org). Use when editing marketing copy, adding sections, working with components, or changing the site theme. Covers site structure, data-driven content, component library, Tailwind theme, and dev workflow.
Capsem system architecture -- service daemon, per-VM processes, CLI, MCP server, guest agent, vsock, network proxy. Use when you need to understand the system design to write code, review changes, write documentation, or debug cross-component issues. Covers the service architecture, IPC protocols, vsock ports, storage modes, network policy, MITM proxy, and key source files.
Fetches comments and reviews from the current GitHub Pull Request and formats them as Markdown.
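The formatting half of that workflow can be sketched as a pure function over comment objects shaped like the GitHub REST API's response (`user.login`, `body`, `created_at` are real API fields; how the comments are fetched is left out here):

```python
def comments_to_markdown(comments: list[dict]) -> str:
    """Render GitHub PR comments into a Markdown list with quoted bodies."""
    lines = ["## Pull Request Comments", ""]
    for c in comments:
        author = c.get("user", {}).get("login", "unknown")
        created = c.get("created_at", "")
        lines.append(f"- **{author}** ({created}):")
        # Quote every line of the comment body as a Markdown blockquote.
        for body_line in c.get("body", "").strip().splitlines():
            lines.append(f"  > {body_line}")
        lines.append("")
    return "\n".join(lines)
```

A fetcher (for example `gh api repos/{owner}/{repo}/issues/{pr}/comments`) would supply the list; the function itself only handles the Markdown rendering.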
Internal guidance for composing Codex and GPT-5.4 prompts for coding, review, diagnosis, and research tasks inside the Codex Claude Code plugin
Score a repository's agentic legibility from repo-visible evidence only. Use when Codex needs to audit how easy a codebase is for coding agents to discover, bootstrap, validate, and navigate, especially for harness-engineering reviews, developer-experience audits, repo cleanup, or before/after comparisons after improving docs, tooling, or architectural constraints.
Babysit a GitHub pull request after creation by continuously polling review comments, CI checks/workflow runs, and mergeability state until the PR is merged/closed or user help is required. Diagnose failures, retry likely flaky failures up to 3 times, auto-fix/push branch-related issues when appropriate, and keep watching open PRs so fresh review feedback is surfaced promptly. Use when the user asks Codex to monitor a PR, watch CI, handle review comments, or keep an eye on failures and feedback on an open PR.
Use when you have a written implementation plan to execute in a separate session with review checkpoints
Internal guidance for presenting Codex helper output back to the user
Run a final code review on a pull request
Create the required PR-ready summary block, branch suggestion, title, and draft description for openai-agents-python. Use in the final handoff after moderate-or-larger changes to runtime code, tests, examples, build/test configuration, or docs with behavior impact; skip only for trivial or conversation-only tasks, repo-meta/doc-only tasks without behavior impact, or when the user explicitly says not to include the PR draft block.
Validate changesets in openai-agents-js using LLM judgment against git diffs (including uncommitted local changes). Use when packages/ or .changeset/ are modified, or when verifying PR changeset compliance and bump level.
Perform a release-readiness review by locating the previous release tag from remote tags and auditing the diff (e.g., v1.2.3...<commit>) for breaking changes, regressions, improvement opportunities, and risks before releasing openai-agents-js.
Triage and orient GitHub repository, pull request, and issue work through the connected GitHub app. Use when the user asks for general GitHub help, wants PR or issue summaries, or needs repository context before choosing a more specific GitHub workflow.
Test authoring guidance
Model visible context
Use when a user asks to debug or fix failing GitHub PR checks that run in GitHub Actions. Use the GitHub app from this plugin for PR metadata and patch context, and use `gh` for Actions check and log inspection before implementing any approved fix.
Build and troubleshoot Box integrations for uploads, folders, folder listings, downloads and previews, shared links, collaborations, search, metadata, event-driven automations, and Box AI retrieval flows. Use when Codex needs to add Box APIs or SDKs to an app, wire Box-backed document workflows, organize or share content, react to new files, or fetch Box content for search, summarization, extraction, or question-answering.
Cloudflare Workers CLI for deploying, developing, and managing Workers, KV, R2, D1, Vectorize, Hyperdrive, Workers AI, Containers, Queues, Workflows, Pipelines, and Secrets Store. Load before running wrangler commands to ensure correct syntax and best practices. Biases towards retrieval from Cloudflare docs over pre-trained knowledge.
Use this skill for Hugging Face Dataset Viewer API workflows that fetch subset/split metadata, paginate rows, search text, apply filters, download parquet URLs, and read size or statistics.
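The row-pagination part of that workflow can be sketched as URL construction against the Dataset Viewer API's `/rows` endpoint (the base URL and `dataset`/`config`/`split`/`offset`/`length` parameters follow the public docs, but treat the exact shapes as assumptions to verify):

```python
from urllib.parse import urlencode

# Public base URL of the Hugging Face Dataset Viewer API.
BASE = "https://datasets-server.huggingface.co"

def rows_url(dataset: str, config: str, split: str,
             offset: int = 0, length: int = 100) -> str:
    """Build a paginated /rows request URL for one subset/split."""
    query = urlencode({
        "dataset": dataset,
        "config": config,
        "split": split,
        "offset": offset,
        "length": length,
    })
    return f"{BASE}/rows?{query}"

# Example: request the first page of rows for a hypothetical dataset.
url = rows_url("ibm/duorc", "SelfRC", "train")
```

Incrementing `offset` by `length` on each request walks the split page by page; the other endpoints mentioned (splits metadata, search, filter, parquet URLs, size/statistics) follow the same query-parameter pattern.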
Build sandboxed applications for secure code execution. Load when building AI code execution, code interpreters, CI/CD systems, interactive dev environments, or executing untrusted code. Covers Sandbox SDK lifecycle, commands, files, code interpreter, and preview URLs. Biases towards retrieval from Cloudflare docs over pre-trained knowledge.
Refactor and review SwiftUI view files with strong defaults for small dedicated subviews, MV-over-MVVM data flow, stable view trees, explicit dependency injection, and correct Observation usage. Use when cleaning up a SwiftUI view, splitting long bodies, removing inline actions or side effects, reducing computed `some View` helpers, or standardizing `@Observable` and view model initialization patterns.
Vercel deployment and CI/CD expert guidance. Use when deploying, promoting, rolling back, inspecting deployments, building with --prebuilt, or configuring CI workflow files for Vercel.
Publish local changes to GitHub by confirming scope, committing intentionally, pushing the branch, and opening a draft PR through the GitHub app from this plugin, with `gh` used only as a fallback where connector coverage is insufficient.
Change size guidance (800 lines)
Vercel Agent guidance — AI-powered code review, incident investigation, and SDK installation. Automates PR analysis and anomaly debugging. Use when configuring or understanding Vercel's AI development tools.
Deploy projects to Netlify with the Netlify CLI. Use when the user wants to link a repo, validate deploy settings, run a deploy, or choose between preview and production flows.
Triage Outlook mail, extract tasks, clean up subscriptions, draft responses, and route shared mailbox work. Use when the user asks to inspect an Outlook inbox or thread, summarize open actions and deadlines, clean up newsletters, draft replies or forwards, organize mailbox follow-up work, or act on a delegated/shared Outlook mailbox.
Postgres performance optimization and best practices from Supabase. Use this skill when writing, reviewing, or optimizing Postgres queries, schema designs, or database configurations.
Audit and improve SwiftUI runtime performance from code review and architecture. Use for requests to diagnose slow rendering, janky scrolling, high CPU/memory usage, excessive view updates, or layout thrash in SwiftUI apps, and to provide guidance for user-run Instruments profiling when code review alone is insufficient.
Manage scheduling and conflicts in connected Google Calendar data. Use when the user wants to inspect calendars, compare availability, review conflicts, find a meeting room, review event notes or attachments, add or adjust reminders, place temporary holds, or draft exact create, update, reschedule, or cancel changes with timezone-aware details.