Testing WebAssembly modules including compilation verification, memory management, interop testing, and performance benchmarking of WASM components.
Skills (SKILL.md) are configuration files that add specific capabilities to AI agents such as Claude Code, Cursor, and Codex.
Automated WCAG 2.2 AA/AAA compliance testing with axe-core, Pa11y, and manual testing patterns for keyboard navigation, screen readers, and color contrast.
Testing and monitoring Core Web Vitals (LCP, FID, CLS, INP, TTFB) to ensure web performance meets Google search ranking thresholds.
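The thresholds behind those metrics can be sketched as a small classifier. The breakpoints below are Google's published "good" / "needs improvement" / "poor" boundaries; the helper name is illustrative, not part of any library:

```python
# Google's published Core Web Vitals breakpoints:
# (good_max, poor_min) per metric. Milliseconds unless noted.
THRESHOLDS = {
    "LCP": (2500, 4000),   # Largest Contentful Paint
    "FID": (100, 300),     # First Input Delay (superseded by INP)
    "CLS": (0.1, 0.25),    # Cumulative Layout Shift (unitless score)
    "INP": (200, 500),     # Interaction to Next Paint
    "TTFB": (800, 1800),   # Time to First Byte
}

def rate_vital(metric: str, value: float) -> str:
    """Classify one sample as 'good', 'needs-improvement', or 'poor'."""
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= poor:
        return "needs-improvement"
    return "poor"
```

A monitoring check would typically apply this to the 75th-percentile value across page loads, which is the aggregation Google uses for ranking.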
Comprehensive WebDriverIO (WDIO) test automation skill for generating reliable end-to-end browser tests in JavaScript and TypeScript with Page Object Model, custom commands, and advanced synchronization strategies.
Testing webhook implementations including delivery verification, retry logic, signature validation, idempotency, and failure handling patterns.
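Signature validation is the piece of webhook testing most often gotten wrong. A minimal sketch of the common HMAC-SHA256 scheme, using a constant-time comparison (the function name and secret are illustrative; real providers vary in header names and payload canonicalization):

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, payload: bytes, signature_hex: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw request body and compare it
    to the provider-supplied hex signature in constant time."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels that == would leak.
    return hmac.compare_digest(expected, signature_hex)
```

A test suite would assert both the accept path (valid signature) and the reject path (tampered body or forged signature), since the reject path is what actually protects you.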
WebSocket testing including connection lifecycle, reconnection logic, message ordering, backpressure handling, and binary frame testing.
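Reconnection logic is usually built on exponential backoff with jitter, which is easy to unit-test in isolation. A sketch under assumed defaults (base delay 0.5 s, cap 30 s; names are illustrative):

```python
import random

def reconnect_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Exponential backoff with full jitter: the ceiling doubles each
    attempt up to `cap`, and the actual delay is drawn uniformly below it,
    so simultaneous clients don't reconnect in lockstep."""
    upper = min(cap, base * (2 ** attempt))
    return random.uniform(0.0, upper)
```

Tests can pin the bounds (delay never exceeds the cap, never goes negative) without mocking the randomness.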
API simulation and service virtualization with WireMock for HTTP stubbing, request matching, stateful behavior, and fault injection testing.
iOS UI testing with XCUITest framework covering element queries, gesture simulation, accessibility testing, and Xcode test plan configuration.
Cross-site scripting vulnerability testing covering reflected, stored, and DOM-based XSS with sanitization validation and CSP bypass detection.
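Sanitization validation at its simplest means asserting that user input can no longer form markup after encoding. A sketch using stdlib HTML escaping (real applications layer context-aware encoding and CSP on top of this):

```python
import html

def sanitize(user_input: str) -> str:
    """Escape HTML-special characters so input renders as text, not markup."""
    return html.escape(user_input, quote=True)
```

A reflected-XSS test then checks that canonical payloads are neutralized rather than echoed verbatim into the page.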
Writing and testing YARA rules for malware detection, threat hunting, and file classification with rule validation and false-positive rate testing.
Automated web application security scanning using OWASP ZAP for finding XSS, SQL injection, CSRF, and other OWASP Top 10 vulnerabilities.
Guides iterative problem-solving by enforcing a search-read-verify-deliver workflow, structured 7-step debugging, evidence-based code review, and failure-escalation protocols across coding, product design, and team collaboration scenarios. Use when tackling complex development tasks, debugging persistent errors (2+ failures), performing thorough code reviews, or coordinating multi-step project delivery where verification at each stage is critical.
Collaborative problem-solving protocols. Write technical specifications (spec, or alspec), create implementation plans (plan, or alplan), or use Align-and-Do Protocol (AAD). Also generates PR/MR descriptions (aldescription) and code review reports (alreview).
Use when reviewing Go code or checking code against community style standards. Also use proactively before submitting a Go PR or when reviewing any Go code changes, even if the user doesn't explicitly request a style review. Does not cover language-specific syntax — delegates to specialized skills.
Use when writing or reviewing documentation for Go packages, types, functions, or methods. Also use proactively when creating new exported types, functions, or packages, even if the user doesn't explicitly ask about documentation. Does not cover code comments for non-exported symbols (see go-style-core).
Network content quality review, scoring, and adjusted distribution. Closing the feedback loop is critical.
Use when the user wants to generate, rewrite, validate, or prepare Git commit messages, pull request titles, or pull request descriptions.
Conventional Commits v1.0.0 branch naming, worktree naming, and commit message standards for GitHub and GitLab projects. Use when creating branches, naming worktrees, writing commits, generating commit messages, reviewing branch conventions, or setting up changelog automation. Apply when your project needs consistent git history, SemVer-driven releases, parseable changelog generation, or automatic issue closing. Trigger when the user asks how to name a worktree, create a git worktree, or organize worktrees alongside branches.
Deep research skill — broad parallel web searches, multi-source validation, confidence tracking, cited Markdown report. Supports 11 research types: market (TAM/SAM, segments, pricing, trends), domain (industry structure, ecosystem, regulatory landscape), technical (architecture, tools, benchmarks), competitive (competitor teardown, positioning, win/loss), product (feature analysis, reviews, roadmap signals), academic (literature survey, citation networks, key authors), person/org (due diligence on a company or public figure), financial (funding rounds, valuation multiples, revenue signals), legal (IP, patents, litigation, compliance), trend (emerging signals, foresight, scenario mapping), community (ecosystem health, key voices, governance, fragmentation). Use when asked to: 'research <topic>', 'deep dive on X', 'analyze the landscape', 'competitive analysis', 'compare these options', 'who are the players in Z', 'literature review', 'background on Y', 'what papers exist on X', 'product teardown', 'technology evaluation', 'regulatory overview', 'funding landscape', 'what trends are emerging in X', 'patent landscape', 'community health', or any request requiring scanning many sources and producing a cited written analysis. Apply whenever the deliverable is a thorough, sourced report rather than a quick answer. Trigger even when phrased casually: 'look into X', 'what's the deal with Y', 'dig into Z', 'I need to understand the space', 'catch me up on X'.
Remove AI-writing patterns from French text and inject voice, personality, and soul. Use when editing, reviewing, rewriting, or cleaning up French content that reads like ChatGPT/Claude output. Humanize, humanise, déslopifier. Detects and fixes 27 patterns: AI vocabulary overuse (crucial, essentiel, notamment, par ailleurs, dans le paysage), anglicisms from English-first models (faire du sens, adresser un problème), copula avoidance, formulaic openings (À l'ère de, Dans le paysage actuel), superficial participle analyses (-ant), em dash overuse, redundant adjective doublets, rule of three, sycophantic tone, typographic tells (curly quotes instead of guillemets). Trigger on: humaniser, déslopifier, rendre plus humain, nettoyer le texte IA, enlever le slop, réécrire pour que ça sonne humain, make it sound human.
Review the {{PROVIDER_KEBAB}}-webhooks skill that was generated. Your task is to validate the content accuracy against {{PROVIDER}}'s official documentation.
Perform a focused SEO audit on JavaScript concept pages to maximize search visibility, featured snippet optimization, and ranking potential.
Instar-specific development skill used by the instar-developing agent (Echo, or any agent assigned instar-dev responsibilities). Wraps /build with mandatory side-effects review, signal-vs-authority principle check, and artifact generation. Structural enforcement via pre-commit/pre-push hooks — the instar repo refuses commits and pushes that didn't come through this skill. NOT a user-facing skill — end users should never invoke it.
Iteratively review an instar-development spec with multi-angle internal reviewers (security, scalability, adversarial, integration) and cross-model external reviewers (GPT, Gemini, Grok) until convergence, then produce a comprehensive ELI10 convergence report. Output is a spec tagged review-convergence — one of the two tags /instar-dev requires before it will touch instar source. NOT user-invocable; run by the instar-developing agent before any spec-driven /instar-dev work.
**Version / slug:** `instar-dev-skill`
**Version / slug:** `pr-gate-phase-a-commit-6-rollback-skill`
B2B/B2C marketing operating system for AI startups. Covers onboarding, customer segmentation, competitor boards, X/Twitter growth, cold email, content planning, and weekly KPI review.
Implements Syncfusion .NET MAUI Rating (SfRating) control. Use when implementing star ratings, review systems, feedback mechanisms, or product ratings in MAUI apps. Covers rating control configuration, star rating, half-star rating, custom rating shapes, rating precision, and rating events.
Take a spec produced by `brainstorm-beagle` and close its remaining gaps — both the explicit Open Questions and the latent ones the self-review missed — by researching, proposing answers, and rewriting the spec.
Turn a sharp research question into cited, gap-flagged findings by delegating to parallel web-search subagents.
Respond to review comments on a PR after evaluation and fixes
Schema for tracking code review outcomes to enable feedback-driven skill improvement. Use when logging review results or analyzing review quality.
Detects common LLM coding agent artifacts by spawning 4 parallel subagents
Review implementation plans for parallelization, TDD, types, libraries, and security before execution
Analyzes feedback logs to identify patterns and suggest improvements to review skills. Use when you have accumulated feedback data and want to improve review accuracy.
Support for both regular and beta releases.
Expert guidance for GitHub CLI (gh) operations and workflows. Use this skill for command-line GitHub operations including pull request management, issue tracking, repository operations, and workflow automation.
Find high-impact defects in changed code with evidence. Prioritize security, correctness, and regressions over style nits.
Use OpenAI Codex CLI as a **read-only oracle** — planning, review, and analysis only. Codex provides its perspective; you synthesize and present results to the user.
**Deal Parameters**
- **Announcement type:** Product launch
**Product:** AI-powered code review tool for developers
Run blameless post-mortems and retrospectives: timeline, root causes, action tracker.
- **Review type:** Incident (Production Outage)
**Incident ID:** INC-2026-0316-001
Run a decision process end-to-end: RAPID/DACI roles, options matrix, decision log, comms.
Run high-signal design reviews: brief, feedback log, decision record, follow-up plan.
**Project / feature:** Web Onboarding Flow -- New First-Time Admin Setup Experience