Setup consistent code quality and formatting with ESLint and Prettier.
Skills (SKILL.md) are configuration files that add specific capabilities to AI agents such as Claude Code, Cursor, and Codex.
This skill should be used when setting up code quality tooling with ESLint v9 flat config, Prettier formatting, Husky git hooks, lint-staged pre-commit checks, and GitHub Actions CI lint workflow. Apply when initializing linting, adding code formatting, configuring pre-commit hooks, setting up quality gates, or establishing lint CI checks for Next.js or React projects.
Use when configuring ESLint built-in rules, including rule configuration, severity levels, and disabling strategies.
ESP-IDF (Espressif IoT Development Framework) documentation. Covers I2S, GPIO, FreeRTOS, peripherals, and ESP32-S3-specific APIs.
A skill for creating quotations. Generates a quotation from the client name, subject, line items, unit cost, and gross-margin rate; also handles the date, payment terms, and other details.
Assess pilot workflows and deliverables for ethical considerations and policy alignment. Use when evaluating fairness, bias, data privacy, and societal impact to ensure responsible and values-driven pilot execution.
AI and technology ethics review including ethical impact assessment, stakeholder analysis, and responsible innovation frameworks
Use when decisions could affect groups differently and need to anticipate harms/benefits, assess fairness and safety concerns, identify vulnerable populations, propose risk mitigations, define monitoring metrics, or when user mentions ethical review, impact assessment, differential harm, safety analysis, vulnerable groups, bias audit, or responsible AI/tech.
Use when designing data pipelines, choosing between ETL and ELT approaches, or implementing data transformation patterns. Covers modern data pipeline architecture.
Designs and implements Extract-Transform-Load (ETL) pipelines for data processing.
Build automated ETL (Extract-Transform-Load) pipelines for construction data. Process PDFs, Excel, BIM exports. Generate reports, dashboards, and integrate with other systems. Orchestrate with Airflow or n8n.
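The extract-transform-load pattern named above can be sketched in a few lines. This is a minimal illustration, not the skill's implementation: CSV in and JSON out stand in for the PDF/Excel/BIM sources it mentions, and the `item`/`cost` field names are hypothetical.

```python
import csv
import json
from pathlib import Path


def extract(csv_path: Path) -> list[dict]:
    """Extract: read raw rows from a source file (here, CSV)."""
    with csv_path.open(newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))


def transform(rows: list[dict]) -> list[dict]:
    """Transform: normalize fields and drop incomplete records."""
    out = []
    for row in rows:
        if not row.get("cost"):
            continue  # skip rows missing required data
        out.append({
            "item": row["item"].strip().lower(),
            "cost": float(row["cost"]),
        })
    return out


def load(rows: list[dict], dest: Path) -> None:
    """Load: write cleaned records to the destination (here, JSON)."""
    dest.write_text(json.dumps(rows, indent=2), encoding="utf-8")
```

In a real pipeline each stage would be a task in an orchestrator such as Airflow or n8n, so failures can be retried per stage.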
View investment accounts, check portfolio, monitor positions, and research investments on E*TRADE
Analyze evaluation baseline results, identify failure patterns, and generate actionable insights. Use after running eval baselines or when user asks to analyze eval results, check benchmarks, investigate failures, or understand what's failing.
Find AILANG vs Python eval gaps and improve prompts/language. Use when user says 'find eval gaps', 'analyze benchmark failures', 'close Python-AILANG gap', or after running evals.
eval-recipes-runner
Review diff classification cases to determine if the LLM correctly categorized hunks, identified change type, and told a coherent story.
EvalKit is a conversational evaluation framework for AI agents that guides you through creating robust evaluations using the Strands Evals SDK. Through natural conversation, you can plan evaluations, generate test data, execute evaluations, and analyze results.
Use when the user references architecture principles, at the start of a fresh conversation involving design work, before creating any requirements/design/implementation documents, or when reviewing for compliance. Grounds problem framing and solution-making in all 9 principle categories, using citation-manager to extract full context.
Measure model performance on test datasets. Use when assessing accuracy, precision, recall, and other metrics.
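For reference, the precision and recall metrics named above reduce to counting true/false positives and false negatives; this sketch assumes binary labels with 1 as the positive class.

```python
def precision_recall(y_true: list[int], y_pred: list[int]) -> tuple[float, float]:
    """Compute precision and recall for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of predicted positives, how many were right
    recall = tp / (tp + fn) if tp + fn else 0.0     # of actual positives, how many were found
    return precision, recall
```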
Evaluate LLM systems using automated metrics, LLM-as-judge, and benchmarks. Use when testing prompt quality, validating RAG pipelines, measuring safety (hallucinations, bias), or comparing models for production deployment.
Create a Technology Evaluation Pack (problem framing, options matrix, build vs buy, pilot plan, risk review, decision memo). Use for evaluating new tech, emerging technology, AI tools, vendor selection, and tech stack decisions.
Two-stage paper screening - abstract scoring then deep dive for specific data extraction
Evaluate RAG systems with hit rate, MRR, faithfulness metrics and compare retrieval strategies. Use when testing retrieval quality, generating evaluation datasets, comparing embeddings or retrievers, A/B testing, or measuring production RAG performance.
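Hit rate and MRR, the two retrieval metrics named above, can be computed directly from ranked result lists. A minimal sketch, assuming one relevant document ID per query (real evaluations often allow several):

```python
def hit_rate(retrieved: list[list[str]], relevant: list[str], k: int = 5) -> float:
    """Fraction of queries whose relevant doc appears in the top-k results."""
    hits = sum(1 for docs, rel in zip(retrieved, relevant) if rel in docs[:k])
    return hits / len(relevant)


def mrr(retrieved: list[list[str]], relevant: list[str]) -> float:
    """Mean reciprocal rank of the first relevant doc per query (0 if absent)."""
    total = 0.0
    for docs, rel in zip(retrieved, relevant):
        if rel in docs:
            total += 1.0 / (docs.index(rel) + 1)  # rank is 1-based
    return total / len(relevant)
```

Comparing embeddings or retrievers then reduces to running both over the same query set and comparing these scores.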
Evaluate skills by executing them across sonnet, opus, and haiku models using sub-agents. Use when testing if a skill works correctly, comparing model performance, or finding the cheapest compatible model. Returns numeric scores (0-100) to differentiate model capabilities.
開発ツール、フレームワーク、ライブラリの評価と比較を支援します。PoC計画、複数候補の比較分析、意思決定フレームワークを提供します。技術選定、ツール導入の判断が必要な場合に使用してください。
Build systematic evaluation frameworks for LLM applications.
Systematic content evaluation framework progressing through Critique → Reinforcement → Risk Analysis → Growth. Use when reviewing writing, arguments, proposals, code documentation, or any content requiring rigorous multi-dimensional assessment. Supports interactive guided mode or autonomous full-report mode, with output as markdown report, structured checklist, or inline revision suggestions. Triggers on requests to evaluate, critique, improve, strengthen, or review content quality.
Evaluate TappsCodingAgents framework effectiveness and provide continuous improvement recommendations. Use for analyzing usage patterns, workflow adherence, and code quality metrics.
Comprehensive EVE Online project management and ESI integration toolkit. Use when updating, auditing, or integrating ESI into EVE Online projects like EVE_Rebellion, EVE_Gatekeeper, EVE_Ships, or any EVE-related development. Triggers on project updates, ESI integration, compliance checking, asset management, or multi-project coordination.
Use when documenting event objectives, requirements, and alignment across
Structure systems around asynchronous, event-based communication to decouple producers and consumers for improved scalability and resilience. Use when building loosely coupled systems with asynchronous message-based communication.
Use when generating branded QR codes for ProductTank SF events - speaker LinkedIn profiles, sponsor websites, or Slack join links. Handles single/bulk generation, correct logo mapping, GDrive upload, and mandatory test-scanning.
Create new event scraping scripts for websites. Use when adding a new event source to the Asheville Event Feed. ALWAYS start by detecting the CMS/platform and trying known API endpoints first. Browser scraping is NOT supported (Vercel limitation). Handles API-based, HTML/JSON-LD, and hybrid patterns with comprehensive testing workflows.
Use event sourcing to build auditable, replayable UI state systems compatible with concurrent rendering.
Event-driven design conventions: event envelope, naming, versioning, schema evolution rules, idempotency, ordering/partitioning, retry and dead-letter handling
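The envelope conventions listed above can be made concrete with a small sketch. The field names here are illustrative assumptions, not a standard; the point is that identity, type, version, timestamp, and partition key travel as stable metadata around the payload.

```python
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any


@dataclass(frozen=True)
class EventEnvelope:
    """Minimal event envelope: stable metadata wrapped around a payload."""
    event_type: str      # dot-separated name, e.g. "order.created" (hypothetical)
    schema_version: int  # bump on breaking payload-schema changes
    payload: dict[str, Any]
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))  # consumers dedupe on this
    occurred_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    partition_key: str = ""  # same key -> same partition -> ordered delivery
```

Consumers achieve idempotency by deduplicating on `event_id`, while `partition_key` (e.g. an entity ID) preserves per-entity ordering; failed deliveries are retried and eventually routed to a dead-letter queue.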
This skill should be used when reviewing or editing copy to ensure adherence to Every's style guide. It provides a systematic line-by-line review process for grammar, punctuation, mechanics, and style guide compliance.
Evidently AI skill for data drift detection, model performance monitoring, target drift analysis, and automated reporting for ML systems in production.
Generate comprehensive ecosystem progress reports showing skills built, efficiency gains, quality metrics, learnings captured, and system evolution. Task-based reporting operations for status reports, efficiency analysis, quality summaries, and evolution documentation. Use when reporting ecosystem progress, communicating status to stakeholders, documenting achievements, or creating milestone reports.
Use when spec and code diverge: AI analyzes the mismatches, recommends whether to update the spec or fix the code (with reasoning), and handles evolution under user control or via auto-updates.
Search for relevant code snippets, examples, and documentation from billions of GitHub repositories, documentation pages, and Stack Overflow posts. Use this skill when coding tasks require real working code examples, API usage patterns, framework setup instructions, or library implementation details to eliminate hallucinations and provide accurate, token-efficient context.
Retrieve and extract content from URLs with AI-powered summarization and structured data extraction. Use for scraping web pages, extracting specific information, summarizing articles, or crawling websites with subpages.
Web research using Exa AI search engine. Use when: user needs web search, finding articles, research papers, news, company info, or similar content. Triggers on: 'search for', 'find articles about', 'research', 'what's the latest on', 'find companies like', 'similar to [url]'.
Use Exa for semantic/neural web search. Exa understands context and returns high-quality results. Use this skill when you need to search the web for documentation, research, or any information that requires understanding meaning rather than just keyword matching. NEVER substitute web_search for Exa - they serve completely different purposes.
Uses Exa API for intelligent web searches instead of default WebFetch. Provides up-to-date information with semantic search capabilities.
Uses Exa API for intelligent web searches. Provides up-to-date information with semantic search capabilities.
Process exam/test paper documents from DOCX format into structured markdown. Use when Claude needs to: (1) Extract exam content from Word documents (.docx), (2) Analyze images in exam papers using vision tools, (3) Convert questions to structured markdown with proper image references, (4) Understand question context to match images with appropriate questions, (5) Create organized exam output with YAML frontmatter and sections
Example custom Skill demonstrating template generation and best practices. Use this as a reference when creating your own custom Skills.
Process CSV data files by cleaning, transforming, and analyzing them. Use this when users need to work with CSV files, clean data, or perform basic data analysis tasks.
You are an example skill demonstrating how Claude Code Skills work.
Create detailed implementation plans for software features and refactoring tasks. Use this skill when planning new features, architectural changes, or major refactoring efforts.