dart-flutter-mcp
Skills (SKILL.md) are configuration files that add specific capabilities to AI agents such as Claude Code, Cursor, and Codex.
Dart testing patterns - unit tests, integration tests, CI validation
darwin-godwin-machine
Build production-grade interactive dashboards with Plotly Dash - enterprise features, callbacks, and scalable deployment
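A minimal sketch of the Dash pattern this entry describes: one callback wiring a dropdown to a graph. The sample columns (region, month, sales) are illustrative assumptions, not part of the skill.

```python
import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html, Input, Output

# Toy data standing in for a real source; column names are assumptions.
df = pd.DataFrame({
    "region": ["NA", "NA", "EU", "EU"],
    "month": ["Jan", "Feb", "Jan", "Feb"],
    "sales": [120, 135, 90, 110],
})

app = Dash(__name__)
app.layout = html.Div([
    dcc.Dropdown(sorted(df["region"].unique()), "NA", id="region"),
    dcc.Graph(id="chart"),
])

@app.callback(Output("chart", "figure"), Input("region", "value"))
def update(region):
    # Re-render the line chart whenever the dropdown changes.
    return px.line(df[df["region"] == region], x="month", y="sales")

if __name__ == "__main__":
    app.run(debug=True)
```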
Automatically convert uploaded data (CSV, Excel, JSON) into complete interactive dashboards with zero user input required. Detects patterns in PPC reports, sales data, analytics exports, and business metrics - then generates insights, recommendations, and visualizations instantly. Works seamlessly with CURV design system for on-brand outputs with tabs, funnels, filters, and multi-view layouts.
Trading dashboard P&L visualization with profit tracker integration, win-rate overlays, R-multiples, and configurable settings
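For context on the metrics this entry overlays: an R-multiple is realized P&L divided by the initial risk taken on the trade. A sketch of that arithmetic, with an assumed trade-log layout:

```python
import pandas as pd

# Assumed trade-log layout: realized P&L and initial dollar risk per trade.
trades = pd.DataFrame({
    "pnl":  [150.0, -50.0, 300.0, -100.0],
    "risk": [100.0, 100.0, 150.0, 100.0],
})

trades["r_multiple"] = trades["pnl"] / trades["risk"]  # reward in units of risk
win_rate = (trades["pnl"] > 0).mean()                  # fraction of winning trades
print(f"win rate: {win_rate:.0%}, avg R: {trades['r_multiple'].mean():.2f}")
```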
GPU-accelerated frame extraction for Movie_F dashcam videos. This skill should be used when the user needs to extract frames from Movie_F category dashcam videos placed in the Desktop CARDV folder. Extracts 3 frames per video (BEGIN, MIDDLE, END) using NVIDIA CUDA acceleration with automatic gap analysis, parallel processing, and strict error handling. This is specifically designed for Movie_F category only.
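A hedged sketch of the BEGIN/MIDDLE/END extraction using ffmpeg's CUDA decode path; it assumes ffmpeg and ffprobe on PATH and omits the skill's gap analysis, parallelism, and strict error handling.

```python
import subprocess
from pathlib import Path

def extract_frames(video: Path, out_dir: Path) -> None:
    # Probe the clip duration so the three seek points can be computed.
    duration = float(subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "csv=p=0", str(video)],
        capture_output=True, text=True, check=True,
    ).stdout.strip())
    for label, t in [("BEGIN", 0.0), ("MIDDLE", duration / 2),
                     ("END", max(duration - 1.0, 0.0))]:
        # -hwaccel cuda decodes on the GPU; -ss before -i seeks on the input.
        subprocess.run(
            ["ffmpeg", "-y", "-hwaccel", "cuda", "-ss", f"{t:.2f}",
             "-i", str(video), "-frames:v", "1",
             str(out_dir / f"{video.stem}_{label}.jpg")],
            check=True,
        )
```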
Dynamic Application Security Testing execution and management. Configure and execute OWASP ZAP and Nuclei scans, run authenticated scanning, manage scan policies and scope, correlate findings with SAST results, and generate comprehensive vulnerability reports.
Database access layer guidelines for Quantum Skincare's Prisma-based data-access library. Covers Prisma schema design, DAO patterns, UUID primary keys, PostgreSQL role-based access control (RBAC), migration workflows, type-safe queries, transaction handling, soft deletes, and testing strategies. Use when working with Prisma schema, DAOs, database migrations, or data access patterns in libs/data-access.
Expert in business intelligence, SQL, data visualization, and translating data into actionable business insights.
Inventory available datasets, instrumentation gaps, and data quality considerations for the initiative.
Generates comprehensive data cleaning and preprocessing pipelines using pandas, polars, or PySpark with best practices for handling messy data.
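As a minimal pandas instance of such a pipeline (column names are illustrative assumptions):

```python
import pandas as pd

def clean(df: pd.DataFrame) -> pd.DataFrame:
    # Deduplicate, drop fully empty rows, normalize headers, coerce types.
    return (
        df.drop_duplicates()
          .dropna(how="all")
          .rename(columns=lambda c: c.strip().lower().replace(" ", "_"))
          .assign(amount=lambda d: pd.to_numeric(d["amount"], errors="coerce"))
    )
```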
Clean and standardize vehicle insurance data following established business rules.
Operating model for defining, enforcing, and auditing BI data contracts.
Create, validate, test, and manage data contracts using the Open Data Contract Specification (ODCS) and the datacontract CLI. Use when working with data contracts, ODCS specifications, data quality rules, or when the user mentions datacontract CLI or data contract workflows.
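A hedged sketch of the CLI round-trip this entry mentions, assuming a datacontract.yaml in the working directory; the verb names follow the datacontract CLI's documented commands.

```python
import subprocess

# Lint the contract's structure, then run its quality checks against the data.
for verb in ("lint", "test"):
    subprocess.run(["datacontract", verb, "datacontract.yaml"], check=True)
```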
Data contracts for defining schemas, quality expectations, and SLAs between data producers and consumers
Use this skill when designing or reviewing data pipelines, ETL processes, data warehouses, streaming systems, or any system where data movement, transformation, and quality are primary concerns. Applies data engineering thinking to specifications, designs, and implementations.
Use when exporting data for ad platforms (Google Ads, Meta) or working with project datasets. Documents exact CSV formats for Enhanced Conversions, Customer Match, and project data schemas.
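Customer Match uploads expect emails trimmed, lowercased, and SHA-256 hashed before export; a sketch of that normalization (the "Email" header is an assumption, the exact formats live in the skill's docs):

```python
import hashlib
import pandas as pd

def hash_email(email: str) -> str:
    # Normalize first (trim + lowercase), then hash, per Customer Match rules.
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

df = pd.DataFrame({"Email": ["User@Example.com "]})
df["Email"] = df["Email"].map(hash_email)
df.to_csv("customer_match.csv", index=False)
```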
Procedures and playbooks for responding to data quality incidents, data loss, corruption, and pipeline failures.
Build new data ingestion providers following the FF Analytics registry pattern. This skill should be used when adding new data sources (APIs, files, databases) to the data pipeline. Guides through creating provider packages, registry mappings, loader functions, storage integration, primary key tests, and sampling tools following established patterns.
Provides architectural guidance for data lake design including partitioning strategies, storage layout, schema design, and lakehouse patterns. Activates when users discuss data lake architecture, partitioning, or large-scale data organization.
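A minimal pyarrow sketch of hive-style partitioning; the year/month partition columns are a common choice, not something this skill prescribes.

```python
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({
    "year": [2024, 2024, 2025],
    "month": [1, 2, 1],
    "value": [10.0, 12.5, 9.1],
})
# Writes lake/events/year=2024/month=1/... directories, one file per partition.
pq.write_to_dataset(table, root_path="lake/events",
                    partition_cols=["year", "month"])
```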
Two-layer architecture design: a Bronze Layer (LLM extraction log layer) and a Gold Layer (confirmed data layer). Preserves the history of LLM extraction results and protects human corrections from being overwritten. Provides guidance on implementing extraction jobs, using ExtractionLog, and handling the is_manually_verified flag.
Service-scoped data orchestration for TMNL. Invoke when implementing search, data streams, kernel systems, or Effect-based DAQ. Covers hybrid dispatch (fibers + workers), Atom-as-State pattern, and progressive streaming.
Data mapping patterns for transforming API responses to internal types
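A minimal sketch of that mapping pattern, with assumed field names on both sides:

```python
from dataclasses import dataclass

@dataclass
class User:
    id: int
    display_name: str

def from_api(payload: dict) -> User:
    # Coerce and rename at the boundary so internal code never sees raw keys.
    return User(id=int(payload["userId"]), display_name=payload.get("name", ""))
```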
Expert-level data mesh architecture, domain-oriented ownership, data products, federated governance, and self-serve platforms
You are a database migration expert specializing in safe schema changes and data migrations. Your goal is to ensure migrations are safe, reversible, and won't corrupt production data.
Create safe, reversible database migration scripts with rollback capabilities, data validation, and zero-downtime deployments. Use when changing database schemas, migrating data between systems, or performing large-scale data transformations.
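As one concrete shape of a reversible migration, a hedged Alembic-style sketch (table and column names are assumptions): every upgrade ships with a matching downgrade.

```python
import sqlalchemy as sa
from alembic import op

def upgrade() -> None:
    # Additive, nullable change: safe to roll forward with zero downtime.
    op.add_column("users", sa.Column("last_login", sa.DateTime(), nullable=True))

def downgrade() -> None:
    # Exact inverse, so the migration is reversible.
    op.drop_column("users", "last_login")
```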
Design data models with Pydantic schemas and comprehensive validation rules.
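A minimal sketch of that pattern in Pydantic v2; the Order fields are illustrative assumptions.

```python
from pydantic import BaseModel, Field, field_validator

class Order(BaseModel):
    order_id: str = Field(min_length=1)
    quantity: int = Field(gt=0)
    email: str

    @field_validator("email")
    @classmethod
    def email_has_at(cls, v: str) -> str:
        if "@" not in v:
            raise ValueError("not an email address")
        return v

Order(order_id="A-1", quantity=2, email="a@b.com")  # validates on construction
```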
Expert data engineer for ETL/ELT pipelines, streaming, data warehousing. Activate on: data pipeline, ETL, ELT, data warehouse, Spark, Kafka, Airflow, dbt, data modeling, star schema, streaming data, batch processing, data quality. NOT for: API design (use api-architect), ML training (use ML skills), dashboards (use design skills).
Design and troubleshoot robust data pipelines with comprehensive quality validation, error handling, and monitoring capabilities for bioinformatics and data processing workflows
Monitor and troubleshoot dual-pipeline data collection systems on GCP. This skill should be used when checking pipeline health, viewing logs, diagnosing failures, or monitoring long-running operations for data collection workflows. Supports Cloud Run Jobs (batch pipelines) and VM systemd services (real-time streams).
Process data files through transformation pipelines with validation, cleaning, and export. Use for CSV/Excel/JSON data processing, encoding handling, batch operations, and data transformation workflows.
Data product design patterns with contracts, SLAs, and governance for building self-serve data platforms using Data Mesh principles.
Comprehensive guide to data quality validation, testing frameworks, anomaly detection, and data observability for production data pipelines
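One common shape for such checks is a declarative pandera schema; a hedged sketch with assumed column names and thresholds:

```python
import pandas as pd
import pandera as pa

schema = pa.DataFrameSchema({
    "user_id": pa.Column(int, pa.Check.ge(1), nullable=False),
    "revenue": pa.Column(float, pa.Check.in_range(0, 1e9)),
})

df = pd.DataFrame({"user_id": [1, 2], "revenue": [9.99, 120.0]})
schema.validate(df)  # raises a SchemaError on any violation
```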
Systematic framework for catching data quality issues, query errors, metric calculation problems, and inconsistencies before they affect analysis results.
Set up database replication for high availability and disaster recovery. Use when configuring master-slave replication, multi-master setups, or replication monitoring.
Manage data lifecycle with automated retention and archiving.
Comprehensive data safety auditor for Vue 3 + Pinia + IndexedDB + PouchDB applications. Detects data loss risks, sync issues, race conditions, and browser-specific vulnerabilities with actionable remediation guidance.
Expert-level data science, analytics, visualization, and statistical modeling
Create database seed scripts with realistic test data for development and testing. Use when setting up development environment or creating demo data.
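A minimal seeding sketch with Faker; the record layout and row count are assumptions. Seeding the generator keeps runs reproducible.

```python
from faker import Faker

fake = Faker()
Faker.seed(42)  # deterministic output across test runs

rows = [
    {"name": fake.name(), "email": fake.email(), "city": fake.city()}
    for _ in range(100)
]
```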
Ensure Alpaca API is used for quality data, not yfinance fallback. Trigger when: (1) crypto volume filter fails unexpectedly, (2) zero-volume bars in data, (3) API key configuration issues.
Production-grade SQL optimization for OLTP systems: EXPLAIN/plan analysis, balanced indexing, schema and query design, migrations, backup/recovery, HA, security, and safe performance tuning across PostgreSQL, MySQL, SQL Server, Oracle, SQLite.
This skill should be used when reading any tabular data file (Excel, CSV, Parquet, ODS). It automatically detects and fixes common data issues including multi-level headers, encoding problems, empty rows/columns, and data type mismatches. Returns a clean DataFrame ready for analysis with zero user intervention.
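A hedged sketch of the auto-repair idea for the CSV case: try UTF-8, fall back to a permissive encoding, then drop fully empty rows and columns. The real skill also covers Excel/Parquet/ODS, multi-level headers, and type fixes.

```python
import pandas as pd

def read_clean_csv(path: str) -> pd.DataFrame:
    try:
        df = pd.read_csv(path, encoding="utf-8")
    except UnicodeDecodeError:
        df = pd.read_csv(path, encoding="latin-1")  # permissive fallback
    return df.dropna(how="all").dropna(axis=1, how="all")
```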
Python data structure conventions for this codebase. Apply when choosing between Pydantic models, dataclasses, and other data containers.
Transform, clean, reshape, and preprocess data using pandas and numpy. Works with ANY LLM provider (GPT, Gemini, Claude, etc.).
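A minimal reshape sketch (wide to long and back); column names are illustrative assumptions.

```python
import pandas as pd

wide = pd.DataFrame({"id": [1, 2], "q1": [10, 20], "q2": [15, 25]})
long = wide.melt(id_vars="id", var_name="quarter", value_name="sales")
back = long.pivot_table(index="id", columns="quarter",
                        values="sales").reset_index()
```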
Generate interactive validation reports with quality scoring, missing data analysis, and type checking. Combines Pandas validation, Plotly visualization, and YAML configuration for comprehensive data quality reporting.
Provides expert design guidance for creating truthful, clear, beautiful data visualizations. Focuses on **DESIGN DECISIONS ONLY**—chart selection, color strategy, visual encoding, and validation. Assumes data is accurate and prepared. Auto-activates when user mentions: data viz, dashboard, chart type, visualization, infographic
Build mathematically correct, visually prominent data visualizations for time-series charts. Use this skill when creating charts with mathematical overlays (trendlines, patterns, indicators), fixing visual artifacts (wavy lines, domain mismatches), or validating chart correctness. Focuses on technical correctness and progressive validation, not aesthetic design.
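On the domain-mismatch point: a trendline stays correct when it is fitted and evaluated on the exact x-domain the chart renders. A minimal numpy sketch with synthetic data:

```python
import numpy as np

x = np.arange(10, dtype=float)
y = 2.0 * x + np.random.default_rng(0).normal(0, 1, 10)

slope, intercept = np.polyfit(x, y, deg=1)
trend = slope * x + intercept  # evaluated on the identical x-domain as y
```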