To apply automated Dart fixes and resolve suggested changes, run `dart fix --apply` on the given roots.
Skills (SKILL.md) are configuration files that add specific capabilities to AI agents such as Claude Code, Cursor, and Codex.
dart-flutter-mcp
To run Dart or Flutter tests with the agent-centric test runner, execute tests using this tool instead of shell `dart test` or `flutter test`.
Dart testing patterns - unit tests, integration tests, CI validation
darwin-godwin-machine
Build production-grade interactive dashboards with Plotly Dash - enterprise features, callbacks, and scalable deployment
Automatically convert uploaded data (CSV, Excel, JSON) into complete interactive dashboards with zero user input required. Detects patterns in PPC reports, sales data, analytics exports, and business metrics - then generates insights, recommendations, and visualizations instantly. Works seamlessly with CURV design system for on-brand outputs with tabs, funnels, filters, and multi-view layouts.
Dashboard symbol_signals uses parallel lists (symbols[], signal_values[], gate_statuses[]) rather than a dict keyed by symbol. Trigger when: (1) "'list' object has no attribute 'get'" errors appear, (2) calling .items() on symbol_signals fails. See the sketch below.
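The failure mode can be shown with a minimal Python sketch. The shape of symbol_signals is reconstructed from the description above (three parallel lists); the symbols and values are hypothetical.

```python
# Hypothetical shape, reconstructed from the description above: three
# parallel lists rather than one dict keyed by symbol.
symbols = ["AAPL", "MSFT", "NVDA"]          # symbols[]
signal_values = [0.72, -0.15, 0.41]         # signal_values[]
gate_statuses = ["OPEN", "CLOSED", "OPEN"]  # gate_statuses[]

# Failing pattern: code written for a dict keyed by symbol raises
#   AttributeError: 'list' object has no attribute 'get'
#   AttributeError: 'list' object has no attribute 'items'
# because the structure is list-shaped, not dict-shaped.

# Working pattern: walk the three lists in lockstep with zip().
for symbol, value, gate in zip(symbols, signal_values, gate_statuses):
    print(f"{symbol}: signal={value:+.2f} gate={gate}")
```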
Trading dashboard P&L visualization with profit tracker integration, win-rate overlays, R-multiples, and configurable settings
Auto-discover dashboard symbols from loaded RL models. Trigger when: (1) dashboard shows old/wrong symbols, (2) symbols mismatch between live trader and dashboard, (3) adding new models to system, (4) dashboard shows NO_MODEL for all symbols.
GPU-accelerated frame extraction for Movie_F dashcam videos. This skill should be used when the user needs to extract frames from Movie_F category dashcam videos placed in the Desktop CARDV folder; it is designed for the Movie_F category only. Extracts 3 frames per video (BEGIN, MIDDLE, END) using NVIDIA CUDA acceleration with automatic gap analysis, parallel processing, and strict error handling.
Dynamic Application Security Testing execution and management. Configure and execute OWASP ZAP and Nuclei scans, run authenticated scanning, manage scan policies and scope, correlate findings with SAST results, and generate comprehensive vulnerability reports.
Database access layer guidelines for Quantum Skincare's Prisma-based data-access library. Covers Prisma schema design, DAO patterns, UUID primary keys, PostgreSQL role-based access control (RBAC), migration workflows, type-safe queries, transaction handling, soft deletes, and testing strategies. Use when working with Prisma schema, DAOs, database migrations, or data access patterns in libs/data-access.
Master machine learning, data engineering, AI engineering, LLMs, prompt engineering, and MLOps. Build intelligent systems with Python.
Executive-grade data analysis with pandas/polars and McKinsey-quality visualizations. Use when analyzing data, building dashboards, creating investor presentations, or calculating SaaS metrics.
Expert in business intelligence, SQL, data visualization, and translating data into actionable business insights.
Use when writing SQL queries, optimizing database performance, or analyzing data
Data architecture - models, relationships. Use when designing data structures.
Inventory available datasets, instrumentation gaps, and data quality considerations for the initiative.
Use this skill when the user needs to analyze, clean, or prepare datasets. Helps with listing columns, detecting data types (text, categorical, ordinal, numeric), identifying data quality issues, and cleaning values that don't fit expected patterns. Invoke when users mention data cleaning, data quality, column analysis, type detection, or preparing datasets.
Update data-cleaning-implementation skill with critical pandas patterns and testing learnings from vp-e62a
Generates comprehensive data cleaning and preprocessing pipelines using pandas, polars, or PySpark with best practices for handling messy data.
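As a rough illustration of what a generated pipeline might look like in pandas, here is a minimal sketch; the column names and cleaning rules are invented for this example, not taken from the skill.

```python
import pandas as pd

def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Minimal cleaning step; columns and rules are hypothetical."""
    out = df.copy()
    # Normalize column names to snake_case.
    out.columns = out.columns.str.strip().str.lower().str.replace(" ", "_")
    # Drop exact duplicate rows.
    out = out.drop_duplicates()
    # Coerce a numeric column; unparseable values become NaN.
    out["amount"] = pd.to_numeric(out["amount"], errors="coerce")
    # Make missing categories explicit rather than leaving nulls.
    out["region"] = out["region"].fillna("unknown")
    return out

raw = pd.DataFrame({"Amount ": ["10", "x", "10"], "Region": ["EU", None, "EU"]})
print(clean(raw))
```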
Operating model for defining, enforcing, and auditing BI data contracts.
Create, validate, test, and manage data contracts using the Open Data Contract Specification (ODCS) and the datacontract CLI. Use when working with data contracts, ODCS specifications, data quality rules, or when the user mentions datacontract CLI or data contract workflows.
Data contracts for defining schemas, quality expectations, and SLAs between data producers and consumers
Generate high-quality synthetic datasets using statistical samplers and Claude's native LLM capabilities. Use when users ask to create synthetic data, generate datasets, create fake/mock data, generate test data, training data, or any data generation task. Supports CSV, JSON, JSONL, Parquet output. Adapted from NVIDIA NeMo DataDesigner (Apache 2.0).
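The statistical-sampler idea can be sketched in a few lines of Python; the schema, field names, and distributions below are invented for illustration and are not the skill's actual sampler API.

```python
import json
import random

random.seed(7)  # reproducible synthetic output

def sample_record() -> dict:
    """One hypothetical synthetic record drawn from simple samplers."""
    return {
        "user_id": random.randint(1, 10_000),
        "plan": random.choices(["free", "pro", "enterprise"], weights=[70, 25, 5])[0],
        "monthly_spend": round(random.lognormvariate(3.0, 0.8), 2),
    }

# Emit JSONL, one synthetic record per line (CSV/Parquet work the same way).
with open("synthetic.jsonl", "w") as f:
    for _ in range(100):
        f.write(json.dumps(sample_record()) + "\n")
```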
Use this skill when designing or reviewing data pipelines, ETL processes, data warehouses, streaming systems, or any system where data movement, transformation, and quality are primary concerns. Applies data engineering thinking to specifications, designs, and implementations.
Use when exporting data for ad platforms (Google Ads, Meta) or working with project datasets. Documents exact CSV formats for Enhanced Conversions, Customer Match, and project data schemas.
SWR-based data fetching and caching patterns used throughout the monorepo. Use this skill when implementing API interactions, creating custom data hooks, handling loading/error states, or working with mock data. Covers SWR configuration, custom hook patterns (useUserInfo, useTimesSquarePage), error handling, and mock data setup.
Procedures and playbooks for responding to data quality incidents, data loss, corruption, and pipeline failures.
Build new data ingestion providers following the FF Analytics registry pattern. This skill should be used when adding new data sources (APIs, files, databases) to the data pipeline. Guides through creating provider packages, registry mappings, loader functions, storage integration, primary key tests, and sampling tools following established patterns.
Generate actionable insights from transaction data. Use when implementing insight generators, anomaly detection, or data analysis features for financial/tax tracking applications.
Detect and fix data integrity issues automatically.
Provides architectural guidance for data lake design including partitioning strategies, storage layout, schema design, and lakehouse patterns. Activates when users discuss data lake architecture, partitioning, or large-scale data organization.
Design and manage data storage effectively. Use when working with databases, schemas, or data migrations. Covers schema design, migrations, and data integrity.
Service-scoped data orchestration for TMNL. Invoke when implementing search, data streams, kernel systems, or Effect-based DAQ. Covers hybrid dispatch (fibers + workers), Atom-as-State pattern, and progressive streaming.
Data mapping patterns for transforming API responses to internal types
Expert-level data mesh architecture, domain-oriented ownership, data products, federated governance, and self-serve platforms
You are a database migration expert specializing in safe schema changes and data migrations. Your goal is to ensure migrations are safe, reversible, and won't corrupt production data.
Create safe, reversible database migration scripts with rollback capabilities, data validation, and zero-downtime deployments. Use when changing database schemas, migrating data between systems, or performing large-scale data transformations.
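The reversible-migration idea can be sketched as paired up/down steps plus a validation check before declaring success. The table and SQL below are hypothetical; SQLite is used only to keep the sketch self-contained (its DROP COLUMN needs SQLite 3.35+).

```python
import sqlite3

# Hypothetical reversible migration: every "up" has a matching "down".
MIGRATION = {
    "up": "ALTER TABLE users ADD COLUMN email TEXT",
    "down": "ALTER TABLE users DROP COLUMN email",
}

def apply(conn: sqlite3.Connection, direction: str) -> None:
    conn.execute(MIGRATION[direction])
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")

apply(conn, "up")
# Validate the schema change; if the check fails, run the "down" step.
cols = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
assert "email" in cols, "migration did not apply; rolling back"
apply(conn, "down")  # demonstrate the rollback path
```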
Design data models with Pydantic schemas and comprehensive validation rules.
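For instance, a minimal sketch assuming Pydantic v2; the model, fields, and rules are illustrative rather than part of the skill.

```python
from pydantic import BaseModel, Field, field_validator

class Customer(BaseModel):
    """Illustrative model; fields and rules are hypothetical."""
    name: str = Field(min_length=1)
    email: str
    age: int = Field(ge=0, le=150)

    @field_validator("email")
    @classmethod
    def must_contain_at(cls, v: str) -> str:
        if "@" not in v:
            raise ValueError("not an email address")
        return v

# Validation runs at construction time and raises on bad input.
Customer(name="Ada", email="ada@example.com", age=36)
```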
Database schema design and data modeling patterns, including normalization principles (1NF-5NF), denormalization trade-offs, entity relationship design, indexing strategies, schema evolution, and domain-driven design. Use when designing new database schemas or tables/collections, refactoring or redesigning data models, weighing normalization vs denormalization, planning schema migrations, evaluating relationship patterns, modeling complex domain entities, or making data integrity decisions.
Apply data-oriented architecture patterns when designing systems with polymorphic entities. This skill should be used PROACTIVELY when encountering switch statements or if/else chains that dispatch based on entity type, when designing new entity systems, or when refactoring toward extensibility. Provides registry-based dispatch, capability composition, and infrastructure-first development patterns. Complements solid-architecture with concrete implementation strategies.
Expert data engineer for ETL/ELT pipelines, streaming, data warehousing. Activate on: data pipeline, ETL, ELT, data warehouse, Spark, Kafka, Airflow, dbt, data modeling, star schema, streaming data, batch processing, data quality. NOT for: API design (use api-architect), ML training (use ML skills), dashboards (use design skills).
Design and troubleshoot robust data pipelines with comprehensive quality validation, error handling, and monitoring capabilities for bioinformatics and data processing workflows
Monitor and troubleshoot dual-pipeline data collection systems on GCP. This skill should be used when checking pipeline health, viewing logs, diagnosing failures, or monitoring long-running operations for data collection workflows. Supports Cloud Run Jobs (batch pipelines) and VM systemd services (real-time streams).
Process data files through transformation pipelines with validation, cleaning, and export. Use for CSV/Excel/JSON data processing, encoding handling, batch operations, and data transformation workflows.
Data product design patterns with contracts, SLAs, and governance for building self-serve data platforms using Data Mesh principles.