DART model loading - URDF, SDF, MJCF, SKEL parsers and dart::io unified API
Skills (SKILL.md) are configuration files that add specific capabilities to AI agents such as Claude Code, Cursor, and Codex.
DART Python bindings (dartpy) - nanobind, wheel building, API patterns
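A minimal sketch of what loading a model through the Python bindings might look like, assuming dartpy is installed; the URDF URI below is one of DART's bundled samples, and the parser entry points differ between the dart.utils layer and the newer unified dart::io API:

```python
import dartpy as dart

# Parse a URDF into a Skeleton (SDF/MJCF/SKEL have analogous parsers)
loader = dart.utils.DartLoader()
skeleton = loader.parseSkeleton("dart://sample/urdf/KR5/KR5 sixx R650.urdf")

# Add it to a simulation world
world = dart.simulation.World()
world.addSkeleton(skeleton)
print(skeleton.getNumBodyNodes(), "body nodes loaded")
```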
DART testing patterns - unit tests, integration tests, CI validation
darwin-godwin-machine
Generate professional, client-ready dashboards from data files with clean design (no CURV branding). Detects patterns, creates visualizations, provides insights. Perfect for client presentations and professional reporting.
Automatically convert uploaded data (CSV, Excel, JSON) into complete interactive dashboards with zero user input required. Detects patterns in PPC reports, sales data, analytics exports, and business metrics - then generates insights, recommendations, and visualizations instantly. Works seamlessly with CURV design system for on-brand outputs with tabs, funnels, filters, and multi-view layouts.
Create HTML dashboards with KPI metric cards, bar/pie/line charts, progress indicators, and data visualizations. Use when users request dashboards, metrics displays, KPI visualizations, data charts, or monitoring interfaces.
GPU-accelerated frame extraction for Movie_F dashcam videos. This skill should be used when the user needs to extract frames from Movie_F category dashcam videos placed in the Desktop CARDV folder. Extracts 3 frames per video (BEGIN, MIDDLE, END) using NVIDIA CUDA acceleration with automatic gap analysis, parallel processing, and strict error handling. Designed exclusively for the Movie_F category.
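The skill's gap analysis and parallel dispatch are its own implementation details, but the core BEGIN/MIDDLE/END extraction could plausibly be done with ffmpeg's CUDA decoder, as in this sketch (ffmpeg/ffprobe on PATH and an NVIDIA GPU are assumptions; paths are placeholders):

```python
import json
import subprocess
from pathlib import Path

def extract_frames(video: Path, out_dir: Path) -> None:
    """Grab BEGIN/MIDDLE/END frames using CUDA-accelerated decoding."""
    # Ask ffprobe for the clip duration so we can compute seek points
    probe = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "json", str(video)],
        capture_output=True, text=True, check=True)
    duration = float(json.loads(probe.stdout)["format"]["duration"])

    marks = [("BEGIN", 0.0), ("MIDDLE", duration / 2), ("END", max(duration - 1.0, 0.0))]
    for label, t in marks:
        # -hwaccel cuda before -i enables NVIDIA-accelerated decoding
        subprocess.run(
            ["ffmpeg", "-y", "-hwaccel", "cuda", "-ss", f"{t:.2f}", "-i", str(video),
             "-frames:v", "1", str(out_dir / f"{video.stem}_{label}.jpg")],
            check=True)
```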
Dynamic Application Security Testing execution and management. Configure and execute OWASP ZAP and Nuclei scans, run authenticated scanning, manage scan policies and scope, correlate findings with SAST results, and generate comprehensive vulnerability reports.
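For the ZAP half of such a skill, the official Python client gives a feel for the scan lifecycle. A rough sketch, assuming a ZAP daemon listening on 127.0.0.1:8080; the target URL and API key are placeholders:

```python
import time
from zapv2 import ZAPv2  # pip install zaproxy

zap = ZAPv2(apikey="changeme",
            proxies={"http": "http://127.0.0.1:8080", "https": "http://127.0.0.1:8080"})
target = "https://staging.example.com"  # placeholder scope

spider_id = zap.spider.scan(target)          # crawl first so the scanner has URLs
while int(zap.spider.status(spider_id)) < 100:
    time.sleep(2)

ascan_id = zap.ascan.scan(target)            # then run the active scan
while int(zap.ascan.status(ascan_id)) < 100:
    time.sleep(5)

for alert in zap.core.alerts(baseurl=target):
    print(alert["risk"], alert["alert"], alert["url"])
```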
Database access layer guidelines for Quantum Skincare's Prisma-based data-access library. Covers Prisma schema design, DAO patterns, UUID primary keys, PostgreSQL role-based access control (RBAC), migration workflows, type-safe queries, transaction handling, soft deletes, and testing strategies. Use when working with Prisma schema, DAOs, database migrations, or data access patterns in libs/data-access.
Implement data access for the .NET 8 WPF widget host app using EF Core or Dapper. Use when creating repositories, unit of work, migrations, DbContext configuration, and query patterns while keeping clean architecture boundaries.
Comprehensive data science, machine learning, and AI guide covering Python, deep learning, NLP, LLMs, prompt engineering, and MLOps. Use when building AI models, data pipelines, or machine learning systems.
This skill provides tools and templates for analyzing datasets and generating insights.
Data analysis expert. Supports pandas, numpy, visualization, and statistical analysis.
Expert in business intelligence, SQL, data visualization, and translating data into actionable business insights.
data-analyst-sql-optimization
This skill should be used when analyzing business sales and revenue data from CSV files to identify weak areas, generate statistical insights, and provide strategic improvement recommendations. Use when the user requests a business performance report, asks to analyze sales data, wants to identify areas of weakness, or needs recommendations on business improvement strategies.
Analytics engineering for reliable metrics and BI readiness. Build transformation layers, dimensional models, semantic metrics, data quality tests, and documentation. Use when you need dbt or SQL transformation strategy, metrics definition, or analytics data modeling.
Advanced data analysis, pattern detection, and insight generation from structured and unstructured datasets
Detect and mask PII (names, emails, phones, SSN, addresses) in text and CSV files. Multiple masking strategies with reversible tokenization option.
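A simplified sketch of the reversible-tokenization idea, with deliberately naive regexes; real name detection would need NER, and the patterns and token format here are illustrative only:

```python
import re
import uuid

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def mask(text: str) -> tuple[str, dict[str, str]]:
    """Replace PII with unique tokens; the returned vault maps tokens back to originals."""
    vault: dict[str, str] = {}

    def make_repl(kind: str):
        def repl(match: re.Match) -> str:
            token = f"[{kind}_{uuid.uuid4().hex[:8]}]"
            vault[token] = match.group(0)
            return token
        return repl

    for kind, pattern in PATTERNS.items():
        text = pattern.sub(make_repl(kind), text)
    return text, vault

def unmask(text: str, vault: dict[str, str]) -> str:
    """Reverse the masking using the vault."""
    for token, original in vault.items():
        text = text.replace(token, original)
    return text
```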
Single source of truth patterns, facts.ts structure, type safety, and data helper functions. Use when working with project data or adding new facts.
Inventory available datasets, instrumentation gaps, and data quality considerations for the initiative.
Generates comprehensive data cleaning and preprocessing pipelines using pandas, polars, or PySpark with best practices for handling messy data.
Build robust processes for data cleaning, missing-value imputation, outlier handling, and data transformation to support data preprocessing, data quality, and data pipeline automation.
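As a sketch of what the last two entries generate, here is a minimal pandas cleaning pass; the imputation and outlier policies (median/mode, 1st-99th percentile clipping) are common defaults, not anything these skills prescribe:

```python
import pandas as pd

def clean(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates().copy()
    numeric = df.select_dtypes(include="number").columns

    # Impute: median for numeric columns, mode for everything else
    for col in numeric:
        df[col] = df[col].fillna(df[col].median())
    for col in df.columns.difference(numeric):
        if not df[col].mode().empty:
            df[col] = df[col].fillna(df[col].mode().iloc[0])

    # Clip numeric outliers to the 1st-99th percentile band
    for col in numeric:
        lo, hi = df[col].quantile([0.01, 0.99])
        df[col] = df[col].clip(lo, hi)
    return df
```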
Create, validate, test, and manage data contracts using the Open Data Contract Specification (ODCS) and the datacontract CLI. Use when working with data contracts, ODCS specifications, data quality rules, or when the user mentions datacontract CLI or data contract workflows.
Data contracts for defining schemas, quality expectations, and SLAs between data producers and consumers.
Use when user needs scalable data pipeline development, ETL/ELT implementation, or data infrastructure design.
Use this skill when designing or reviewing data pipelines, ETL processes, data warehouses, streaming systems, or any system where data movement, transformation, and quality are primary concerns. Applies data engineering thinking to specifications, designs, and implementations.
Export analysis results, data tables, and formatted spreadsheets to Excel files using openpyxl. Works with ANY LLM provider (GPT, Gemini, Claude, etc.).
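A minimal openpyxl sketch of the kind of export this describes; the data and filename are invented:

```python
from openpyxl import Workbook
from openpyxl.styles import Font

rows = [("Region", "Revenue"), ("North", 125000), ("South", 98000)]  # sample data

wb = Workbook()
ws = wb.active
ws.title = "Summary"
for row in rows:
    ws.append(row)
for cell in ws[1]:                       # bold the header row
    cell.font = Font(bold=True)
ws.column_dimensions["A"].width = 18     # widen the Region column
wb.save("report.xlsx")
```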
Use when exporting data for ad platforms (Google Ads, Meta) or working with project datasets. Documents exact CSV formats for Enhanced Conversions, Customer Match, and project data schemas.
Create professional PDF reports with text, tables, and embedded images using reportlab. Works with ANY LLM provider (GPT, Gemini, Claude, etc.).
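And the reportlab counterpart, again with invented content; platypus flowables handle text, tables, and (via reportlab.platypus.Image) embedded images:

```python
from reportlab.lib.pagesizes import A4
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.lib.units import cm
from reportlab.platypus import Paragraph, SimpleDocTemplate, Spacer, Table

styles = getSampleStyleSheet()
doc = SimpleDocTemplate("report.pdf", pagesize=A4)
story = [
    Paragraph("Quarterly Report", styles["Title"]),
    Spacer(1, 0.5 * cm),
    Table([["Region", "Revenue"], ["North", "125,000"], ["South", "98,000"]]),
]
doc.build(story)
```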
Use when extracting structured data from medical research PDFs, parsing study characteristics, patient demographics, outcomes, and results. Invoke for systematic review data collection from papers.
Build new data ingestion providers following the FF Analytics registry pattern. This skill should be used when adding new data sources (APIs, files, databases) to the data pipeline. Guides through creating provider packages, registry mappings, loader functions, storage integration, primary key tests, and sampling tools following established patterns.
Provides architectural guidance for data lake design including partitioning strategies, storage layout, schema design, and lakehouse patterns. Activates when users discuss data lake architecture, partitioning, or large-scale data organization.
Data Lake architecture and management including medallion architecture (bronze/silver/gold zones), data catalog with AWS Glue, partitioning strategies, schema evolution, data quality, governance, cost optimization, S3 lifecycle policies, data retention, compliance, query optimization with Athena, data formats (Parquet, ORC, Avro), incremental processing, CDC patterns, and production best practices for scalable data lakes.
Mapping the flow of data from source to destination for transparency, impact analysis, and troubleshooting.
Data mapping patterns for transforming API responses to internal types
Plans and executes data migrations between systems, databases, and formats
You are a database migration expert specializing in safe schema changes and data migrations. Your goal is to ensure migrations are safe, reversible, and won't corrupt production data.
Plan and execute database migrations, data transformations, and system migrations safely with rollback strategies and data integrity validation. Use when migrating databases, transforming data schemas, moving between database systems, implementing versioned migrations, handling data transformations, ensuring data integrity, or planning zero-downtime migrations.
Create safe, reversible database migration scripts with rollback capabilities, data validation, and zero-downtime deployments. Use when changing database schemas, migrating data between systems, or performing large-scale data transformations.
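None of the three migration entries above name a specific tool, but a reversible Alembic script is one concrete shape their advice can take; the revision ids, table, and column below are made up:

```python
"""Add email_verified to users (reversible migration sketch)."""
from alembic import op
import sqlalchemy as sa

revision = "a1b2c3d4e5f6"        # hypothetical revision ids
down_revision = "f6e5d4c3b2a1"

def upgrade() -> None:
    # server_default keeps existing rows valid, which helps zero-downtime deploys
    op.add_column(
        "users",
        sa.Column("email_verified", sa.Boolean(),
                  nullable=False, server_default=sa.false()))

def downgrade() -> None:
    # Rollback path: drop the column again
    op.drop_column("users", "email_verified")
```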
A workflow that improves Jupyter Notebook code quality, documentation, and execution stability through a standardized procedure.
Coordinates data pipeline tasks (ETL, analytics, feature engineering). Use when implementing data ingestion, transformations, quality checks, or analytics. Applies data-quality-standard.md (95% minimum).
Implements data persistence systems including DataStore patterns, session locking, data migration, error handling, and backup systems. Use when saving player progress, inventory, settings, or any persistent data.
Expert data engineer for ETL/ELT pipelines, streaming, data warehousing. Activate on: data pipeline, ETL, ELT, data warehouse, Spark, Kafka, Airflow, dbt, data modeling, star schema, streaming data, batch processing, data quality. NOT for: API design (use api-architect), ML training (use ML skills), dashboards (use design skills).
Develop and manage data ingestion, processing, and transformation pipelines for pilot projects. Use when automating ETL workflows, integrating new data sources, or building canonical datasets to support downstream analytics.