Validates database schemas, Kysely types, and migrations. Use when checking schema correctness or migration safety.
Skills (SKILL.md) are configuration files that add specific capabilities to AI agents such as Claude Code, Cursor, and Codex.
Database schema validation, data integrity testing, migration testing, transaction isolation, and query performance. Use when testing data persistence, ensuring referential integrity, or validating database migrations.
Language-agnostic database best practices covering migrations, schema design, ORM patterns, query optimization, and testing strategies. Activate when working with database files, migrations, schema changes, SQL, ORM code, database tests, or when user mentions migrations, schema design, SQL optimization, NoSQL, database patterns, or connection pooling.
Database workflows - schema design, migrations, query optimization. Use when designing schemas, reviewing migrations, optimizing queries, preventing N+1 problems, or working with ORMs like Prisma, Drizzle, and TypeORM.
Use when working with ES/NQ futures market data, before calling any Databento API - follow mandatory four-step workflow (cost check, availability check, fetch, validate); prevents costly API errors and ensures data quality
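A minimal sketch of that four-step workflow, assuming the official databento Python client. The method names follow its documented Historical API, but the dataset, symbols, dates, and cost threshold here are illustrative:

```python
import databento as db

client = db.Historical("YOUR_API_KEY")  # or read from DATABENTO_API_KEY

# Step 1: estimate cost (USD) before fetching anything
cost = client.metadata.get_cost(
    dataset="GLBX.MDP3", symbols=["ES.FUT"], stype_in="parent",
    schema="trades", start="2024-01-02", end="2024-01-03",
)

# Step 2: confirm the dataset actually covers the requested range
available = client.metadata.get_dataset_range(dataset="GLBX.MDP3")

# Step 3: fetch only once cost and availability look sane
if cost < 1.00:
    data = client.timeseries.get_range(
        dataset="GLBX.MDP3", symbols=["ES.FUT"], stype_in="parent",
        schema="trades", start="2024-01-02", end="2024-01-03",
    )
    # Step 4: validate before using (row count, timestamp bounds, etc.)
    df = data.to_df()
    assert not df.empty, "fetch returned no rows"
```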
Databricks development guidance including Python SDK, Databricks Connect, CLI, and REST API. Use when working with databricks-sdk, databricks-connect, or Databricks APIs.
Execute SQL queries against Databricks using the DBSQL MCP server. Use when querying Unity Catalog tables, running SQL analytics, exploring Databricks data, or when user mentions Databricks queries, SQL execution, Unity Catalog, or data warehouse operations. Handles query execution, result formatting, and error handling.
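A minimal sketch covering both of these, assuming the official databricks-sdk for Python; the warehouse ID is a placeholder and the table is one of Databricks' bundled samples:

```python
from databricks.sdk import WorkspaceClient

# Auth resolves from the environment (DATABRICKS_HOST / DATABRICKS_TOKEN,
# a config profile, or workspace-native auth).
w = WorkspaceClient()
print(w.current_user.me().user_name)

# Run a SQL statement against a SQL warehouse (ID is a placeholder)
resp = w.statement_execution.execute_statement(
    warehouse_id="abc123def456",
    statement="SELECT * FROM samples.nyctaxi.trips LIMIT 10",
)
for row in resp.result.data_array or []:  # may be empty if still pending
    print(row)
```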
Python dataclass best practices: slots, frozen, validation. Trigger when optimizing dataclasses or creating config classes.
Dataclass patterns including frozen dataclasses, slots, immutability, and value objects. Activated when designing data classes or value types.
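A small example of the patterns both entries describe: slots for memory, frozen for immutability, and `__post_init__` validation (slots=True requires Python 3.10+; the Money type is illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True, slots=True)
class Money:
    amount: int    # store cents to avoid float rounding
    currency: str

    def __post_init__(self) -> None:
        if self.amount < 0:
            raise ValueError("amount must be non-negative")
        if len(self.currency) != 3:
            raise ValueError("currency must be an ISO 4217 code")

price = Money(1999, "USD")
# price.amount = 0  # AttributeError: frozen instance, cannot assign
```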
Kailash DataFlow - zero-config database framework with automatic model-to-node generation. Use when asking about 'database operations', 'DataFlow', 'database models', 'CRUD operations', 'bulk operations', 'database queries', 'database migrations', 'multi-tenancy', 'multi-instance', 'database transactions', 'PostgreSQL', 'MySQL', 'SQLite', 'MongoDB', 'pgvector', 'vector search', 'document database', 'RAG', 'semantic search', 'existing database', 'database performance', 'database deployment', 'database testing', or 'TDD with databases'. DataFlow is NOT an ORM - it generates 11 workflow nodes per SQL model, 8 nodes for MongoDB, and 3 nodes for vector operations.
Use when developing BigQuery Dataform transformations, SQLX files, source declarations, or troubleshooting pipelines. Enforces a TDD workflow (tests first), ${ref()} instead of hardcoded table paths, comprehensive columns:{} documentation, safety practices (--schema-suffix dev, --dry-run), proper ref() syntax, .sqlx for new declarations, no schema config in operations/tests, and architecture patterns that prevent technical debt under time pressure.
Reviews SQL queries and DataFrame operations for optimization opportunities including predicate pushdown, partition pruning, column projection, and join ordering. Activates when users write DataFusion queries or experience slow query performance.
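A sketch of the two cheapest wins, using the datafusion Python bindings (the file path and column names are placeholders): name only the columns you need so projection pruning applies, and filter early so the predicate can be pushed down into the scan.

```python
from datafusion import SessionContext

ctx = SessionContext()
ctx.register_parquet("events", "events.parquet")  # placeholder path

# SELECT * forces a full-width scan; an explicit column list enables
# projection pruning, and the WHERE clause can be pushed into the
# Parquet reader so non-matching row groups are skipped.
df = ctx.sql(
    "SELECT user_id, event_type "
    "FROM events "
    "WHERE event_date = DATE '2024-01-02'"
)
print(df.to_pandas())
```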
Curate and clean training datasets for high-quality machine learning
Create, clean, and optimize datasets for LLM fine-tuning. Covers formats (Alpaca, ShareGPT, ChatML), synthetic data generation, quality assessment, and augmentation. Use when preparing data for training.
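For orientation, the two most common record shapes, shown as Python dicts (field names follow the public Alpaca and ChatML conventions; ShareGPT is similar to ChatML but uses "conversations"/"from"/"value" keys):

```python
# Alpaca-style record: one instruction/response pair per example
alpaca = {
    "instruction": "Summarize the text.",
    "input": "Databases store structured data...",
    "output": "A short summary.",
}

# ChatML-style record: a multi-turn conversation with explicit roles
chatml = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is an index?"},
        {"role": "assistant", "content": "A structure that speeds up lookups."},
    ]
}
```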
Guide for writing Datasette plugins. This skill should be used when users want to create or develop plugins for Datasette, including information about plugin hooks, the cookiecutter template, database APIs, request/response handling, and plugin configuration.
Writing Datasette plugins using Python and the pluggy plugin system. Use when Claude needs to: (1) Create a new Datasette plugin, (2) Implement plugin hooks like prepare_connection, register_routes, render_cell, etc., (3) Add custom SQL functions, (4) Create custom output renderers, (5) Add authentication or permissions logic, (6) Extend Datasette's UI with menus, actions, or templates, (7) Package a plugin for distribution on PyPI
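A minimal working hook following the documented pluggy pattern; prepare_connection is a real Datasette hook, and the SQL function name here is arbitrary:

```python
from datasette import hookimpl

@hookimpl
def prepare_connection(conn):
    # Registers a custom SQL function: SELECT reverse_string('abc')
    conn.create_function("reverse_string", 1, lambda s: s[::-1] if s else s)
```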
DAW-specific quirks, known issues, and workarounds for Logic Pro, Ableton Live, Pro Tools, Cubase, Reaper, FL Studio, Bitwig with format-specific requirements (AU/VST3/AAX). Use when troubleshooting DAW compatibility, fixing host-specific bugs, implementing DAW workarounds, passing auval validation, or debugging automation issues.
Detection rules and grep patterns for database performance anti-patterns. Use when scanning codebase for N+1 queries, sequential queries, or connection pool issues.
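A rough illustration of the kind of heuristic such rules encode, as a hypothetical scanner that flags query calls indented under a loop (real tooling would use AST analysis rather than regexes):

```python
import re
from pathlib import Path

LOOP = re.compile(r"^(\s*)(for|while)\b")       # loop header and its indent
QUERY = re.compile(r"\.(execute|query)\(")       # naive query-call pattern

def flag_queries_in_loops(path: str) -> list[int]:
    hits: list[int] = []
    loop_indent = None
    for lineno, line in enumerate(Path(path).read_text().splitlines(), 1):
        m = LOOP.match(line)
        if m:
            loop_indent = len(m.group(1))
        elif loop_indent is not None and line.strip():
            if len(line) - len(line.lstrip()) <= loop_indent:
                loop_indent = None           # dedented: left the loop body
            elif QUERY.search(line):
                hits.append(lineno)          # possible N+1 query
    return hits
```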
Senior specialist in MongoDB, data security, migrations, backup/recovery, and data integrity. Guardian of the Super Cartola Manager data, focused on safe operations, schema auditing, query optimization, and data lifecycle management. Use for migrations, cleanup, maintenance, snapshots, indexes, validations, and any critical database operation.
Lint PostgreSQL functions against schema, analyze usage, and generate fix reports; use when detecting broken functions, validating schema contracts, or cleaning up unused database functions
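One building block for that kind of linting, sketched with psycopg (the connection string is a placeholder): enumerate user-defined functions from PostgreSQL's system catalog so each can be checked against the current schema.

```python
import psycopg

with psycopg.connect("postgresql://localhost/mydb") as conn:
    rows = conn.execute(
        """
        SELECT p.proname, pg_get_function_identity_arguments(p.oid)
        FROM pg_proc p
        JOIN pg_namespace n ON n.oid = p.pronamespace
        WHERE n.nspname = 'public'
        ORDER BY p.proname
        """
    ).fetchall()

for name, args in rows:
    print(f"{name}({args})")
```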
Database migration support.
How to create and use our alembic database migration tool. Use when making changes to models.py.
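Since the tool wraps Alembic, the underlying flow looks like this vanilla sketch (the table and column names are illustrative):

```python
# Generated via: alembic revision --autogenerate -m "add users.email"
# Applied via:   alembic upgrade head
from alembic import op
import sqlalchemy as sa

revision = "a1b2c3d4e5f6"
down_revision = None

def upgrade() -> None:
    op.add_column("users", sa.Column("email", sa.String(255), nullable=True))

def downgrade() -> None:
    op.drop_column("users", "email")
```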
Patterns for optimizing database queries and preventing connection pool exhaustion. Use when writing batch operations, debugging slow queries, or reviewing code for performance.
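The core fix these patterns center on, sketched with SQLAlchemy 2.0 (the User model, orders list, and session are assumed names): replace one query per item with a single batched IN query.

```python
from sqlalchemy import select

# Anti-pattern: one round-trip per order (N+1)
# for order in orders:
#     user = session.get(User, order.user_id)

# Fix: one batched query, then an in-memory lookup
user_ids = {order.user_id for order in orders}
users = session.execute(
    select(User).where(User.id.in_(user_ids))
).scalars().all()
users_by_id = {u.id: u for u in users}

for order in orders:
    user = users_by_id[order.user_id]
```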
Monitor database performance and prevent regressions.
PostgreSQL database management with Drizzle ORM, versioned migrations, and type-safe queries. This skill should be used when setting up a new database, writing migrations, managing schemas, or troubleshooting database issues in PostgreSQL projects.
This skill should be used when seeding databases with realistic fake data for development, testing, or staging environments. Supports PostgreSQL, MySQL, SQLite, MongoDB with ORM-based seeding (SQLAlchemy, Django, Prisma) and Faker library for generating realistic test data. Use when the user needs to populate databases with sample data, create test fixtures, or set up development/staging environments with realistic data.
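A minimal seeding sketch with Faker; the row shape and the ORM model mentioned in the comment are placeholders:

```python
from faker import Faker

fake = Faker()
Faker.seed(42)  # deterministic output so seeded environments are reproducible

rows = [
    {"name": fake.name(), "email": fake.unique.email(), "city": fake.city()}
    for _ in range(100)
]

# With an ORM, wrap the rows in model instances and persist in one batch, e.g.
# session.add_all(User(**row) for row in rows); session.commit()
```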
SQLite database management with Prisma ORM, type-safe queries, and Railway deployment with Litestream backup. This skill should be used when creating database schemas, writing migrations, managing SQLite on Railway volumes, or troubleshooting database issues.
Execute SELECT queries on 30+ databases (SQLite, SQL Server, MySQL, PostgreSQL, Oracle, etc.) using DbCli. Returns data in JSON, table, or CSV format. Use when user needs to query databases, read data, or execute SELECT statements.
DBOS durable execution patterns and CRITICAL constraints for ChainGraph executor. Use when working on workflows, steps, execution, or any DBOS-related code. Contains MUST-FOLLOW constraints about what can be called from workflows vs steps. Triggers: dbos, workflow, step, durable, execution, startWorkflow, writeStream, recv, send, runStep, atomic, checkpoint, WorkflowQueue, queue, cancelWorkflow, Promise.allSettled. (project)
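The central constraint, that side effects belong in steps while workflows stay deterministic and only orchestrate, can be sketched with DBOS's Python SDK (the ChainGraph executor above is TypeScript, but the rule is the same; names are illustrative and app configuration/launch is omitted):

```python
from dbos import DBOS

@DBOS.step()
def fetch_price(symbol: str) -> float:
    # Non-deterministic work (I/O, time, randomness) belongs in steps:
    # DBOS checkpoints the result, so a replayed workflow reuses it.
    return 101.0  # placeholder for a real API call

@DBOS.workflow()
def rebalance(symbol: str) -> str:
    # Workflow code must be deterministic: no direct I/O, no datetime.now(),
    # no random() -- only step calls, DBOS messaging, and pure logic.
    price = fetch_price(symbol)
    return "sell" if price > 100.0 else "hold"
```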
Complete guide for dbt data transformation including models, tests, documentation, incremental builds, macros, packages, and production workflows
PROACTIVE skill - STOP and invoke BEFORE writing dbt SQL. Validates models against coding conventions for staging, integration, and warehouse layers. Covers naming, SQL structure, field conventions, testing, and documentation. CRITICAL - When about to write .sql files in models/, invoke this skill first, write second. Supports project-specific convention overrides and sqlfluff integration.
Transform Google BigQuery DDL (views, tables, stored procedures) into production-quality dbt models
Define and enforce validation rules for dbt models during migration.
Guide AI agents through the complete migration lifecycle from Snowflake or legacy database systems
Writes, edits, and creates dbt models following best practices. Use when user needs to create new dbt SQL models, update existing models, or convert raw SQL to dbt format. Handles staging, intermediate, and mart models with proper config blocks, CTEs, and documentation.
Comprehensive guide to dbt (data build tool) patterns, modeling best practices, testing strategies, and production workflows for modern data transformation
Transform AI agents into experts on dbt and Snowflake performance optimization.
Provides expert-level assistance with dbt Semantic Layer, MetricFlow, semantic models, metrics, dimensions, entities, measures, and BI tool integrations. Use this skill when building semantic models, creating metrics (simple, ratio, cumulative, derived, conversion), debugging validation errors, or integrating with BI tools. Extracted from official dbt documentation and optimized for data practitioners.
ALWAYS USE when working with dbt models, SQL transformations, tests, snapshots, or macros. Use IMMEDIATELY when editing dbt_project.yml, profiles.yml, or creating SQL models. MUST be loaded before any transform-layer work. Enforces dbt owns SQL principle - never parse, validate, or transform SQL in Python.
Transform AI agents into experts on dbt testing strategies.
dbt with TD Trino. Covers profiles.yml setup (method:none, user:TD_API_KEY), required override macros (no CREATE VIEW), TD_INTERVAL in models, and TD Workflow deployment.
Expert in D-Bus IPC (Inter-Process Communication) on Linux systems. Specializes in secure service communication, method calls, signal handling, and system integration. HIGH-RISK skill due to system service access and privileged operations.
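A small client-side sketch using dbus-python; org.freedesktop.Notifications and its Notify signature come from the standard desktop notification spec, so this runs on a typical Linux desktop session:

```python
import dbus

bus = dbus.SessionBus()
obj = bus.get_object(
    "org.freedesktop.Notifications", "/org/freedesktop/Notifications"
)
notify = dbus.Interface(obj, dbus_interface="org.freedesktop.Notifications")

# Notify(app_name, replaces_id, app_icon, summary, body,
#        actions, hints, expire_timeout_ms)
notify.Notify("demo", 0, "", "Hello from D-Bus",
              "method call over the session bus", [], {}, 5000)
```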
Disk cloning, benchmarking, and file conversion tool with progress monitoring options.
Generate presentation layer components using routing-controllers with NestJS-style decorators, class-validator for validation, and automatic Swagger documentation.
Generate complete Domain-Driven Design bounded contexts with all 4 architectural layers (Domain, Application, Infrastructure, Presentation) for Bun.js + Express + routing-controllers backend applications.
Analyzes and refactors code using Domain-Driven Design principles. Use when refactoring domain models, identifying DDD anti-patterns, improving domain clarity, or applying tactical/strategic DDD patterns.
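A compact before/after of the most common anti-pattern such reviews catch, the anemic domain model, where invariants live in callers instead of the entity (the Account type is illustrative):

```python
# Before: anemic model -- any caller can put the account into a bad state
# account.balance -= amount

class Account:
    def __init__(self, balance: int) -> None:
        self._balance = balance  # cents

    def withdraw(self, amount: int) -> None:
        # The invariant lives with the data it protects.
        if amount <= 0:
            raise ValueError("amount must be positive")
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    @property
    def balance(self) -> int:
        return self._balance
```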
Create exhaustive tests for DDD bounded contexts following a TDD (Test-Driven Development) approach with strict coverage standards.
This skill should be used to remove AI-generated artifacts and unnecessary code before committing. It scans changed files for redundant comments, AI TODOs, excessive docstrings, and unnecessary markdown files. Git-only, no GitHub required.
Identify and remove unused code, commented blocks, unreachable code, and unused imports. This skill should be used during Phase 1 cleanup tasks to improve codebase maintainability.
Store failed jobs for replay or manual inspection. Track failure patterns, enable manual intervention, and prevent data loss from processing errors.
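The pattern in miniature, as a sketch: storage here is an in-memory list and the handler is a stub; production code would use a durable failed_jobs table or queue.

```python
import traceback
from datetime import datetime, timezone

dead_letters: list[dict] = []  # stand-in for a durable failed_jobs store

def handle(job: dict) -> None:
    """Placeholder for the real processing logic."""
    if "payload" not in job:
        raise ValueError("malformed job")

def process_with_dlq(job: dict) -> None:
    try:
        handle(job)
    except Exception as exc:
        # Keep the full payload plus failure context so the job can be
        # replayed later or inspected manually.
        dead_letters.append({
            "job": job,
            "error": repr(exc),
            "trace": traceback.format_exc(),
            "failed_at": datetime.now(timezone.utc).isoformat(),
            "attempts": job.get("attempts", 0) + 1,
        })

def replay_failed() -> None:
    for entry in list(dead_letters):
        dead_letters.remove(entry)
        process_with_dlq({**entry["job"], "attempts": entry["attempts"]})
```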