Database schema design for PostgreSQL/MySQL with normalization, relationships, and constraints. Use for new databases, schema reviews, migrations, or when encountering missing PKs/FKs, wrong data types, premature denormalization, or the EAV anti-pattern.
Skills (SKILL.md) are configuration files that add specific capabilities to AI agents such as Claude Code, Cursor, and Codex.
Design robust, scalable database schemas for SQL and NoSQL databases. Provides normalization guidelines, indexing strategies, migration patterns, constraint design, and performance optimization. Ensures data integrity, query performance, and maintainable data models.
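For a concrete flavor of the constraint-first design these schema skills promote, here is a minimal sketch using stdlib sqlite3 (table and column names are illustrative; PostgreSQL/MySQL DDL differs in minor ways):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK enforcement off by default

# Every table gets a primary key; relationships are declared foreign keys,
# and simple invariants live in CHECK constraints instead of application code.
conn.executescript("""
CREATE TABLE users (
    id    INTEGER PRIMARY KEY,
    email TEXT NOT NULL UNIQUE
);
CREATE TABLE orders (
    id          INTEGER PRIMARY KEY,
    user_id     INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE,
    total_cents INTEGER NOT NULL CHECK (total_cents >= 0)
);
""")
```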
You are an expert in database architecture for this influencer discovery platform. This skill provides comprehensive knowledge about the Postgres schema, Drizzle ORM patterns, and normalized user tables.
Extend the Supabase PostgreSQL database schema following this project's declarative schema patterns, migration workflow, and type generation pipeline. Use when adding tables, columns, enums, RLS policies, triggers, or database functions.
VaultCPA database schema reference with 50+ Prisma models, relationships, and query patterns. Use when working with database models, designing features, or understanding data structure.
Validates database schemas, Kysely types, and migrations. Use when checking schema correctness or migration safety.
Enforces project database schema conventions when creating or modifying Drizzle ORM table definitions, including constraints, indexes, relations, and column patterns. This skill should be used proactively whenever working with schema files to ensure consistent schema design for PostgreSQL with Neon serverless.
Create robust, error-proof Prisma seed scripts with comprehensive error handling and validation.
Database schema validation, data integrity testing, migration testing, transaction isolation, and query performance. Use when testing data persistence, ensuring referential integrity, or validating database migrations.
Language-agnostic database best practices covering migrations, schema design, ORM patterns, query optimization, and testing strategies. Activate when working with database files, migrations, schema changes, SQL, ORM code, database tests, or when user mentions migrations, schema design, SQL optimization, NoSQL, database patterns, or connection pooling.
Database workflows - schema design, migrations, query optimization. Use when designing schemas, reviewing migrations, optimizing queries, preventing N+1 problems, or working with ORMs like Prisma, Drizzle, and TypeORM.
RDBMS access patterns for DuckDB, MySQL (keycloak), PostgreSQL (dw, x3rocs), SQL Server (sage1000, x3), and DBISAM (Exportmaster) using ODBC and native drivers
Use when working with ES/NQ futures market data, before calling any Databento API - follow mandatory four-step workflow (cost check, availability check, fetch, validate); prevents costly API errors and ensures data quality
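A sketch of that four-step discipline, assuming the databento Python client's Historical API (the dataset, symbol, and date values are illustrative):

```python
import databento as db

client = db.Historical()  # reads DATABENTO_API_KEY from the environment
params = dict(dataset="GLBX.MDP3", symbols=["ES.c.0"], stype_in="continuous",
              schema="trades", start="2024-01-02", end="2024-01-03")

cost = client.metadata.get_cost(**params)                      # 1. cost check
span = client.metadata.get_dataset_range(dataset="GLBX.MDP3")  # 2. availability check
data = client.timeseries.get_range(**params)                   # 3. fetch
df = data.to_df()                                              # 4. validate
assert not df.empty                                            # basic sanity check
```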
Databricks development guidance including Python SDK, Databricks Connect, CLI, and REST API. Use when working with databricks-sdk, databricks-connect, or Databricks APIs.
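A minimal sketch of the Python SDK entry point (cluster listing is just an example call; auth is resolved from the environment or a config profile):

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # picks up host/token from env vars or ~/.databrickscfg
for cluster in w.clusters.list():
    print(cluster.cluster_name, cluster.state)
```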
Execute SQL queries against Databricks using the DBSQL MCP server. Use when querying Unity Catalog tables, running SQL analytics, exploring Databricks data, or when user mentions Databricks queries, SQL execution, Unity Catalog, or data warehouse operations. Handles query execution, result formatting, and error handling.
Python dataclass best practices: slots, frozen, validation. Trigger when optimizing dataclasses or creating config classes.
Dataclass patterns including frozen dataclasses, slots, immutability, and value objects. Activated when designing data classes or value types.
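The core pattern both dataclass skills describe, in one sketch (slots=True requires Python 3.10+):

```python
from dataclasses import dataclass

@dataclass(frozen=True, slots=True)
class Money:
    """Immutable value object: frozen forbids mutation, slots drops __dict__."""
    amount_cents: int
    currency: str = "USD"

    def __post_init__(self) -> None:
        if self.amount_cents < 0:
            raise ValueError("amount_cents must be non-negative")

price = Money(1999)
# price.currency = "EUR"  # would raise FrozenInstanceError
```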
Troubleshoot Datadog API authentication issues (401/403 errors), understand API keys vs app keys, and configure correct regions. Use when hitting auth errors or setting up Datadog API access.
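The usual fix, sketched with the datadog-api-client Python package (key values come from your environment; the site must match the region the keys were created in):

```python
import os
from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v1.api.authentication_api import AuthenticationApi

config = Configuration()
config.api_key["apiKeyAuth"] = os.environ["DD_API_KEY"]  # wrong/missing -> 401
config.api_key["appKeyAuth"] = os.environ["DD_APP_KEY"]  # missing on read APIs -> 403
config.server_variables["site"] = "datadoghq.eu"         # e.g. EU keys fail against the US site

with ApiClient(config) as api_client:
    print(AuthenticationApi(api_client).validate())      # quick API-key sanity check
```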
Automatically detects Datadog resource mentions (URLs, service queries, natural language) and intelligently fetches condensed context via datadog-analyzer subagent when needed for the conversation (plugin:schovi@schovi-workflows)
Work with Datadog Incidents API - fetch incidents with enrichment, update incident fields, understand incident data structures. Use when working with incidents or the dd get-incident/update-incident commands.
Use this skill when you need to search Datadog logs, query metrics, tail logs in real-time, trace distributed requests, investigate errors, compare time periods, find log patterns, check service health, or export observability data.
Query logs, metrics, monitors, and dashboards from Datadog. Search logs, check alert status, and investigate incidents.
Kailash DataFlow - zero-config database framework with automatic model-to-node generation. Use when asking about 'database operations', 'DataFlow', 'database models', 'CRUD operations', 'bulk operations', 'database queries', 'database migrations', 'multi-tenancy', 'multi-instance', 'database transactions', 'PostgreSQL', 'MySQL', 'SQLite', 'MongoDB', 'pgvector', 'vector search', 'document database', 'RAG', 'semantic search', 'existing database', 'database performance', 'database deployment', 'database testing', or 'TDD with databases'. DataFlow is NOT an ORM - it generates 11 workflow nodes per SQL model, 8 nodes for MongoDB, and 3 nodes for vector operations.
Use when developing BigQuery Dataform transformations, SQLX files, source declarations, or troubleshooting pipelines. Enforces a TDD workflow (tests first), ALWAYS using ${ref()} rather than hardcoded table paths, comprehensive columns:{} documentation, safety practices (--schema-suffix dev, --dry-run), proper ref() syntax, .sqlx for new declarations, no schema config in operations/tests, and architecture patterns that prevent technical debt under time pressure.
Reviews SQL queries and DataFrame operations for optimization opportunities including predicate pushdown, partition pruning, column projection, and join ordering. Activates when users write DataFusion queries or experience slow query performance.
Datalog bottom-up fixpoint iteration for recursive queries
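For example, transitive closure via naive bottom-up evaluation (semi-naive evaluation would join only against the newest facts each round):

```python
def transitive_closure(edges: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """Fixpoint for: path(X,Y) :- edge(X,Y).  path(X,Z) :- edge(X,Y), path(Y,Z)."""
    path = set(edges)
    while True:
        derived = {(x, z) for (x, y) in edges for (y2, z) in path if y == y2}
        if derived <= path:   # nothing new derived: fixpoint reached
            return path
        path |= derived

assert ("a", "c") in transitive_closure({("a", "b"), ("b", "c")})
```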
Curate and clean training datasets for high-quality machine learning
Create, clean, and optimize datasets for LLM fine-tuning. Covers formats (Alpaca, ShareGPT, ChatML), synthetic data generation, quality assessment, and augmentation. Use when preparing data for training.
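One record in each of the two most common layouts, as a sketch (field values are placeholders; don't mix formats in one file):

```python
import json

alpaca = {"instruction": "Summarize the passage.", "input": "<passage>", "output": "<summary>"}

sharegpt = {"conversations": [            # shown for comparison with the Alpaca layout
    {"from": "human", "value": "Summarize the passage."},
    {"from": "gpt", "value": "<summary>"},
]}

with open("train.jsonl", "w") as f:       # one JSON object per line
    f.write(json.dumps(alpaca) + "\n")
```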
Guide for writing Datasette plugins. This skill should be used when users want to create or develop plugins for Datasette, including information about plugin hooks, the cookiecutter template, database APIs, request/response handling, and plugin configuration.
Writing Datasette plugins using Python and the pluggy plugin system. Use when Claude needs to: (1) Create a new Datasette plugin, (2) Implement plugin hooks like prepare_connection, register_routes, render_cell, etc., (3) Add custom SQL functions, (4) Create custom output renderers, (5) Add authentication or permissions logic, (6) Extend Datasette's UI with menus, actions, or templates, (7) Package a plugin for distribution on PyPI
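The canonical starting point is a hook implementation; this sketch registers a custom SQL function via the prepare_connection hook:

```python
from datasette import hookimpl

@hookimpl
def prepare_connection(conn):
    # Usable from SQL as: select reverse_text(title) from posts
    conn.create_function("reverse_text", 1, lambda s: s[::-1] if s else None)
```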
Create or update Dataverse schema components (entities, attributes, relationships, option sets) via Web API.
Deploy solutions, PCF controls, and web resources to Dataverse using PAC CLI
This skill provides guidance on designing Dataverse table schemas and data models. Use when users ask about "Dataverse table design", "Dataverse schema", "Dataverse relationships", or "Dataverse columns".
date-timezone
Use when editing Planning Hubs, timelines, calendars, or any file with day-name + date combinations (Wed Nov 12), relative dates (tomorrow), or countdowns (18 days until) - validates day-of-week accuracy, relative date calculations, and countdown math with two-source ground truth verification before allowing edits
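A sketch of the two checks that skill performs (the dates reuse the description's own examples; %a assumes an English locale):

```python
from datetime import date

def day_label_ok(label: str, d: date) -> bool:
    # "Wed Nov 12" -> the weekday abbreviation must match the actual date
    return label.split()[0] == d.strftime("%a")

today = date(2025, 11, 12)                       # hypothetical "today"
assert day_label_ok("Wed Nov 12", today)         # day-of-week accuracy
assert (date(2025, 11, 30) - today).days == 18   # "18 days until" countdown math
```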
Handle datetime and timezone conversions correctly across frontend, API, and database. Use this skill when working with datetime-local inputs, scheduling features, due dates, reminders, or any time-sensitive functionality. Covers the critical pitfall of browser datetime-local inputs lacking timezone info.
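The pitfall and its standard resolution, sketched (the zone name is illustrative; the user's zone must be captured separately since the input doesn't carry it):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

raw = "2025-06-01T09:30"   # from <input type="datetime-local">: naive, no timezone

# Interpret in the user's zone, then store UTC so the instant is unambiguous.
local = datetime.fromisoformat(raw).replace(tzinfo=ZoneInfo("America/New_York"))
utc = local.astimezone(timezone.utc)
print(utc.isoformat())     # 2025-06-01T13:30:00+00:00
```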
DAW-specific quirks, known issues, and workarounds for Logic Pro, Ableton Live, Pro Tools, Cubase, Reaper, FL Studio, Bitwig with format-specific requirements (AU/VST3/AAX). Use when troubleshooting DAW compatibility, fixing host-specific bugs, implementing DAW workarounds, passing auval validation, or debugging automation issues.
Detection rules and grep patterns for database performance anti-patterns. Use when scanning codebase for N+1 queries, sequential queries, or connection pool issues.
MongoDB database exploration and querying. Use when you need to understand database structure, view existing data, check collection schemas, count documents, or run queries to investigate the database state. (project)
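A minimal exploration pass with pymongo (the connection string and database name are illustrative):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["app"]

for name in db.list_collection_names():
    count = db[name].count_documents({})
    sample = db[name].find_one()               # peek at one document's shape
    print(name, count, sorted(sample) if sample else [])
```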
Senior specialist in MongoDB, data security, migrations, backup/recovery, and data integrity. Guardian of the Super Cartola Manager data with a focus on safe operations, schema auditing, query optimization, and data lifecycle management. Use for migrations, cleanup, maintenance, snapshots, indexes, validations, and any critical database operation.
Lint PostgreSQL functions against schema, analyze usage, and generate fix reports; use when detecting broken functions, validating schema contracts, or cleaning up unused database functions
How to create and use our alembic database migration tool. Use when making changes to models.py.
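For reference, the shape of a typical Alembic revision module (the identifiers and the column are illustrative):

```python
"""add email to users"""
from alembic import op
import sqlalchemy as sa

revision = "a1b2c3d4e5f6"
down_revision = None       # or the previous revision's id

def upgrade() -> None:
    op.add_column("users", sa.Column("email", sa.String(255), nullable=True))

def downgrade() -> None:
    op.drop_column("users", "email")
```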
Patterns for optimizing database queries and preventing connection pool exhaustion. Use when writing batch operations, debugging slow queries, or reviewing code for performance.
Monitor database performance and prevent regressions.
PostgreSQL database management with Drizzle ORM, versioned migrations, and type-safe queries. This skill should be used when setting up a new database, writing migrations, managing schemas, or troubleshooting database issues in PostgreSQL projects.
Comprehensive guide to optimizing database queries, understanding EXPLAIN plans, indexing strategies, and eliminating N+1 queries
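A self-contained taste of plan reading, using SQLite's EXPLAIN QUERY PLAN (PostgreSQL's EXPLAIN output differs, but the scan-vs-index-search distinction is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

q = "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = 'a@b.co'"
print(conn.execute(q).fetchall())   # SCAN users  (full table scan)

conn.execute("CREATE INDEX idx_users_email ON users(email)")
print(conn.execute(q).fetchall())   # SEARCH users USING INDEX idx_users_email (email=?)
```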
This skill should be used when seeding databases with realistic fake data for development, testing, or staging environments. Supports PostgreSQL, MySQL, SQLite, MongoDB with ORM-based seeding (SQLAlchemy, Django, Prisma) and Faker library for generating realistic test data. Use when the user needs to populate databases with sample data, create test fixtures, or set up development/staging environments with realistic data.
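A minimal Faker-based seed, sketched against stdlib sqlite3 (the schema is illustrative; the same generated rows could feed any ORM's bulk insert):

```python
import sqlite3
from faker import Faker

fake = Faker()
Faker.seed(42)   # reproducible fixtures across runs

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT UNIQUE)")
conn.executemany(
    "INSERT INTO users (name, email) VALUES (?, ?)",
    [(fake.name(), fake.unique.email()) for _ in range(100)],
)
```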
SQLite database management with Prisma ORM, type-safe queries, and Railway deployment with Litestream backup. This skill should be used when creating database schemas, writing migrations, managing SQLite on Railway volumes, or troubleshooting database issues.
DBOS durable execution patterns and CRITICAL constraints for ChainGraph executor. Use when working on workflows, steps, execution, or any DBOS-related code. Contains MUST-FOLLOW constraints about what can be called from workflows vs steps. Triggers: dbos, workflow, step, durable, execution, startWorkflow, writeStream, recv, send, runStep, atomic, checkpoint, WorkflowQueue, queue, cancelWorkflow, Promise.allSettled. (project)
Transform AI agents into experts on dbt project architecture and medallion layer patterns.