dagster-orchestration
Skills (SKILL.md) are configuration files that add specific capabilities to AI agents such as Claude Code, Cursor, and Codex.
Expert on building decentralized applications with Shelby Protocol storage on Aptos. Helps with dApp architecture, wallet integration (Petra), browser SDK usage, React/Vue integration, file uploads, content delivery, and building Shelby-powered applications. Triggers on keywords Shelby dApp, build on Shelby, Shelby application, Petra wallet, browser storage, web3 app, decentralized app Shelby, React Shelby, Vue Shelby.
To register project roots for Dart tooling access, add one or more root paths before using other Dart tools on those projects.
To connect to the Dart Tooling Daemon for editor/runtime data, connect using a user-provided DTD URI before using related Dart tools.
To apply automated Dart fixes, run `dart fix --apply` on the given roots to resolve suggested changes.
To read recent runtime errors from a running Dart or Flutter app, fetch runtime errors after connecting to the Dart Tooling Daemon.
To search pub.dev for relevant Dart packages, query by keywords and return download counts, topics, license, and publisher.
To remove previously registered Dart project roots, revoke tool access by removing those roots.
To search for symbols across Dart workspaces, resolve a symbol name to find definitions or catch spelling errors.
To enable or disable widget selection mode in a running Flutter app, set selection mode after connecting to the Dart Tooling Daemon.
To see function or method signatures at a cursor position, get signature help for the API being called.
Build production-grade interactive dashboards with Plotly Dash - enterprise features, callbacks, and scalable deployment
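A minimal sketch of the callback pattern this skill covers, using Plotly Dash's documented `Dash`/`dcc`/callback API; the component IDs and sample data are illustrative, not from the skill itself.

```python
# Minimal Dash app with one callback: a dropdown filters a line chart.
from dash import Dash, dcc, html, Input, Output
import plotly.express as px
import pandas as pd

df = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar"] * 2,
    "region": ["EU"] * 3 + ["US"] * 3,
    "sales": [10, 12, 9, 14, 15, 18],
})

app = Dash(__name__)
app.layout = html.Div([
    dcc.Dropdown(["EU", "US"], "EU", id="region"),
    dcc.Graph(id="chart"),
])

@app.callback(Output("chart", "figure"), Input("region", "value"))
def update(region):
    # Re-render the chart whenever the dropdown value changes.
    return px.line(df[df["region"] == region], x="month", y="sales")

if __name__ == "__main__":
    app.run(debug=True)
```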
Create a new screen in the Multi-site Dashboard with automatic route registration
Multi-account dashboard. Statistics across all accounts, with drill-down to the individual ad level.
Guide for adding features to the Orient dashboard across backend and frontend.
V1.0 - Expert guidance for the Dashboard Mail module supporting Outlook, Gmail, and custom IMAP providers.
Dashboard symbol_signals uses parallel lists (symbols[], signal_values[], gate_statuses[]), not a dict keyed by symbol. Trigger when: (1) 'list' object has no attribute 'get', (2) .items() on symbol_signals fails.
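A sketch of the failing pattern and the fix. The three field names come from the entry above; the container shape around them is an assumption.

```python
# symbol_signals holds three parallel lists, not a dict keyed by symbol,
# so dict-style access raises the errors this skill triggers on.
symbol_signals = {
    "symbols": ["AAPL", "MSFT"],
    "signal_values": [0.7, -0.2],
    "gate_statuses": ["OPEN", "CLOSED"],
}

# Wrong: symbol_signals["symbols"].get("AAPL")  -> 'list' object has no attribute 'get'
# Right: zip the parallel lists back together.
for symbol, value, gate in zip(
    symbol_signals["symbols"],
    symbol_signals["signal_values"],
    symbol_signals["gate_statuses"],
):
    print(f"{symbol}: signal={value}, gate={gate}")
```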
Best-practice kit for BI dashboard layout, storytelling, and adoption.
Trading dashboard P&L visualization with profit tracker integration, win-rate overlays, R-multiples, and configurable settings
Auto-discover dashboard symbols from loaded RL models. Trigger when: (1) dashboard shows old/wrong symbols, (2) symbols mismatch between live trader and dashboard, (3) adding new models to system, (4) dashboard shows NO_MODEL for all symbols.
Aggregate and merge data from multiple sources including App Store sales, GitHub commits, Skillz events, and more. Use when combining data for reports, dashboards, or analysis.
Master machine learning, data engineering, AI engineering, MLOps, and prompt engineering. Build intelligent systems from data pipelines to production AI applications with LLMs, agents, and modern frameworks.
Master machine learning, data engineering, AI engineering, LLMs, prompt engineering, and MLOps. Build intelligent systems with Python.
SQL for data analysis with exploratory analysis, advanced aggregations, statistical functions, outlier detection, and business insights. 50+ real-world analytics queries.
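A hedged sketch of one such pattern, run through Python's sqlite3 for self-containment: a window-function deviation query for outlier detection. The `sales` table and values are invented.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('EU', 100), ('EU', 120), ('EU', 900),   -- 900 is the outlier
        ('US', 200), ('US', 210), ('US', 190);
""")

# Rank rows by how far they deviate from their region's average.
rows = con.execute("""
    SELECT * FROM (
        SELECT region, amount,
               amount - AVG(amount) OVER (PARTITION BY region) AS deviation
        FROM sales
    ) ORDER BY ABS(deviation) DESC
""").fetchall()
print(rows[0])   # ('EU', 900.0, ...) -- the outlier surfaces first
```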
AI-powered data analysis for Empathy Ledger - themes, quotes, story suggestions, transcript analysis.
data-analyst-export
Use this skill when the user needs to analyze, clean, or prepare datasets. Helps with listing columns, detecting data types (text, categorical, ordinal, numeric), identifying data quality issues, and cleaning values that don't fit expected patterns. Invoke when users mention data cleaning, data quality, column analysis, type detection, or preparing datasets.
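A rough sketch of the column-type detection described above; the heuristic, threshold, and sample frame are assumptions, not the skill's actual rules.

```python
import pandas as pd

def classify_columns(df: pd.DataFrame, max_categories: int = 10) -> dict:
    kinds = {}
    for col in df.columns:
        series = df[col]
        if pd.api.types.is_numeric_dtype(series):
            kinds[col] = "numeric"
        elif series.nunique(dropna=True) <= max_categories:
            kinds[col] = "categorical"   # few distinct values -> treat as category
        else:
            kinds[col] = "text"
    return kinds

df = pd.DataFrame({
    "age": [34, 29, 41, 37],
    "tier": ["gold", "gold", "silver", "gold"],
    "note": ["late payment", "ok", "requested refund", "chargeback"],
})
print(classify_columns(df, max_categories=3))
# {'age': 'numeric', 'tier': 'categorical', 'note': 'text'}
```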
Clean and standardize vehicle insurance data following established business rules.
Data cleaning, preprocessing, and quality assurance techniques
Implement strong encryption using AES, RSA, TLS, and proper key management. Use when securing data at rest, in transit, or implementing end-to-end encryption.
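A minimal at-rest encryption sketch using AES-256-GCM from the `cryptography` package; a real deployment would source the key from a KMS or secret manager, which this sketch only notes in a comment.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # store in a KMS, never in code
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # 96-bit nonce, unique per message
# The associated data (AAD) binds the ciphertext to its context without being encrypted.
ciphertext = aesgcm.encrypt(nonce, b"policy record", b"record-id:42")
plaintext = aesgcm.decrypt(nonce, ciphertext, b"record-id:42")
assert plaintext == b"policy record"
```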
Explore and analyze pilot data sets to uncover patterns, anomalies, and initial insights. Use when performing ad-hoc data investigations, validating data quality, or preparing exploratory visualizations for hypothesis generation.
SWR-based data fetching and caching patterns used throughout the monorepo. Use this skill when implementing API interactions, creating custom data hooks, handling loading/error states, or working with mock data. Covers SWR configuration, custom hook patterns (useUserInfo, useTimesSquarePage), error handling, and mock data setup.
Rooms as pipeline nodes, exits as edges, objects as messages
Data governance strategy, quality validation rules, and data dictionary management for vehicle insurance platform. Use when defining data quality standards, implementing validation rules, managing field mappings, resolving data conflicts, or establishing data governance processes. Covers data cleaning standards, quality metrics, and mapping management.
Procedures and playbooks for responding to data quality incidents, data loss, corruption, and pipeline failures.
Two-layer architecture with a Bronze Layer (LLM extraction log layer) and a Gold Layer (confirmed data layer), providing history tracking of LLM extraction results and protection of human corrections. Offers guidance on implementing extraction processing, using ExtractionLog, and handling the is_manually_verified flag.

This skill provides patterns for working with the data-layer module. Use when creating/editing files in src/data-layer/, src/lib/data/, or adding new data sources.
Service-scoped data orchestration for TMNL. Invoke when implementing search, data streams, kernel systems, or Effect-based DAQ. Covers hybrid dispatch (fibers + workers), Atom-as-State pattern, and progressive streaming.
Metabase REST API automation and troubleshooting: authenticate (API key preferred, session fallback), export/upsert questions (cards) and dashboards, standardize visualization_settings, and run/export results.
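A sketch of the auth flow described above: API key preferred, session fallback, then listing cards. Header names follow Metabase's documented REST API; the host and credentials are placeholder environment variables.

```python
import os
import requests

MB_HOST = os.environ["MB_HOST"]   # e.g. "https://metabase.example.com"

def auth_headers() -> dict:
    api_key = os.environ.get("MB_API_KEY")
    if api_key:                   # preferred: API key
        return {"x-api-key": api_key}
    # Fallback: exchange username/password for a session token.
    resp = requests.post(f"{MB_HOST}/api/session", json={
        "username": os.environ["MB_USER"],
        "password": os.environ["MB_PASSWORD"],
    })
    resp.raise_for_status()
    return {"X-Metabase-Session": resp.json()["id"]}

cards = requests.get(f"{MB_HOST}/api/card", headers=auth_headers())
cards.raise_for_status()
for card in cards.json():
    print(card["id"], card["name"])
```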
Build orchestration pipelines with idempotency.
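One common way to get idempotency, sketched under assumptions (file-based outputs, deterministic keys); this is not necessarily how the skill itself implements it.

```python
import hashlib
import json
from pathlib import Path

OUT_DIR = Path("pipeline_output")

def run_step(name: str, params: dict, transform) -> Path:
    # Derive a deterministic output key from the step's inputs.
    key = hashlib.sha256(json.dumps([name, params], sort_keys=True).encode()).hexdigest()[:16]
    out = OUT_DIR / f"{name}-{key}.json"
    if out.exists():              # re-runs with identical inputs become no-ops
        return out
    result = transform(**params)
    OUT_DIR.mkdir(exist_ok=True)
    tmp = out.with_suffix(".tmp")
    tmp.write_text(json.dumps(result))
    tmp.rename(out)               # write-then-rename keeps the marker atomic
    return out

run_step("normalize", {"scale": 2}, lambda scale: [x * scale for x in (1, 2, 3)])
```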
Monitor and troubleshoot dual-pipeline data collection systems on GCP. This skill should be used when checking pipeline health, viewing logs, diagnosing failures, or monitoring long-running operations for data collection workflows. Supports Cloud Run Jobs (batch pipelines) and VM systemd services (real-time streams).
Follow these patterns when implementing data pipelines, ETL, data ingestion, or data validation in OptAIC. Use for point-in-time (PIT) correctness, Arrow schemas, quality checks, and Prefect orchestration.
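A hedged sketch of Prefect-orchestrated ingestion with a point-in-time cutoff; the task bodies, names, and `as_of` parameter are illustrative, not OptAIC's actual code.

```python
from datetime import datetime, timezone
from prefect import flow, task

@task(retries=2)
def ingest(as_of: datetime) -> list[dict]:
    # Only return records visible at `as_of`, preserving PIT correctness.
    rows = [{"id": 1, "ts": datetime(2024, 1, 1, tzinfo=timezone.utc), "value": 3.5}]
    return [r for r in rows if r["ts"] <= as_of]

@task
def validate(rows: list[dict]) -> list[dict]:
    assert all(r["value"] is not None for r in rows), "null values in batch"
    return rows

@flow
def pipeline(as_of: datetime):
    validate(ingest(as_of))

if __name__ == "__main__":
    pipeline(datetime.now(timezone.utc))
```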
Explains Polibase's data-processing workflows and pipelines. Activates when you need to understand the processing flows, dependencies, and execution order of tasks such as meeting-minutes processing, web scraping, politician data collection, and speaker matching.
Process JSON with jq and YAML/TOML with yq. Filter, transform, query structured data efficiently. Triggers on: parse JSON, extract from YAML, query config, Docker Compose, K8s manifests, GitHub Actions workflows, package.json, filter data.
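For example, driving jq from Python to pull fields out of package.json; assumes `jq` is on PATH and a package.json exists in the working directory.

```python
import subprocess

# List the names of all declared dependencies, one per line (-r = raw output).
result = subprocess.run(
    ["jq", "-r", ".dependencies | keys[]", "package.json"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```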
Set up database replication for high availability and disaster recovery. Use when configuring master-slave replication, multi-master setups, or replication monitoring.
Documentation of available data science libraries (scipy, numpy, pandas, sklearn) and best practices for statistical analysis, regression modeling, and organizing analysis scripts. **CRITICAL:** All analysis scripts MUST be placed in reports/{topic}/scripts/, NOT in root scripts/ directory.
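A small regression example in that style using the listed libraries; per the skill's placement rule it would live under reports/{topic}/scripts/, and the data here is synthetic.

```python
import numpy as np
from scipy import stats

# Synthetic data: y = 2x + 1 plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)

fit = stats.linregress(x, y)
print(f"slope={fit.slope:.2f} intercept={fit.intercept:.2f} r^2={fit.rvalue**2:.3f}")
```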
Data science and analytics expertise for statistical analysis, machine learning pipelines, data governance, business intelligence, predictive modeling, and analytics strategy. Use when building ML models, analyzing data, creating dashboards, or designing data architectures.
Connect your own data source to replace the demo unicorns data. Use when the user wants to use their own database URL or CSV file instead of the sample data. Triggers on requests to connect database, import CSV, change data source, use own data, or switch from demo data.
This skill should be used when reading any tabular data file (Excel, CSV, Parquet, ODS). It automatically detects and fixes common data issues including multi-level headers, encoding problems, empty rows/columns, and data type mismatches. Returns a clean DataFrame ready for analysis with zero user intervention.
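A hand-rolled approximation of that cleanup; the skill itself detects multi-level headers automatically, whereas this sketch takes a manual `skip_rows`, and the 0.9 numeric threshold is an assumption.

```python
import pandas as pd

def read_clean(path: str, skip_rows: int = 0) -> pd.DataFrame:
    df = pd.read_csv(path, skiprows=skip_rows, encoding="utf-8-sig")  # strip BOM if present
    df = df.dropna(axis=0, how="all").dropna(axis=1, how="all")       # drop empty rows/cols
    df.columns = [str(c).strip() for c in df.columns]
    for col in df.columns:
        converted = pd.to_numeric(df[col], errors="coerce")
        if converted.notna().mean() > 0.9:   # mostly numeric -> coerce the column
            df[col] = converted
    return df.reset_index(drop=True)
```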
Implementing comprehensive validation rules across database, application, and pipeline layers to ensure data integrity.
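A toy illustration of the same rule enforced at two of those layers, a database CHECK constraint plus an application-level validator; the schema and rule are invented.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE policy (id INTEGER PRIMARY KEY, premium REAL CHECK (premium > 0))")

def validate_premium(premium: float) -> float:
    if premium <= 0:
        raise ValueError("premium must be positive")   # application layer
    return premium

con.execute("INSERT INTO policy (premium) VALUES (?)", (validate_premium(120.0),))
try:
    con.execute("INSERT INTO policy (premium) VALUES (?)", (-5,))
except sqlite3.IntegrityError as exc:
    print("rejected by CHECK constraint:", exc)        # database layer catches it too
```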