To register project roots for Dart tooling access, add one or more root paths before using other Dart tools on those projects.
Skills (SKILL.md) are configuration files that add specific capabilities to AI agents (Claude Code, Cursor, Codex, and others).
To connect to the Dart Tooling Daemon for editor/runtime data, connect using a user-provided DTD URI before using related Dart tools.
To apply automated Dart fixes, run `dart fix --apply` on the given roots to resolve suggested changes.
To get the current cursor location from the connected editor, retrieve the active location after connecting to the Dart Tooling Daemon.
To read recent runtime errors from a running Dart or Flutter app, fetch runtime errors after connecting to the Dart Tooling Daemon.
To find the currently selected Flutter widget in the running app, get the selected widget after connecting to the Dart Tooling Daemon.
To search pub.dev for relevant Dart packages, query by keywords and return download counts, topics, license, and publisher.
To remove previously registered Dart project roots, revoke tool access by removing those roots.
To search for symbols across Dart workspaces, resolve a symbol name to find definitions or catch spelling errors.
To enable or disable widget selection mode in a running Flutter app, set selection mode after connecting to the Dart Tooling Daemon.
To see function or method signatures at a cursor position, get signature help for the API being called.
SWR-based data fetching and caching patterns used throughout the monorepo. Use this skill when implementing API interactions, creating custom data hooks, handling loading/error states, or working with mock data. Covers SWR configuration, custom hook patterns (useUserInfo, useTimesSquarePage), error handling, and mock data setup.
Rooms as pipeline nodes, exits as edges, objects as messages
Procedures and playbooks for responding to data quality incidents, data loss, corruption, and pipeline failures.
Two-layer architecture design with a Bronze Layer (LLM extraction log layer) and a Gold Layer (confirmed data layer). Preserves the history of LLM extraction results and protects human corrections. Provides guidance on implementing extraction processing, using ExtractionLog, and handling the is_manually_verified flag.
This skill provides patterns for working with the data-layer module. Use when creating/editing files in src/data-layer/, src/lib/data/, or adding new data sources.
Service-scoped data orchestration for TMNL. Invoke when implementing search, data streams, kernel systems, or Effect-based DAQ. Covers hybrid dispatch (fibers + workers), Atom-as-State pattern, and progressive streaming.
Build orchestration pipelines with idempotency.
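The idempotency mentioned above is usually achieved by deriving a deterministic run key from a task's name and parameters and recording completed keys, so re-running a pipeline step is a no-op. A minimal sketch, assuming an in-memory ledger stands in for whatever run-state store (database table, object-store marker) a real orchestrator would use; the names `run_key` and `run_idempotent` are illustrative:

```python
import hashlib
import json

# Hypothetical in-memory ledger; a real pipeline would persist this
# (e.g. a database table keyed by run hash).
_completed_runs: set = set()

def run_key(task_name: str, params: dict) -> str:
    """Derive a deterministic key from the task name and its parameters."""
    payload = json.dumps({"task": task_name, "params": params}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def run_idempotent(task_name: str, params: dict, action) -> bool:
    """Execute `action` at most once per (task, params) pair.

    Returns True if the action ran, False if it was skipped as a repeat.
    """
    key = run_key(task_name, params)
    if key in _completed_runs:
        return False
    action()
    _completed_runs.add(key)  # mark complete only after the action succeeds
    return True
```

Marking the run complete only after `action()` returns means a crashed step is retried on the next run rather than silently skipped.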
Monitor and troubleshoot dual-pipeline data collection systems on GCP. This skill should be used when checking pipeline health, viewing logs, diagnosing failures, or monitoring long-running operations for data collection workflows. Supports Cloud Run Jobs (batch pipelines) and VM systemd services (real-time streams).
Follow these patterns when implementing data pipelines, ETL, data ingestion, or data validation in OptAIC. Use for point-in-time (PIT) correctness, Arrow schemas, quality checks, and Prefect orchestration.
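Point-in-time (PIT) correctness, as named in the entry above, means a feature lookup may only see records that were already known at the query's cutoff time, so no future information leaks into training data. A minimal pure-Python sketch of the idea (the `key`/`value`/`known_at` record shape is illustrative, not the OptAIC schema):

```python
from datetime import datetime

def as_of(records: list, cutoff: datetime) -> dict:
    """Point-in-time view: for each key, keep the latest record whose
    `known_at` timestamp is on or before the cutoff, so downstream
    features never see information from the future."""
    visible = [r for r in records if r["known_at"] <= cutoff]
    latest = {}
    # Sort ascending by known_at so later records overwrite earlier ones.
    for r in sorted(visible, key=lambda r: r["known_at"]):
        latest[r["key"]] = r["value"]
    return latest
```

A real implementation would push the same filter into an Arrow/SQL `ASOF`-style join rather than scanning in Python, but the invariant is the same: filter on knowledge time before picking the latest value.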
Set up database replication for high availability and disaster recovery. Use when configuring master-slave replication, multi-master setups, or replication monitoring.
Implementing comprehensive validation rules across database, application, and pipeline layers to ensure data integrity.
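A common pattern for the validation layer described above is to collect every violation in one pass instead of failing on the first, so a pipeline run can report all problems at once. A minimal sketch; the field names (`customer_id`, `amount`, `currency`) and the allowed currency set are hypothetical:

```python
def validate_order(order: dict) -> list:
    """Return a list of human-readable violations; an empty list means valid.
    Field names and rules here are illustrative placeholders."""
    errors = []
    if not order.get("customer_id"):
        errors.append("customer_id is required")
    if not isinstance(order.get("amount"), (int, float)) or order["amount"] <= 0:
        errors.append("amount must be a positive number")
    if order.get("currency") not in {"USD", "EUR", "JPY"}:
        errors.append("currency must be one of USD, EUR, JPY")
    return errors
```

The same rule functions can then be reused at each layer: as database CHECK-constraint mirrors, as application-level guards, and as row filters in the pipeline, keeping the three layers consistent.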
Upload estimation results to Supabase storage and register them with the Estimator API. Final phase of the estimation workflow.
Modern deployment with Databricks Asset Bundles (DAB), supporting multi-environment configurations and CI/CD integration.
Expert-level guidance on the Databricks platform, Apache Spark, Delta Lake, MLflow, notebooks, and cluster management
datum-system
Transform AI agents into experts on dbt project architecture and medallion layer patterns.
Transform AI agents into experts on dbt materializations, providing guidance on choosing the right materialization (table, view, incremental, or ephemeral) for each model.
Transform AI agents into experts on writing production-quality dbt models, providing guidance on CTE structure and style.
Practical DDD patterns for Jakarta EE web applications with cognitive load distribution. Use when designing controllers, entities, services, or evaluating cohesion and load balance.
Guide for DDD strategic design - analyzing domains through structured questioning, conducting stakeholder interviews (PM/domain experts/users), and producing Bounded Context analysis, Context Maps, and Ubiquitous Language. Use when user needs help understanding domain boundaries, planning domain interviews, or structuring DDD strategic artifacts.
Web search via the DDGS metasearch library. Use for searching for unknown documentation, facts, or any web content. Lightweight, no browser required.
CRM integration for tracking deals through pipeline stages with automated status updates
Decision-tree assistant tool. Quickly assesses task complexity and provides dispatch recommendations. Use for: (1) quick task-complexity assessment, (2) agent dispatch recommendations, (3) split-strategy suggestions, (4) parallel-feasibility evaluation.
Neural networks, CNNs, RNNs, Transformers with TensorFlow and PyTorch. Use for image classification, NLP, sequence modeling, or complex pattern recognition.
Use this skill when developing deep learning projects.
Create spike definitions with canonical names and numbered approaches for parallel exploratory implementation. Use when partner has an underdefined feature idea and wants to explore multiple implementation approaches in parallel, when uncertain which technical approach is best, or when comparing alternatives before committing to implementation
Split work across subagents with explicit contracts, interfaces, and merge strategies. Use when parallelizing tasks, distributing workload, or orchestrating multi-agent workflows.
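The explicit contracts mentioned above can be made concrete as data: each subagent receives a declaration of what it may read, what it must produce, and how its output merges back, which also makes parallel-safety checkable. A minimal sketch under those assumptions; `TaskContract` and `plan_is_parallel_safe` are hypothetical names, not an existing framework API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskContract:
    """Explicit contract handed to a subagent: inputs it may read,
    outputs it must produce, and how its results merge back."""
    task_id: str
    inputs: tuple      # artifact names the subagent may read
    outputs: tuple     # artifact names it must produce
    merge_strategy: str  # e.g. "append", "overwrite", "review"

def plan_is_parallel_safe(contracts: list) -> bool:
    """Tasks can run in parallel only if no task writes an artifact
    that another task reads or writes (no write-write or read-write overlap)."""
    for i, a in enumerate(contracts):
        for b in contracts[i + 1:]:
            if set(a.outputs) & (set(b.inputs) | set(b.outputs)):
                return False
            if set(b.outputs) & set(a.inputs):
                return False
    return True
```

Any pair of tasks that fails this check has to be sequenced instead, with the earlier task's outputs listed among the later task's inputs.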
Use when designing cloud infrastructure, CI/CD pipelines, or deployment strategies
Demand generation expert. Use for demand-generation strategies, lead scoring, campaigns, and funnel optimization.
dependency-vetting
Deploy Frappe HRMS code changes to AWS production. Use when you need to deploy Python/API changes to the Frappe Docker container.
Comprehensive guide for deploying Orient to production. Use this skill when deploying changes, updating production, fixing deployment failures, or rolling back. Covers pre-flight checks, environment variables, Docker compose configuration, CI/CD pipeline, smart change detection, and health verification.
Design and implement Azure cloud architectures using best practices for compute, storage, databases, AI services, networking, and governance. Use when building applications on Microsoft Azure or migrating workloads to Azure cloud platform.
Implement applications using Google Cloud Platform (GCP) services. Use when building on GCP infrastructure, selecting compute/storage/database services, designing data analytics pipelines, implementing ML workflows, or architecting cloud-native applications with BigQuery, Cloud Run, GKE, Vertex AI, and other GCP services.
Automates GitHub repository creation and Vercel deployment for Next.js websites. Use when deploying new websites, pushing to production, setting up CI/CD pipelines, or when the user mentions deployment, GitHub, Vercel, or going live.
Expert guide for deploying Next.js apps to Vercel, managing environments, CI/CD pipelines, and production best practices. Use when deploying, setting up automation, or managing production.
Serverless deployment with zero-downtime, multi-environment strategies, and infrastructure validation. Use when deploying Lambda functions, managing environments, or troubleshooting deployment failures.