Skills (SKILL.md) are configuration files that add specific capabilities to AI agents such as Claude Code, Cursor, and Codex.
Browser automation CLI for AI agents. Use for website interaction, form automation, screenshots, scraping, and web app verification. Prefer snapshot refs (@e1, @e2) for deterministic actions.
Internal structural hygiene for judgment-bearing outputs.
Interact with Slack workspaces to check messages, extract data, and automate common tasks.
Apply serverless FaaS patterns for event-driven workloads with minimal infrastructure. Use when cost scales with usage.
Autonomous orchestrator processing manifest work items through the development lifecycle with budget tracking.
Import external documents (PDF, DOCX, PPTX, XLSX, HTML) into editable markdown for rewriting or project integration.
Convert a Claude Code session into a shareable blog post or case study capturing decisions, process, and outcomes.
Top-level driver for the PaperOrchestra pipeline. Read this document and follow its instructions.
Faithful implementation of the Plotting Agent from PaperOrchestra.
Provides comprehensive Tailwind CSS utility-first styling patterns including responsive design, layout utilities, flexbox, grid, spacing, typography, colors, and modern CSS best practices. Use when styling React/Vue/Svelte components, building responsive layouts, implementing design systems, or optimizing CSS workflow.
- **Skill Name**: file-upload
Complete API integration guide for Shopify including GraphQL Admin API, REST Admin API, Storefront API, Ajax API, OAuth authentication, rate limiting, and webhooks. Use when making API calls to Shopify, authenticating apps, fetching product/order/customer data programmatically, implementing cart operations, handling webhooks, or working with API version 2025-10. Requires fetch or axios for JavaScript implementations.
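As a sketch of the call shape this skill describes (in Python rather than fetch/axios for brevity): the GraphQL Admin API takes a POST to the shop's versioned `graphql.json` endpoint with an `X-Shopify-Access-Token` header. The shop name and token below are placeholders; real code should also handle 429 rate-limit responses.

```python
import json
import urllib.request

API_VERSION = "2025-10"

def build_graphql_request(shop: str, token: str, query: str, variables=None):
    """Build the URL, headers, and JSON body for a Shopify GraphQL Admin API call."""
    url = f"https://{shop}.myshopify.com/admin/api/{API_VERSION}/graphql.json"
    headers = {
        "Content-Type": "application/json",
        "X-Shopify-Access-Token": token,  # Admin API access token (OAuth or custom app)
    }
    body = json.dumps({"query": query, "variables": variables or {}})
    return url, headers, body

def run_graphql(shop, token, query, variables=None):
    url, headers, body = build_graphql_request(shop, token, query, variables)
    req = urllib.request.Request(url, data=body.encode(), headers=headers)
    with urllib.request.urlopen(req) as resp:  # network call; check for throttling in real code
        return json.load(resp)
```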
Internal ClaudeMap runtime for turning a repository into a live architecture map and driving that map during walkthroughs. Prefer the public commands in .claude/commands for normal use.
Legal methods for accessing paywalled and geo-blocked content. Use when researching behind paywalls, accessing academic papers, bypassing geographic restrictions, or finding open access alternatives. Covers Unpaywall, library databases, VPNs, and ethical access strategies for journalists and researchers.
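The Unpaywall lookup mentioned above is a single GET per DOI; the v2 endpoint requires a contact email as a query parameter. A minimal URL builder (no network call made here; the response's `is_oa` and `best_oa_location` fields point to any open-access copy):

```python
from urllib.parse import quote, urlencode

def unpaywall_url(doi: str, email: str) -> str:
    """Build the Unpaywall v2 lookup URL for a DOI (email is required by the API)."""
    return f"https://api.unpaywall.org/v2/{quote(doi)}?{urlencode({'email': email})}"
```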
Digital archiving workflows with AI enrichment, entity extraction, and knowledge graph construction. Use when building content archives, implementing AI-powered categorization, extracting entities and relationships, or integrating multiple data sources. Covers patterns from the Jay Rosen Digital Archive project.
Use this skill when creating new files that represent architectural decisions — data models, infrastructure configs, auth boundaries, API contracts, CI/CD pipelines, or event systems. Flags irreversible decisions and forces a discussion about trade-offs before committing.
Python data processing pipelines with modular architecture. Use when building content processing workflows, implementing dispatcher patterns, integrating Google Sheets/Drive APIs, or creating batch processing systems. Covers patterns from rosen-scraper, image-analyzer, and social-scraper projects.
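The dispatcher pattern named here can be sketched without any of the listed projects: a registry maps content types to handler functions, and a decorator registers them. Handler names and fields below are illustrative only.

```python
from typing import Callable

HANDLERS: dict[str, Callable[[dict], dict]] = {}

def handler(content_type: str):
    """Decorator: register a processing function for one content type."""
    def register(fn):
        HANDLERS[content_type] = fn
        return fn
    return register

@handler("article")
def process_article(item: dict) -> dict:
    return {**item, "word_count": len(item.get("text", "").split())}

@handler("image")
def process_image(item: dict) -> dict:
    return {**item, "analyzed": True}

def dispatch(item: dict) -> dict:
    """Route one work item to its registered handler."""
    fn = HANDLERS.get(item["type"])
    if fn is None:
        raise ValueError(f"no handler for {item['type']!r}")
    return fn(item)
```

New content types then plug in by adding one decorated function, which keeps batch loops free of type-specific branching.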
Web scraping with anti-bot bypass, content extraction, undocumented APIs and poison pill detection. Use when extracting content from websites, handling paywalls, implementing scraping cascades or processing social media. Covers requests, trafilatura, Playwright with stealth mode, yt-dlp and instaloader patterns.
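The scraping-cascade idea can be shown abstractly: try extractors cheapest-first (e.g. requests+trafilatura before Playwright), escalating when one fails or returns a poison-pill stub. The 200-character threshold is an assumed heuristic, not a rule from any of the listed tools.

```python
from typing import Callable, Optional

MIN_BODY_CHARS = 200  # poison-pill heuristic: real articles are rarely this short

def cascade(url: str, extractors: list[tuple[str, Callable[[str], str]]]) -> Optional[str]:
    """Return the first plausible extraction, escalating from fast to heavyweight.

    Skips extractors that raise (timeouts, 403s) or return suspiciously short
    text, since soft blocks and cookie walls often come back as tiny stub pages.
    """
    for name, extract in extractors:
        try:
            text = extract(url)
        except Exception:
            continue  # hard failure: escalate to the next tier
        if text and len(text) >= MIN_BODY_CHARS:
            return text
    return None
```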
Amazon keyword research and market opportunity analysis for sellers. Retrieve autocomplete suggestions (long-tail keywords), analyze competitor landscape, and assess market opportunity for any keyword on 12 Amazon marketplaces (US/UK/DE/FR/IT/ES/JP/CA/AU/IN/MX/BR). No API key required. Make sure to use this skill whenever the user mentions Amazon product research, finding products to sell on Amazon, Amazon keyword ideas, niche analysis, competition analysis for Amazon, market opportunity on Amazon, comparing Amazon keywords, evaluating whether a product is worth selling, Amazon autocomplete data, seasonal demand for Amazon products, or anything related to researching what to sell on Amazon — even if they don't explicitly say 'keyword research'. Also trigger when the user asks vague questions like 'is this a good product to sell?', 'what's the competition like for X on Amazon?', 'should I sell X or Y?', or 'what are people searching for on Amazon?'.
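The autocomplete retrieval behind this skill hits Amazon's undocumented completion endpoint; the URL shape and the US marketplace ID below are commonly cited assumptions, not a documented API, and may change without notice.

```python
from urllib.parse import urlencode

US_MARKETPLACE = "ATVPDKIKX0DER"  # widely reported US marketplace ID (assumption)

def autocomplete_url(prefix: str, marketplace: str = US_MARKETPLACE) -> str:
    """Build an Amazon autocomplete request URL for a seed keyword."""
    params = {"limit": 11, "prefix": prefix, "alias": "aps", "mid": marketplace}
    return "https://completion.amazon.com/api/2017/suggestions?" + urlencode(params)
```

Fetching that URL (not done here) returns JSON suggestions that serve as long-tail keyword candidates; swapping `mid` targets other marketplaces.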
Survival analysis and time-to-event modeling with scikit-survival. Cox proportional hazards (standard/elastic net), Random Survival Forests, Gradient Boosting, SVMs for censored data. C-index (Harrell/Uno), Brier score, time-dependent AUC evaluation. Kaplan-Meier, Nelson-Aalen, competing risks. scikit-learn Pipeline/GridSearchCV compatible. For frequentist regression use statsmodels; for Bayesian survival use pymc; for simpler parametric models use lifelines.
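The Kaplan-Meier estimator mentioned above is simple enough to sketch without scikit-survival: at each distinct event time t, survival is multiplied by (1 - deaths/at-risk), and censored subjects leave the risk set without contributing a drop.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate S(t) at each distinct event time.

    times: observed durations; events: 1 = event occurred, 0 = censored.
    Returns a list of (t, S(t)) pairs, one per time with at least one event.
    """
    data = sorted(zip(times, events))
    n = len(data)
    s, out, i = 1.0, [], 0
    while i < n:
        t = data[i][0]
        d = sum(1 for tt, e in data if tt == t and e == 1)  # events at t
        at_risk = n - i                                     # subjects with time >= t
        if d:
            s *= 1 - d / at_risk
            out.append((t, s))
        i += sum(1 for tt, _ in data if tt == t)            # advance past ties
    return out
```

scikit-survival's `kaplan_meier_estimator` adds confidence intervals and vectorization on top of this same recurrence.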
Pure Python DICOM library for medical imaging (CT, MRI, X-ray, ultrasound). Read/write DICOM files, extract pixel data as NumPy arrays, access/modify metadata tags, apply windowing (VOI LUT), anonymize PHI, build DICOM from scratch, process series into 3D volumes. For whole-slide pathology images use histolab; for NIfTI neuroimaging use nibabel.
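The windowing step this skill applies (in pydicom, via `ds.pixel_array` plus the `WindowCenter`/`WindowWidth` tags) follows the linear VOI LUT function from DICOM PS3.3 C.11.2.1.2; a simplified scalar version, independent of pydicom:

```python
def apply_window(pixels, center: float, width: float, ymin: int = 0, ymax: int = 255):
    """Linear VOI LUT windowing (simplified from DICOM PS3.3 C.11.2.1.2).

    Values below the window clamp to ymin, above it to ymax, linear in between.
    E.g. center=40, width=400 is a typical soft-tissue CT window.
    """
    lo = center - 0.5 - (width - 1) / 2
    hi = center - 0.5 + (width - 1) / 2
    out = []
    for x in pixels:
        if x <= lo:
            out.append(ymin)
        elif x > hi:
            out.append(ymax)
        else:
            y = ((x - (center - 0.5)) / (width - 1) + 0.5) * (ymax - ymin) + ymin
            out.append(round(y))
    return out
```

In practice the same arithmetic runs vectorized over the NumPy array pydicom returns, after applying rescale slope/intercept.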
Annotated data matrices for single-cell genomics. AnnData stores expression data (X) with observation metadata (obs), variable metadata (var), layers, embeddings (obsm/varm), graphs (obsp/varp), and unstructured data (uns). Use for .h5ad/.zarr file handling, dataset concatenation, and scverse ecosystem integration. For analysis workflows use scanpy; for probabilistic models use scvi-tools.
Geniml is a Python library for genomic interval machine learning. Train and apply region2vec embeddings to convert BED file regions into numeric vectors, load and index genomic interval datasets for ML pipelines, search embedding spaces with BEDSpace, and evaluate embedding quality. Use for chromatin accessibility clustering, regulatory element classification, cross-sample region comparison, and building ML models on genomic intervals.
Python API v2 for programming Opentrons OT-2 and Flex liquid handling robots. Write protocols as Python files with metadata and a run() function; control pipettes, labware, and hardware modules (thermocycler, heater-shaker, magnetic, temperature). Simulate locally with opentrons_simulate, then upload to the robot app. Use PyLabRobot instead for hardware-agnostic scripts that run on Hamilton, Tecan, or other vendors.
PyTorch Geometric (PyG) for graph neural networks. Node classification, graph classification, link prediction with GCN, GAT, GraphSAGE, GIN layers. Message passing framework, mini-batch processing, heterogeneous graphs, neighbor sampling for large-scale learning, model explainability. Supports molecular property prediction (QM9, MoleculeNet), social networks, knowledge graphs, 3D point clouds. For non-graph deep learning use PyTorch directly; for traditional graph algorithms use NetworkX.
Access US Patent and Trademark Office (USPTO) patent data via the PatentsView REST API and Google Patents Public Data (BigQuery). Use it to search patents by inventor, assignee, CPC classification, or keywords; download full patent metadata and claims; analyze patent portfolios; and track technology trends. Ideal for IP landscape analysis, competitor monitoring, prior art searches, and technology forecasting in life sciences and biotech.
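A PatentsView search is a JSON payload with a query (`q`), a field list (`f`), and options (`o`) sent to `https://search.patentsview.org/api/v1/patent/` (an API key header is required for the hosted service). The field names and operators below match the documented query language as I understand it, but should be checked against the current API reference.

```python
import json

def patentsview_query(keyword: str, year_from: int) -> dict:
    """Build a PatentsView search payload: title keyword AND grant date filter."""
    return {
        "q": {"_and": [
            {"_text_any": {"patent_title": keyword}},
            {"_gte": {"patent_date": f"{year_from}-01-01"}},
        ]},
        "f": ["patent_id", "patent_title", "patent_date"],
        "o": {"size": 100},
    }
```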
Search and retrieve cryo-EM density maps, fitted atomic models, and metadata from the Electron Microscopy Data Bank (EMDB) REST API. Query by keyword, resolution, method, or organism; fetch entry details, map download URLs, associated PDB models, and publications. No authentication required. For experimental atomic coordinates use pdb-database; for AlphaFold predicted structures use alphafold-database-access.
Molecular featurization hub (100+ featurizers) for ML. Convert SMILES to numerical representations via fingerprints (ECFP, MACCS, MAP4), descriptors (RDKit 2D, Mordred), pretrained models (ChemBERTa, GIN, Graphormer), and pharmacophore features. Scikit-learn compatible transformers with parallelization, caching, and state persistence. For QSAR, virtual screening, similarity search, and deep learning on molecules.
Query RCSB PDB (200K+ experimental structures) via rcsb-api Python SDK. Text, attribute, sequence, and structure similarity search. Fetch metadata via Schema or GraphQL. Download PDB/mmCIF coordinate files. For AlphaFold predicted structures use alphafold-database-access.