autocli
Skills (SKILL.md) are configuration files that add specific capabilities to AI agents such as Claude Code, Cursor, and Codex.
article-publisher
> Complete technical SEO audit, fix, and monitoring system. From crawlability to Core Web Vitals to international SEO — everything search engines need to find, crawl, index, and rank your site.
Search Singapore property rental and sale listings with flexible filters. Use when asked to search Singapore properties, find rental or sale listings, check property prices near MRT stations, or compare commute times. Supports filtering by listing type (rent/sale), property type (HDB/Condo/Landed), bedrooms, bathrooms, price range, size, TOP year, MRT station codes, distance to MRT, room type, availability, and commute time to a destination. Outputs JSON to stdout.
Design websites and applications that AI agents can consume, navigate, and interact with. Use when building any site, app, or product that agents will use as an end-user — not just crawl or index. Covers semantic structure, accessibility-as-agent-interface, machine-readable data, API-first patterns, and the emerging protocols (llms.txt, MCP, NLWeb, A2UI) that make sites agent-ready. Triggers on: agent-friendly, agent-readable, agent-accessible, AX, agent experience, agentic web, dual-interface, machine-readable, llms.txt, MCP integration, NLWeb, accessibility tree, ARIA for agents, structured data, JSON-LD, Schema.org, API-first design, build for agents, agent-ready.
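The machine-readable data this entry mentions (JSON-LD, Schema.org) usually takes the form of a small block embedded in the page via `<script type="application/ld+json">`. A minimal sketch; all property values below are placeholders, not taken from any real site:

```json
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "Example Agent-Ready App",
  "description": "A placeholder description an agent can parse without rendering the page.",
  "applicationCategory": "DeveloperApplication",
  "offers": {
    "@type": "Offer",
    "price": "0",
    "priceCurrency": "USD"
  }
}
```

Agents and crawlers read this block directly from the HTML, so the page's core facts stay available even when its visual layout is opaque to them.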
Smooth CLI is a browser for AI agents to interact with websites, authenticate, scrape data, and perform complex web-based tasks using natural language.
A browser-based YouTube channel discovery and scraping tool.
Enrich CRM contact records by filling missing fields from multiple sources. Works with DuckDB workspace entries or standalone JSON data.
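The fill-missing-fields behavior described above can be sketched in a few lines; the field names and source ordering here are illustrative assumptions, not the skill's actual schema:

```python
def enrich(record: dict, sources: list[dict]) -> dict:
    """Fill empty fields in `record` from `sources`; the earliest source wins.

    Only missing/empty fields are filled; existing values are never overwritten.
    """
    out = dict(record)
    for source in sources:
        for key, value in source.items():
            if not out.get(key) and value:
                out[key] = value
    return out

# Hypothetical contact with gaps, enriched from two sources.
contact = {"name": "Ada Example", "email": "", "phone": None}
enriched = enrich(
    contact,
    [{"email": "ada@example.com"},
     {"email": "old@example.com", "phone": "+65 0000 0000"}],
)
```

The same merge applies whether the records come from a DuckDB workspace or standalone JSON; only the loading step differs.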
Visual context analyzer for web pages. Provides AI agents with the ability to "see" web applications through screenshots, accessibility scans, DOM snapshots, and element descriptions.
Deep web scraping, screenshots, PDF parsing, and website crawling using Firecrawl API
Build and deploy Apify actors for web scraping and automation. Use for serverless scraping, data extraction, browser automation, and API integrations with Python.
Apify JS SDK Documentation - Web scraping, crawling, and Actor development
Social media scraping, business data, and e-commerce via Apify Actors. Use when scraping Twitter, Instagram, LinkedIn, TikTok, YouTube, Facebook, Google Maps, or Amazon.
Use bdg CLI for browser automation via Chrome DevTools Protocol. Provides direct CDP access (60+ domains, 300+ methods) for DOM queries, navigation, screenshots, network control, and JavaScript execution. Use this skill when you need to automate browsers, scrape dynamic content, or interact with web pages programmatically.
End-to-end automated daily competition workflow. Orchestrates scrape, analyze, compose, and notify skills - all unattended for cron execution.
This skill provides enterprise consulting-grade methodologies for conducting comprehensive company research, competitive analysis, and market intelligence using Bright Data's professional search and w
RAG pipeline, embeddings, LLM interactions, and flow orchestration.
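The retrieval step of a RAG pipeline reduces to nearest-neighbor search over embeddings. A dependency-free cosine-similarity sketch; the vectors are toy values, where a real pipeline would use an embedding model:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query: list[float], docs: dict[str, list[float]], k: int = 1) -> list[str]:
    """Return the k document ids whose embeddings are closest to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
    return ranked[:k]

# Toy 3-dimensional "embeddings".
index = {"doc_a": [1.0, 0.0, 0.0], "doc_b": [0.0, 1.0, 0.0], "doc_c": [0.9, 0.1, 0.0]}
best = top_k([1.0, 0.0, 0.0], index, k=2)
```

The retrieved documents are then concatenated into the LLM prompt; the orchestration layer decides how many to include and in what order.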
Scrapes docs.snowflake.com sections to Markdown with SQLite caching (7-day expiration).
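The caching pattern described above (SQLite-backed, 7-day expiration) can be sketched as follows; the table name and the stub fetcher are illustrative, not the skill's actual schema:

```python
import sqlite3
import time

WEEK = 7 * 24 * 3600  # 7-day expiration, in seconds

def get_cached(db: sqlite3.Connection, url: str, fetch) -> str:
    """Return cached page text for `url`, refetching if older than 7 days."""
    db.execute(
        "CREATE TABLE IF NOT EXISTS cache (url TEXT PRIMARY KEY, body TEXT, fetched_at REAL)"
    )
    row = db.execute("SELECT body, fetched_at FROM cache WHERE url = ?", (url,)).fetchone()
    if row and time.time() - row[1] < WEEK:
        return row[0]  # fresh enough: serve from cache
    body = fetch(url)  # e.g. an HTTP GET plus HTML-to-Markdown conversion
    db.execute("INSERT OR REPLACE INTO cache VALUES (?, ?, ?)", (url, body, time.time()))
    db.commit()
    return body

# In-memory demo with a stub fetcher: the second call is a cache hit.
conn = sqlite3.connect(":memory:")
first = get_cached(conn, "https://docs.snowflake.com/example", lambda u: "# Example page")
second = get_cached(conn, "https://docs.snowflake.com/example", lambda u: "should not be fetched")
```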
Single source of truth and librarian for ALL Claude official documentation. Manages local documentation storage, scraping, discovery, and resolution. Use when finding, locating, searching, or resolving Claude documentation; discovering docs by keywords, category, tags, or natural language queries; scraping from sitemaps or docs maps; managing index metadata (keywords, tags, aliases); or rebuilding index from filesystem. Run scripts to scrape, find, and resolve documentation. Handles doc_id resolution, keyword search, natural language queries, category/tag filtering, alias resolution, sitemap.xml parsing, docs map processing, markdown subsection extraction for internal use, hash-based drift detection, and comprehensive index maintenance.
Single source of truth and librarian for ALL Gemini CLI documentation. Manages local documentation storage, scraping, discovery, and resolution. Use when finding, locating, searching, or resolving Gemini CLI documentation; discovering docs by keywords, category, tags, or natural language queries; scraping from llms.txt; managing index metadata (keywords, tags, aliases); or rebuilding index from filesystem. Run scripts to scrape, find, and resolve documentation. Handles doc_id resolution, keyword search, natural language queries, category/tag filtering, alias resolution, llms.txt parsing, markdown subsection extraction for internal use, hash-based drift detection, and comprehensive index maintenance.
Autonomous agent for discovering, evaluating, and integrating relevant GitHub repositories into BidDeed.AI and Life OS ecosystems.
Automate Ideal Direct finance and supply chain SOPs with browser-based workflow guidance. Handles payroll, working hours, purchase orders, and audits using Playwright MCP.
Build and scale partner ecosystems that drive revenue and platform adoption. Use when building partner programs from scratch, tiering partnerships, managing co-marketing, making build-vs-partner decisions, or structuring crawl-walk-run partner deployment.
Universal Web Scraper workflow skill. Use this skill when the user needs AI-driven data extraction from 55+ Actors across all major platforms; it automatically selects the best Actor for the task. The operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.
firecrawl-scraper workflow skill. Use this skill when the user needs deep web scraping, screenshots, PDF parsing, or website crawling via the Firecrawl API: deep content extraction from web pages, page interaction (clicking, scrolling, etc.), screenshots, or PDF parsing. The operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.
Playwright Go Automation Expert workflow skill. Use this skill when the user needs expert capability for robust, stealthy, and efficient browser automation with Playwright Go. The operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.
Go-Rod Browser Automation Master workflow skill. Use this skill when the user needs a comprehensive guide to browser automation and web scraping with go-rod (Chrome DevTools Protocol), including stealth anti-bot-detection patterns. The operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.
Indexing Issue Auditor & Technical SEO Architect workflow skill. Use this skill when the user needs a high-level technical SEO and site-architecture auditor; invoke it to scan local or live environments for indexing, crawl-budget, and structural errors. The operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.
Prometheus Configuration workflow skill. Use this skill when the user needs a complete guide to Prometheus setup, metric collection, scrape configuration, and recording rules. The operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.
Technical SEO Audit workflow skill. Use this skill when the user needs to audit technical SEO across crawlability, indexability, security, URLs, mobile, Core Web Vitals, structured data, JavaScript rendering, and related platform signals such as robots.txt and AI crawler access. The operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.
X (Twitter) Scraper (Xquik) workflow skill. Use this skill when the user needs an X (Twitter) data platform: tweet search, user lookup, follower extraction, engagement metrics, giveaway draws, monitoring, webhooks, 19 extraction tools, and an MCP server. The operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.
ADHX - X/Twitter Post Reader workflow skill. Use this skill when the user needs to fetch any X/Twitter post as clean, LLM-friendly JSON: it converts x.com, twitter.com, or adhx.com links into structured data with full article content, author info, and engagement metrics, with no scraping or browser required. The operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.
Lead Generation workflow skill. Use this skill when the user needs to scrape leads from multiple platforms using Apify Actors. The operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.
Dispatcher skill. Your job (as the main thread) is to **mentor** the `domain-playwright-lead` subagent while it builds a production-grade Python Playwright script by driving MCP browser tools one veri
The CLI uses Chrome/Chromium via CDP directly. Install via `npm i -g agent-browser`, `brew install agent-browser`, or `cargo install agent-browser`. Run `agent-browser install` to download Chrome. Exi
Prepare for investor calls by pulling upcoming meetings from Google Calendar, deeply researching each investor and their firm (website scraping, portfolio analysis, thesis extraction), checking for competitor conflicts, and outputting an honest prep sheet with compatibility assessments. Use when asked to prep for investor meetings, fundraising calls, VC meetings, or demo day.
Publish tweets and threads to X draft using Playwright browser automation.
Fetch any X/Twitter post as clean LLM-friendly JSON. Converts x.com, twitter.com, or adhx.com links into structured data with full article content, author info, and engagement metrics. No scraping or browser required.
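A plausible client-side sketch of the link conversion, assuming the service works by host substitution on the post URL. That is an assumption for illustration only; the actual endpoint and response shape are not documented here:

```python
from urllib.parse import urlsplit, urlunsplit

def to_adhx(url: str) -> str:
    """Rewrite an x.com/twitter.com post link to the adhx.com host (assumed scheme)."""
    parts = urlsplit(url)
    if parts.netloc not in {"x.com", "twitter.com", "www.x.com", "www.twitter.com"}:
        raise ValueError(f"not an X/Twitter link: {url}")
    # Keep path, query, and fragment; swap only the host.
    return urlunsplit((parts.scheme, "adhx.com", parts.path, parts.query, parts.fragment))

link = to_adhx("https://x.com/someuser/status/1234567890")
```

Because the rewrite is pure string manipulation, it needs no scraping or browser, matching the skill's description.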
Complete guide to Prometheus setup, metric collection, scrape configuration, and recording rules.
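A minimal `prometheus.yml` showing the scrape-configuration pieces named above; the job names and targets are placeholders:

```yaml
global:
  scrape_interval: 15s          # how often Prometheus scrapes each target

scrape_configs:
  - job_name: "prometheus"      # self-scrape of the Prometheus server
    static_configs:
      - targets: ["localhost:9090"]

  - job_name: "app"             # placeholder application exporter
    metrics_path: /metrics
    static_configs:
      - targets: ["app.example.internal:8080"]
```

Recording rules live in separate rule files referenced from this config via `rule_files`, precomputing expensive queries on the same scrape cadence.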