Create and update Linear issues via CLI (write operations)
Skills (SKILL.md) are configuration files that add specific capabilities to AI agents such as Claude Code, Cursor, and Codex.
List Linear project milestones via CLI (read-only operations)
Create and update Linear project milestones via CLI (write operations)
List and get Linear projects via CLI (read-only operations)
Create and update Linear projects via CLI (write operations)
Add project repository links to PROJECTS.md when user mentions completing, starting, or sharing a hands-on project. Use when user references external repos or implementation work.
Internal linking strategy and anchor text optimization patterns. Use when planning internal links or optimizing site structure.
Use this skill when the user wants to create, manage, or optimize LinkedIn posts. It uses the LinkedIn MCP to create posts via the official API, with no browser automation required. Ideal for creating professional, technical, or promotional content following engagement best practices.
Expert assistance with Linux bash commands, shell scripting, system administration, file operations, and command-line utilities. Use this when working with bash scripts, Linux system operations, or command-line tasks.
Filesystem operations within workspace boundaries
Systemd service management and log access
liquid-biopsy-analytics-agent
Apple's Liquid Glass design system for iOS 26+ and iPadOS 26+. Use when: (1) building iOS 26+ UI with glassEffect, (2) implementing GlassEffectContainer for multiple elements, (3) working with glass morphing transitions, (4) integrating glass effects in navigation layers, (5) ensuring glass effect accessibility, (6) migrating from UIKit to SwiftUI glass APIs.
Build comprehensive randomization lists for creative entropy. Use when you need to create or expand lists of story elements (professions, locations, objects, names, etc.) for use with entropy tools. Leverages research sources like Kiwix/Wikipedia to build lists with good variety and size.
List all available projects in the Conductor workspace with status summaries.
Display all available skills organized by category with descriptions.
Guide for developing Lit web components in the Common UI v2 system (@commontools/ui/v2). Use when creating or modifying ct- prefixed components, implementing theme integration, working with Cell abstractions, or building reactive UI components that integrate with the Common Tools runtime.
Unified LLM API with LiteLLM. Call 100+ LLM providers with one interface. Use for multi-provider AI, cost optimization, fallbacks, and LLM gateway deployment.
AI agent skills for working with the Lithent library.
Little Schemer Skill
Delivering real-time updates to users via WebSocket, SSE, or Push API for live notification systems with proper architecture, queuing, and delivery mechanisms.
Real-time video broadcasting using RTMP, HLS, WebRTC protocols with streaming servers and cloud platforms for low-latency live video delivery.
LiveKit omni-modal continuous coaching with stick-breaking color selection.
Expert in live streaming, WebRTC, and real-time video/audio
Build LLM applications with LlamaIndex. Create indexes, query engines, and data connectors. Use for RAG applications, document search, and knowledge base systems.
Large Language and Vision Assistant. Enables visual instruction tuning and image-based conversations. Combines CLIP vision encoder with Vicuna/LLaMA language models. Supports multi-turn image chat, visual question answering, and instruction following. Use for vision-language chatbots or image understanding tasks. Best for conversational image analysis.
Automatically applies when building LLM applications. Ensures proper async patterns for LLM calls, streaming responses, token management, retry logic, and error handling.
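The entry above mentions retry logic for LLM calls. As a hedged, self-contained sketch of that one concern, the helper below retries a flaky call with exponential backoff and jitter; the name `call_with_retries` and the parameters are illustrative, and production code would catch only transient errors (rate limits, timeouts) rather than every exception.

```python
import random
import time

def call_with_retries(fn, *, attempts=4, base_delay=0.01):
    """Retry a flaky call with exponential backoff plus jitter.

    Hypothetical sketch: real LLM client code would narrow the
    except clause to transient error types.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            # Backoff grows 1x, 2x, 4x ... the base delay, with jitter.
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))

failures = {"left": 2}
def flaky():
    if failures["left"] > 0:
        failures["left"] -= 1
        raise TimeoutError("transient")
    return "ok"

print(call_with_retries(flaky))  # succeeds once the transient failures stop
```

Jitter spreads out retries from many concurrent clients so they do not all hammer a rate-limited API at the same instant.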
This skill should be used when users want to build LLM-powered applications using LangChain. It provides patterns for initializing any LLM provider (OpenAI, Anthropic, Google, xAI), building agent loops with tools, and implementing structured output. Use this skill when users ask to create chatbots, AI agents, or applications that need LLM integration with tool calling or structured responses.
Guidelines for working with LLM context stored in the .llm/ directory.
Reduce LLM API costs without sacrificing quality. Covers prompt caching (Anthropic), local response caching, prompt compression, debouncing triggers, and cost analysis. Use when building LLM-powered features, analyzing API costs, optimizing prompts, or implementing caching strategies.
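Local response caching, one of the techniques the entry above lists, can be sketched in a few lines: key the cache on a hash of the model, prompt, and sampling parameters so identical requests are paid for only once. The class name `ResponseCache` and the `get_or_call` signature are assumptions for illustration; a production cache would add eviction, TTLs, and persistence.

```python
import hashlib
import json

class ResponseCache:
    """Tiny in-memory cache for LLM responses (illustrative sketch)."""

    def __init__(self):
        self._store = {}

    def _key(self, model: str, prompt: str, **params) -> str:
        # Canonical JSON keeps the hash stable across param ordering.
        payload = json.dumps(
            {"model": model, "prompt": prompt, "params": params},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()

    def get_or_call(self, model: str, prompt: str, call, **params):
        key = self._key(model, prompt, **params)
        if key not in self._store:
            # Cache miss: pay for one API call, then reuse the result.
            self._store[key] = call(model, prompt, **params)
        return self._store[key]

calls = []
def fake_llm(model, prompt, **params):
    calls.append(prompt)
    return f"echo:{prompt}"

cache = ResponseCache()
cache.get_or_call("some-model", "hi", fake_llm, temperature=0)
cache.get_or_call("some-model", "hi", fake_llm, temperature=0)
print(len(calls))  # the second request is served from cache
```

Note that caching only pays off for deterministic settings (e.g. temperature 0); including the sampling parameters in the key prevents a cached greedy response from masking an intentionally varied one.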
LLM content governance and compliance standards. Use when llm governance guidance is required.
Comprehensive guide to LLM safety and guardrails implementation for AI systems.
Use when interacting with any LLM. Explains the available inference endpoints so the agent can select suitable models.
Comprehensive guide to using LLMs as judges for automated evaluation including prompt patterns, calibration, bias reduction, and multi-judge ensembles
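One aggregation pattern for the multi-judge ensembles the entry above mentions is to take the median of per-judge scores, which is more robust to a single biased judge than the mean. The sketch below assumes each judge returns a score in [0, 1]; the function name and the 0.5 pass threshold are illustrative, and real judge scores would come from separate LLM calls with calibrated rubrics.

```python
from statistics import median

def ensemble_score(judge_scores, threshold=0.5):
    """Aggregate per-judge scores with a median and apply a pass threshold.

    Illustrative sketch: the median discards a single outlier judge,
    whereas a mean would let one extreme score drag the result.
    """
    agg = median(judge_scores)
    return agg, agg >= threshold

score, passed = ensemble_score([0.9, 0.8, 0.2])
print(score, passed)  # the low outlier 0.2 does not sink the verdict
```

With three judges at 0.9, 0.8, and 0.2, the median is 0.8 and the sample passes; a mean (0.63) would sit much closer to the threshold despite two strong votes.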
Comprehensive guide to securing LLM applications including prompt injection prevention, jailbreak detection, guardrails, and red teaming methodologies
LLM inference infrastructure, serving frameworks (vLLM, TGI, TensorRT-LLM), quantization techniques, batching strategies, and streaming response patterns. Use when designing LLM serving infrastructure, optimizing inference latency, or scaling LLM deployments.
LLMs, prompt engineering, RAG systems, LangChain, and AI application development
Research best practices via MCP Ref/Context7/WebSearch and create documentation (guide/manual/ADR/research). Single research, multiple output types.
Creates design_guidelines.md for frontend projects. An L3 Worker invoked CONDITIONALLY when hasFrontend is detected.
LobeChat - Open-source AI agent workspace with multi-provider LLM support, plugin system, knowledge base RAG, 505+ agents, and self-hosting options via Docker/Vercel
Access LobeChat for AI chat, knowledge base queries, and multi-model routing.
Run local BLAST searches using BLAST+ command-line tools. Use when running fast unlimited searches, building custom databases, performing large-scale analysis, or when NCBI servers are slow or unavailable.
Enforces local-first architecture principles for Breath of Now. Use this skill when working with data, state management, or sync features. Ensures IndexedDB (Dexie.js) is always the source of truth.
Comprehensive guide for using Local Skills MCP - creating skills in the right locations, understanding skill directories, setup, and configuration. Use when creating new skills, deciding where to save skills, setting up the MCP server, or understanding how skill aggregation works.
Set up and manage local skills for automatic matching and invocation
Reference for LocalStack AWS service availability by tier (Free/Base/Ultimate). Essential for KECS development to understand which AWS-compatible services can be used locally without cost.
Designs location-based augmented reality experiences with geospatial anchoring, GPS integration, and real-world interactive overlays.
Set up centralized logging with ELK, Loki, or Splunk for log management
log-product-standards-issues