Conduct comprehensive SSH security assessments including enumeration, credential attacks, vulnerability exploitation, tunneling techniques, and post-exploitation activities. This skill covers the complete methodology for testing SSH service security.
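As a minimal sketch of the enumeration step, the version banner an SSH server sends on connect can be grabbed with nothing but the Python standard library (the target address below is a placeholder):

```python
import socket

def grab_ssh_banner(host: str, port: int = 22, timeout: float = 5.0) -> str:
    """Read the SSH identification banner (e.g. 'SSH-2.0-OpenSSH_8.9') sent on connect."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        banner = sock.recv(256).decode(errors="replace").strip()
    return banner

if __name__ == "__main__":
    # Replace with an in-scope target from the engagement.
    print(grab_ssh_banner("192.0.2.10"))
```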
Provide systematic methodologies for discovering and exploiting privilege escalation vulnerabilities on Windows systems during penetration testing engagements.
Execute comprehensive client-side injection vulnerability assessments on web applications to identify XSS and HTML injection flaws, demonstrate exploitation techniques for session hijacking and credential theft, and validate input sanitization and output encoding mechanisms.
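A minimal first-pass probe for unencoded reflection, assuming a hypothetical in-scope endpoint and parameter name; it only flags a missing-encoding signal and is not a full XSS proof:

```python
import requests

MARKER = '"><xss-probe-1337>'  # distinctive string that should be HTML-encoded on output

def probe_reflection(url: str, param: str) -> bool:
    """Return True if the marker is reflected verbatim (unencoded) in the response body."""
    resp = requests.get(url, params={param: MARKER}, timeout=10)
    return MARKER in resp.text

if __name__ == "__main__":
    # Hypothetical in-scope endpoint and parameter; adjust to the application under test.
    if probe_reflection("https://target.example/search", "q"):
        print("Unencoded reflection found - follow up with context-aware XSS payloads.")
```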
Deep specialist in Claude Code, Anthropic's CLI. Maximizes productivity with shortcuts, hooks, MCPs, advanced configuration, workflows, CLAUDE.md, memory, sub-agents, permissions, and ecosystem integrations.
Query BindingDB for measured drug-target binding affinities (Ki, Kd, IC50, EC50). Search by target (UniProt ID), compound (SMILES/name), or pathogen. Essential for drug discovery, lead optimization, polypharmacology analysis, and structure-activity relationship (SAR) studies.
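As a rough sketch of an affinity lookup by UniProt target: the request below assumes BindingDB's RESTful web services, and the endpoint path and parameter names are assumptions to check against the current BindingDB API documentation:

```python
import requests

# Assumed BindingDB REST endpoint; verify path and parameters in the BindingDB web-services docs.
URL = "https://bindingdb.org/axis2/services/BDBService/getLigandsByUniprots"

def ligands_for_target(uniprot_id: str, affinity_cutoff_nm: int = 10000) -> dict:
    """Fetch ligands with measured affinities (Ki/Kd/IC50/EC50) for a UniProt target."""
    params = {
        "uniprot": uniprot_id,          # e.g. "P00533" (EGFR)
        "cutoff": affinity_cutoff_nm,   # keep only affinities below this value (nM)
        "response": "application/json",
    }
    resp = requests.get(URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    data = ligands_for_target("P00533")
    print(type(data))
```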
Query JASPAR for transcription factor binding site (TFBS) profiles (PWMs/PFMs). Search by TF name, species, or class; scan DNA sequences for TF binding sites; compare matrices; essential for regulatory genomics, motif analysis, and GWAS regulatory variant interpretation.
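A sketch of a profile search and PFM retrieval against the JASPAR REST API; the paths and query parameters shown are assumptions to verify against the JASPAR API documentation:

```python
import requests

# JASPAR REST API base; paths and parameters below should be confirmed against the JASPAR docs.
BASE = "https://jaspar.genereg.net/api/v1"

def search_tf_profiles(tf_name: str, species_tax_id: int = 9606) -> list[dict]:
    """Search matrix profiles by TF name (e.g. 'CTCF') for a species (9606 = human)."""
    resp = requests.get(
        f"{BASE}/matrix/",
        params={"search": tf_name, "tax_id": species_tax_id, "format": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

def get_pfm(matrix_id: str) -> dict:
    """Retrieve a single profile (PFM plus metadata) by matrix ID, e.g. 'MA0139.1'."""
    resp = requests.get(f"{BASE}/matrix/{matrix_id}/", params={"format": "json"}, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    hits = search_tf_profiles("CTCF")
    print([h.get("matrix_id") for h in hits[:5]])
```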
Query the Monarch Initiative knowledge graph for disease-gene-phenotype associations across species. Integrates OMIM, ORPHANET, HPO, ClinVar, and model organism databases. Use for rare disease gene discovery, phenotype-to-gene mapping, cross-species disease modeling, and HPO term lookup.
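A sketch of a knowledge-graph search; the base URL, path, and response fields are assumptions and should be confirmed against the current Monarch API documentation:

```python
import requests

# Assumed Monarch Initiative API base and paths; verify against the current Monarch API docs.
BASE = "https://api.monarchinitiative.org/v3/api"

def search_entities(query: str, limit: int = 10) -> list[dict]:
    """Search the knowledge graph for diseases, genes, or phenotypes matching a term."""
    resp = requests.get(f"{BASE}/search", params={"q": query, "limit": limit}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("items", [])

if __name__ == "__main__":
    # Example: look up a disease name and print identifiers such as MONDO or HP IDs.
    for item in search_entities("Marfan syndrome"):
        print(item.get("id"), "-", item.get("name"))
```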
Automate web scraping and data extraction with Apify -- run Actors, manage datasets, create reusable tasks, and retrieve crawl results through the Composio Apify integration.
Automate Apollo.io lead generation -- search organizations, discover contacts, enrich prospect data, manage contact stages, and build targeted outreach lists -- using natural language through the Composio MCP integration.
Automate customer engagement workflows -- broadcast triggers, message analytics, segment management, and newsletter tracking -- through Customer.io via the Composio MCP integration.
Automate ElevenLabs text-to-speech workflows -- generate speech from text, browse and inspect voices, check subscription limits, list models, stream audio, and retrieve history via the Composio MCP integration.
Automate Google Search Console tasks via Rube MCP (Composio): query search analytics, list sites, inspect URLs, submit sitemaps, monitor search performance. Always search tools first for current schemas.
Automate Google Search Console tasks via Rube MCP (Composio): search performance, URL inspection, sitemaps, and indexing status. Always search tools first for current schemas.
Automate Hunter.io email intelligence -- search domains for email addresses, find specific contacts, verify email deliverability, manage leads, and monitor account usage -- using natural language through the Composio MCP integration.
Automate New Relic observability workflows -- manage alert policies, notification channels, alert conditions, and monitor applications and browser apps via the Composio MCP integration.
Automate Replicate AI model operations -- run predictions, upload files, inspect model schemas, list versions, and manage prediction history via the Composio MCP integration.
The Wave Accounting toolkit is not currently available as a native integration; no Wave-specific tools were found on the Composio platform. This skill is a placeholder pending a future integration.
Run evaluations for Hugging Face Hub models using inspect-ai and lighteval on local hardware. Use for backend selection, local GPU evals, and choosing between vLLM / Transformers / accelerate. Not for HF Jobs orchestration, model-card PRs, .eval_results publication, or community-evals automation.
Look up and read Hugging Face paper pages in markdown, and use the papers API for structured metadata such as authors, linked models/datasets/spaces, Github repo and project page. Use when the user shares a Hugging Face paper page URL, an arXiv URL or ID, or asks to summarize, explain, or analyze an AI research paper.
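A sketch of the structured-metadata lookup; the endpoint shape is an assumption to confirm against the Hugging Face Hub API docs:

```python
import requests

def paper_metadata(arxiv_id: str) -> dict:
    """Fetch structured metadata for a Hugging Face paper page by its arXiv ID."""
    # Assumed endpoint shape; confirm against the Hugging Face Hub API documentation.
    resp = requests.get(f"https://huggingface.co/api/papers/{arxiv_id}", timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    meta = paper_metadata("1706.03762")  # "Attention Is All You Need"
    print(sorted(meta.keys()))  # inspect which metadata fields are returned
```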
Build a composable CLI for Codex from API docs, an OpenAPI spec, existing curl examples, an SDK, a web app, an admin tool, or a local script. Use when the user wants Codex to create a command-line tool that can run from any repo, expose composable read/write commands, return stable JSON, manage auth, and pair with a companion skill.
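A minimal sketch of the pattern: subcommands wrapping read/write API calls, auth pulled from the environment, and stable JSON printed to stdout. Every name here (acme, ACME_API_TOKEN, https://api.example.com, /v1/items) is a hypothetical placeholder:

```python
#!/usr/bin/env python3
"""Sketch of a composable CLI: read/write subcommands over an HTTP API, stable JSON on stdout."""
import argparse
import json
import os
import sys

import requests

BASE_URL = os.environ.get("ACME_BASE_URL", "https://api.example.com")
TOKEN = os.environ.get("ACME_API_TOKEN", "")  # auth from the environment keeps the CLI repo-agnostic


def _request(method: str, path: str, **kwargs) -> dict:
    resp = requests.request(
        method,
        f"{BASE_URL}{path}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
        **kwargs,
    )
    resp.raise_for_status()
    return resp.json()


def cmd_list(args: argparse.Namespace) -> dict:
    """Read command: list items, paginated by --limit."""
    return _request("GET", "/v1/items", params={"limit": args.limit})


def cmd_create(args: argparse.Namespace) -> dict:
    """Write command: create an item by name."""
    return _request("POST", "/v1/items", json={"name": args.name})


def main() -> None:
    parser = argparse.ArgumentParser(prog="acme")
    sub = parser.add_subparsers(dest="command", required=True)

    p_list = sub.add_parser("list", help="list items as JSON")
    p_list.add_argument("--limit", type=int, default=20)
    p_list.set_defaults(func=cmd_list)

    p_create = sub.add_parser("create", help="create an item and print it as JSON")
    p_create.add_argument("name")
    p_create.set_defaults(func=cmd_create)

    args = parser.parse_args()
    # Stable, sorted JSON on stdout so other tools (and Codex) can pipe and parse reliably.
    json.dump(args.func(args), sys.stdout, indent=2, sort_keys=True)
    print()


if __name__ == "__main__":
    main()
```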
Use when the user explicitly asks for a desktop or system screenshot (full screen, specific app or window, or a pixel region), or when tool-specific capture capabilities are unavailable and an OS-level capture is needed.
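One possible OS-level approach, shelling out to a native capture tool (macOS's screencapture, with gnome-screenshot as one common Linux fallback; availability varies by desktop environment):

```python
import platform
import subprocess
from pathlib import Path

def capture_fullscreen(out_path: str = "screenshot.png") -> Path:
    """Full-screen capture via the platform's native screenshot command."""
    system = platform.system()
    if system == "Darwin":
        subprocess.run(["screencapture", "-x", out_path], check=True)  # -x: suppress shutter sound
    elif system == "Linux":
        # Availability varies by desktop environment; gnome-screenshot is one common option.
        subprocess.run(["gnome-screenshot", "-f", out_path], check=True)
    else:
        raise RuntimeError(f"No capture command configured for {system}")
    return Path(out_path)

if __name__ == "__main__":
    print(capture_fullscreen())
```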
Use when the user asks to inspect Sentry issues or events, summarize recent production errors, or pull basic Sentry health data via the Sentry CLI; perform read-only queries using the `sentry` command.
Spectroscopy, chromatography, titration, elemental analysis, X-ray crystallography, and mass spectrometry for chemical identification and quantitation. Covers UV-Vis and IR spectroscopy, NMR fundamentals, gas and liquid chromatography, gravimetric and volumetric analysis, diffraction methods, separation techniques, and quantitative error analysis. Use when identifying unknown substances, determining purity, measuring concentrations, or interpreting analytical data.
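For instance, UV-Vis quantitation typically rests on the Beer-Lambert law, A = epsilon * l * c; a small worked example with illustrative values:

```python
def concentration_from_absorbance(absorbance: float,
                                  molar_absorptivity: float,
                                  path_length_cm: float = 1.0) -> float:
    """Beer-Lambert law: A = epsilon * l * c, solved for concentration c in mol/L."""
    return absorbance / (molar_absorptivity * path_length_cm)

# Example: A = 0.45 measured in a 1 cm cuvette with epsilon = 15000 L/(mol*cm)
# -> c = 0.45 / 15000 = 3.0e-5 mol/L
print(concentration_from_absorbance(0.45, 15_000))
```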
Use when the user reports a bug they can't reproduce, asks where to start debugging, or mentions a Heisenbug / production-only failure. Drives the observe→hypothesize→predict→test→iterate scientific method.
Structured approaches to decisions under uncertainty and complexity. Covers expected value, decision trees, multi-criteria decision analysis, System 1 vs System 2 allocation, pre-mortems, reversible vs irreversible decisions, and the distinction between good decisions and good outcomes. Use when choosing among alternatives with uncertain or multi-dimensional consequences, especially when the stakes justify a deliberate rather than intuitive process.
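A small worked example of the expected-value piece, comparing two options with purely illustrative probabilities and payoffs:

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    outcomes: list[tuple[float, float]]  # (probability, payoff) pairs

    def expected_value(self) -> float:
        assert abs(sum(p for p, _ in self.outcomes) - 1.0) < 1e-9, "probabilities must sum to 1"
        return sum(p * payoff for p, payoff in self.outcomes)

# Illustrative numbers only: launch now (risky) vs. run a pilot first (safer, lower upside).
options = [
    Option("launch now", [(0.3, 500_000), (0.7, -100_000)]),   # EV = 80,000
    Option("pilot first", [(0.6, 200_000), (0.4, -20_000)]),   # EV = 112,000
]
for o in options:
    print(f"{o.name}: EV = {o.expected_value():,.0f}")
print("Highest expected value:", max(options, key=Option.expected_value).name)
```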
Evaluating the quality, provenance, and relevance of evidence that supports or undermines a claim. Covers source credibility, sampling quality, study design, levels of evidence (anecdote to meta-analysis), base rate integration, distinguishing primary from secondary sources, and calibrating belief to evidence strength. Use when the question is not whether an argument is valid but whether its premises are actually supported by the available data.
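A worked example of base rate integration via Bayes' theorem, with illustrative numbers:

```python
def posterior_probability(prior: float, sensitivity: float, false_positive_rate: float) -> float:
    """Bayes' theorem: P(condition | positive evidence), integrating the base rate (prior)."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Illustrative numbers: a 1% base rate with a fairly accurate test still yields a modest posterior.
# 0.009 / (0.009 + 0.0495) = 0.154
print(round(posterior_probability(prior=0.01, sensitivity=0.90, false_positive_rate=0.05), 3))
```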
Bias detection and mitigation, fairness metrics, privacy frameworks, consent models, transparency requirements, and accountability structures for data science practice. Covers algorithmic bias sources, disparate impact testing, differential privacy, GDPR principles, model cards, datasheets for datasets, responsible AI frameworks, and the organizational governance needed to make ethics actionable. Use when auditing models for bias, designing privacy-preserving systems, establishing governance processes, or evaluating the social impact of data-driven decisions.
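One concrete disparate impact check is the selection-rate ratio behind the four-fifths rule; a small sketch with illustrative outcome data:

```python
def selection_rate(outcomes: list[int]) -> float:
    """Share of positive outcomes (1 = selected/approved) within a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(protected: list[int], reference: list[int]) -> float:
    """Ratio of selection rates; values below ~0.8 flag potential adverse impact (four-fifths rule)."""
    return selection_rate(protected) / selection_rate(reference)

# Illustrative outcomes only (1 = approved, 0 = denied) for two groups.
protected_group = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # 30% approval
reference_group = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # 70% approval
ratio = disparate_impact_ratio(protected_group, reference_group)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.43 -> below 0.8, warrants investigation
```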
Regression analysis, ANOVA, generalized linear models, Bayesian methods, and model selection. Covers the full modeling workflow from problem formulation through diagnostics -- linear regression, logistic regression, Poisson regression, mixed-effects models, prior specification, posterior inference, AIC/BIC comparison, cross-validation for model selection, and assumption checking. Use when fitting models, testing hypotheses, or selecting among competing statistical explanations.
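A short sketch of AIC-based model comparison on synthetic data, using statsmodels (assumed available) for ordinary least squares:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2.0 + 1.5 * x1 + rng.normal(scale=1.0, size=n)  # x2 is pure noise by construction

# Compare a one-predictor model against one that adds the irrelevant predictor.
m1 = sm.OLS(y, sm.add_constant(np.column_stack([x1]))).fit()
m2 = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()

print(f"Model 1 (x1 only): AIC = {m1.aic:.1f}")
print(f"Model 2 (x1 + x2): AIC = {m2.aic:.1f}")
# Lower AIC is preferred; the extra parameter must earn its penalty.
```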
Systematic departures from rational choice theory and their implications for economic analysis and policy. Covers cognitive heuristics (anchoring, availability, representativeness), biases (loss aversion, status quo, overconfidence), prospect theory (reference dependence, probability weighting, diminishing sensitivity), nudge theory and choice architecture, and the integration of psychological findings into economic models. Use when analyzing decision-making under uncertainty, evaluating policy interventions that exploit behavioral patterns, or assessing where standard rational-agent models break down.
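A compact sketch of the prospect theory value and probability-weighting functions, using the commonly cited Tversky-Kahneman parameter estimates (alpha = beta = 0.88, lambda = 2.25, gamma = 0.61):

```python
def value(x: float, alpha: float = 0.88, beta: float = 0.88, lam: float = 2.25) -> float:
    """Prospect-theory value function: concave for gains, convex and steeper (loss-averse) for losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

def weight(p: float, gamma: float = 0.61) -> float:
    """Probability weighting: overweights small probabilities, underweights moderate-to-large ones."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

# Loss aversion: a 50/50 gamble of +100 / -100 has negative prospective value and is rejected.
gamble = weight(0.5) * value(100) + weight(0.5) * value(-100)
print(f"Value of the mixed gamble: {gamble:.1f}")   # negative
print(f"w(0.01) = {weight(0.01):.3f}")              # small probabilities are overweighted
```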