Optimizes manuscript titles and abstracts for information density, factual accuracy, and submission fit in biomedical research writing.
Skills (SKILL.md) are configuration files that add specific capabilities to AI agents such as Claude Code, Cursor, and Codex.
> **Source**: [https://github.com/aipoch/medical-research-skills](https://github.com/aipoch/medical-research-skills)
Clarifies a vague clinical or biomedical research idea into a structured, bounded, searchable, researchable, and testable question. Always use this skill whenever a user has an early-stage clinical or research thought, an over-broad topic, an ill-defined evidence question, or an unclear problem statement that must be translated into a question framing suitable for literature retrieval, evidence synthesis, gap analysis, study design, or downstream protocol planning. Never jump straight to answering the substantive medical question unless the user explicitly asks for that. Focus first on question framing, boundary setting, and downstream-ready formulation.
Quickly judges whether a biomedical paper is worth deep reading by screening for question fit, design quality, sample adequacy, methodological novelty, and reproducibility value.
Identifies real, evidence-audited, topic-specific research gaps in medical research by first retrieving and verifying literature from trusted sources, then mapping the current evidence landscape, rejecting pseudo-gaps, and converting only medium/high-confidence gaps into study-ready research opportunities. Always require real literature retrieval before formal gap claims. Never fabricate references, metadata, or findings.
Detects overlooked, underrepresented, weakly resolved, or poorly validated populations and subgroups within a biomedical research area so users can identify more precise and meaningful study populations. Always use this skill when the real question is not just what is under-studied, but which populations, strata, or subgroups are missing, thinly represented, superficially analyzed, pooled without resolution, or insufficiently validated in the current evidence base. Focus on meaningful subgroup gaps rather than generic calls for diversity.
Designs primary aims, secondary aims, and testable hypotheses from broad biomedical research ideas. Use this skill when a user needs to convert a loose study idea into a tighter protocol-framing structure with clear aim hierarchy, hypothesis discipline, and separation between hypothesis-driven and exploratory components. Always keep aims answerable, non-overlapping, and aligned to the intended evidence type and study scope.
Designs cell-based and animal-based validation plans that translate computational, omics, biomarker, genetic, or clinical findings into experimentally testable validation routes. Always use this skill whenever a user wants to move from an in silico, statistical, or clinical association finding toward wet-lab validation using cell systems, organoid-like systems, xenograft or genetically relevant animal models. It should define the exact claim to test, separate mechanism-testing from association-support and translational-support goals, choose the best-fit model family, specify perturbation strategy, readouts, controls, sequencing of experiments, and four workload configurations (Lite / Standard / Advanced / Publication+) with one recommended primary plan. Never fabricate model availability, reagent availability, species relevance, assay feasibility, phenotype penetrance, expected effect sizes, validation success, or literature references.
Generates complete bidirectional multi-phenotype Mendelian randomization research designs from a user-provided exposure family and outcome family. Always use this skill whenever a user wants to design, plan, or build a genome-wide causal-inference study based on publicly available GWAS summary statistics, especially when the article logic includes multiple exposures, multiple outcomes or subtypes, bidirectional MR, IV filtering, IVW as the main estimator, weighted median / MR-Egger / MR-PRESSO sensitivity analyses, leave-one-out testing, heterogeneity / pleiotropy checks, and multiple-testing control with FDR. Covers five study patterns (single-family bidirectional MR, multi-phenotype screening MR, subtype-resolved MR, phenome-style bidirectional causal map, mechanism-prioritized MR follow-up) and always outputs four workload configs (Lite / Standard / Advanced / Publication+) with recommended primary plan, step-by-step workflow, figure plan, validation strategy, minimal executable version...
Designs complete integrated research plans for bulk transcriptomics, proteomics, metabolomics, and related omics from a user-provided biomedical direction. Always use this skill whenever a user wants to design, scope, or structure a bulk multi-omics or single-omics-plus-clinical study — including disease-focused, mechanism-focused, biomarker-focused, stratification-oriented, or translational projects. It should define the research question, choose the best-fit study pattern, recommend example datasets as reference candidates only, specify the core analysis modules and method choices, propose a validation ladder, and output four workload configurations (Lite / Standard / Advanced / Publication+). Never fabricate datasets, accession numbers, sample counts, metadata completeness, cohort availability, assay coverage, literature references, PMIDs, DOIs, or validation status. Always include the mandatory Dataset Disclaimer immediately before any workflow section that mentions datasets or public resources.
Design a structured case-control study framework with explicit source population logic, control selection rules, matching decisions, exposure measurement planning, and bias-control checkpoints.
Designs retrospective or prospective clinical cohort study protocols for biomedical and clinical research. Always use this skill when the user needs a cohort-based study plan rather than a general study idea, evidence summary, or mechanistic experiment design. Focus on cohort appropriateness, enrollment logic, baseline time-zero definition, follow-up structure, endpoint definition, variable collection, confounding control, and a coherent primary statistical analysis line. Do not invent data availability, follow-up completeness, outcome ascertainment quality, sample size adequacy, or causal interpretability.
Generates complete comparative network-toxicology research designs from a user-provided exposure pair, shared toxic phenotype, and validation direction. Use when a study centers on two related exposures under one outcome and needs target collection, shared-vs-specific target decomposition, enrichment, PPI hub prioritization, docking, optional transcriptomic cross-checks, and conservative mechanistic synthesis. Covers five study patterns and always outputs Lite / Standard / Advanced / Publication+ with a recommended primary plan, stepwise workflow, figure plan, validation hierarchy, minimal executable version, publication upgrade path, and strictly verified literature retrieval.
Design evidence-discovery and validation workflows for drug repurposing studies by integrating disease mechanisms, drug-target logic, expression reversal, real-world evidence, and validation routes into a closed-loop study blueprint.
Designs primary, secondary, and exploratory endpoints for biomedical and clinical research protocols. Always use this skill when a user needs to translate study aims into operational endpoint definitions with event rules, assessment timing, composite logic, interpretability, and protocol-stage auditability. Focus on endpoint precision, feasibility, clinical meaning, ambiguity reduction, and implementation readiness rather than generic study design advice.
Generates complete FAERS-style pharmacovigilance disproportionality research designs from a user-provided drug class, comparator strategy, adverse-event domain, and patient-group stratification. Always use this skill whenever a user wants to design, plan, or build a spontaneous-report safety signal study using FAERS or a similar pharmacovigilance database, especially when the article logic includes product selection, indication-group stratification, MedDRA-based adverse-event extraction, serious-case filtering, suspect-drug and concomitant-exclusion logic, reporting odds ratio analysis, comparator-drug benchmarking, cross-drug comparison, and cautious signal interpretation without causal overclaiming. Covers five study patterns (single-drug disproportionality workflow, multi-drug class comparison workflow, indication-stratified workflow, comparator-controlled signal screening workflow...
Designs a realistic, execution-aware biomedical study version under explicit constraints of samples, time, budget, data access, lab capacity, team skill, and validation resources. Always use this skill when the user has a real study idea, a candidate route, or a partially framed project but cannot assume ideal conditions. If critical feasibility inputs are missing, first clarify what resources are currently available, what resources may be obtainable, and what resources are realistically unavailable. Do not invent access, capabilities, collaborations, or validation resources. Focus first on feasibility-constrained study framing, route narrowing, dependency control, and minimum viable study design.
Extends a mechanistic or association-level biomedical finding into a staged validation pathway that moves from descriptive evidence toward stronger functional support, mechanistic specificity, and clinical relevance. Use this skill when a user has a pathway, biomarker, cell-state, target, mechanism, or association finding and needs to decide what should be validated next, in what order, and which evidence layers are necessary versus optional. Do not default to maximal validation stacks. Build a structured validation ladder with a primary route, stronger upgrade route, and optional extensions.
Converts an audited medical research gap into a complete, structured, gap-traceable study design. Always use this skill whenever a user already has one or more candidate research gaps and wants to transform them into an executable biomedical research plan rather than re-run broad topic ideation. Covers six gap-to-design patterns (evidence-completion, mechanism-resolution, cell-state/context-mapping, translation-bridge, causality-upgrade, population/stage-specific) and always outputs one recommended primary protocol, a gap-to-design dependency map, step-by-step workflow, figure plan, validation strategy, minimal executable version, publication upgrade path, and verified design-support literature rules. Never fabricate references. Preserve claim-evidence discipline and do not replace a topic-specific gap with a generic workflow.
Designs complete research plans that integrate clinical variables with multi-omics data from a user-provided biomedical direction. Always use this skill whenever a user wants to design, scope, or structure a study that combines clinical variables with transcriptomics, proteomics, metabolomics, epigenomics, or related omics layers for mechanism interpretation, biomarker development, risk stratification, treatment-response analysis, or translational use. It should define the clinical use case, alignment across data layers, feature-reduction and fusion logic, modeling route, mechanism-interpretation layer, validation ladder, and four workload configurations (Lite / Standard / Advanced / Publication+). Never fabricate datasets, accession numbers, sample counts, metadata completeness, platform coverage, literature references, PMIDs, DOIs, or validation status. Always include the mandatory Dataset Disclaimer immediately before any workflow section that mentions datasets or public resources.
Generates complete NHANES-style cross-sectional epidemiology + retrospective clinical validation research designs from a user-provided disease and biomarker direction. Always use this skill whenever a user wants to design, plan, or build a population-level biomarker association study using NHANES or similar survey datasets, especially when the article logic includes disease definition, biomarker formula derivation, multivariable logistic regression, restricted cubic spline analysis, subgroup stability testing, and a secondary hospital-based retrospective validation cohort. Covers five study patterns (cross-sectional association, dose-response / RCS, subgroup-stability, NHANES + retrospective validation, preliminary screening-performance) and always outputs four workload configs (Lite / Standard / Advanced / Publication+) with recommended primary plan, step-by-step workflow, figure plan, validation strategy, minimal executable version, publication upgrade path...
Compares multiple study-route options for the same biomedical research question and recommends one primary plan, while explicitly explaining why alternative routes are secondary, premature, weaker, or dependency-heavy. Always use this skill when the user already has a reasonably defined question but is unsure which main study route should anchor the project. Focus on plan comparison, route selection, dependency awareness, and primary-plan justification rather than full protocol drafting.
Designs discovery, modeling, and validation workflows for prognostic biomarkers in biomedical and clinical research. Always use this skill when the user needs a prognostic biomarker study blueprint rather than a diagnostic test protocol, predictive biomarker design, treatment recommendation, or a completed manuscript. Focus on endpoint family, follow-up horizon, time scale, candidate marker strategy, model-building logic, risk stratification framework, and internal/external validation requirements. Do not invent cohort size, event rate, assay readiness, literature support, or validation access.
Plans sample size estimation logic, power assumptions, feasibility checks, and fallback enrollment strategies for clinical and translational study protocols.
Designs complete single-cell research plans from a user-provided biomedical direction. Always use this skill whenever a user wants to design, scope, or structure a single-cell study — including disease-focused, mechanism-focused, biomarker-focused, translational, perturbation-inspired, or validation-aware projects. It should define the research question, choose the best-fit study pattern, recommend sample grouping logic, suggest reference datasets as examples only, specify the core analysis modules, propose a validation ladder, and output four workload configurations (Lite / Standard / Advanced / Publication+). Never fabricate datasets, sample metadata, accession numbers, cohort availability, cell-type labels, external validation resources, or literature references. Always include the mandatory Dataset Disclaimer immediately before any workflow section that mentions datasets or public resources.
Designs studies for predicting treatment response or resistance in biomedical and clinical research. Always use this skill when the user needs a treatment-response or resistance prediction study blueprint rather than a prognostic biomarker protocol, diagnostic test design, causal treatment-effect estimation, or a completed manuscript. Focus on responder definition, treatment context, baseline comparability, feature integration strategy, model development logic, validation architecture, and interpretation boundaries. Do not invent response rates, cohort size, assay readiness, regimen uniformity, literature support, or validation access.
Creates academic-poster writing packages for LaTeX using beamerposter, tikzposter, or baposter. Use when a user needs poster-ready section copy, figure plans, captions, and package-specific layout decisions for conference or thesis posters.
Data structure for annotated matrices in single-cell analysis; use when reading/writing .h5ad (or zarr) and exchanging data with the scverse ecosystem.
Unified Python access to 40+ bioinformatics web services; use when you need to query multiple databases (e.g., UniProt/KEGG/ChEMBL/Reactome) with one consistent API in a single workflow, especially for cross-database analysis and identifier mapping.
ETE (Environment for Tree Exploration) toolkit for phylogenetic and hierarchical tree analysis; use it when you need to parse/manipulate Newick/NHX trees, detect duplication/speciation events, integrate NCBI taxonomy, and render publication-quality figures.
Statistical analysis and reporting for experimental datasets; use when you need to interpret experimental results, test significance (t-tests/ANOVA), or generate reproducible reports.
Parse Flow Cytometry Standard (FCS) files v2.0–3.1 and extract events/metadata for preprocessing workflows (e.g., when you need NumPy arrays, channel info, or CSV/DataFrame export from cytometry files).
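To make the parsing task concrete, here is a minimal stdlib sketch of the fixed 58-byte FCS HEADER segment (version string plus ASCII-encoded byte offsets of the TEXT, DATA, and ANALYSIS segments, per the FCS 3.x layout). Real parsers go on to read the TEXT segment keywords and the binary event data; the function name and field names here are illustrative, not the skill's actual API.

```python
def parse_fcs_header(raw: bytes):
    """Parse the 58-byte FCS HEADER: 6-byte version, 4 spaces, then six
    right-justified 8-character ASCII byte offsets (TEXT/DATA/ANALYSIS)."""
    version = raw[0:6].decode("ascii")
    fields = ["text_start", "text_end", "data_start", "data_end",
              "analysis_start", "analysis_end"]
    offsets = {}
    for key, pos in zip(fields, range(10, 58, 8)):
        text = raw[pos:pos + 8].decode("ascii").strip()
        offsets[key] = int(text or "0")  # empty field means "not present"
    return version, offsets

# Synthetic header bytes for illustration (offsets are made up).
hdr = (b"FCS3.0    " + b"      58" + b"     255"
       + b"     256" + b"    1023" + b"       0" + b"       0")
version, offsets = parse_fcs_header(hdr)
```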
A high-performance Rust toolkit (with Python bindings and a CLI) for genomic interval analysis; use it when you need fast overlap queries, coverage track generation, genomic tokenization for ML, reference sequence verification, or fragment processing.
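The core operation such a toolkit accelerates is the interval overlap query. A pure-Python sketch of the idea (half-open intervals sorted by start; production implementations use interval trees or sorted sweeps for speed, and this function name is hypothetical):

```python
def overlaps(intervals, q_start, q_end):
    """Return intervals [start, end) overlapping the query [q_start, q_end).
    `intervals` must be sorted by start position."""
    hits = []
    for start, end in intervals:
        if start >= q_end:
            break          # sorted by start: nothing later can overlap
        if end > q_start:  # overlap condition for half-open intervals
            hits.append((start, end))
    return hits

peaks = [(100, 200), (150, 300), (400, 500)]
print(overlaps(peaks, 180, 420))  # → [(100, 200), (150, 300), (400, 500)]
```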
Process, clean, and compare mass spectrometry (MS/MS) spectra with Matchms; use when you need reproducible spectral filtering and similarity scoring for metabolomics workflows.
A low-level plotting library for comprehensive customization. Use when fine-grained control over every plot element is needed, creating new types of charts, or integrating into specific scientific workflows. Can export to PNG/PDF/SVG for publication. For quick statistical charts, use seaborn; for interactive charts, use plotly; for journal-style, publication-ready multi-panel charts, use scientific-visualization.
Generates meta-analysis funnel plots and performs publication-bias testing. Takes a CSV file of meta-analysis data as input and outputs a funnel-plot PNG together with Egger and Begg test results.
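The Egger test behind such a tool is ordinary least squares of the standardized effect on precision; a non-zero intercept suggests funnel asymmetry. A minimal stdlib sketch of that regression (illustrative only — a complete test also needs the intercept's standard error and a t-test against zero):

```python
def egger_intercept(effects, ses):
    """Egger regression: z_i = effect_i / se_i regressed on 1 / se_i.
    Returns (intercept, slope); intercept far from 0 suggests asymmetry."""
    z = [e / s for e, s in zip(effects, ses)]   # standardized effects
    p = [1.0 / s for s in ses]                  # precisions
    n = len(z)
    mean_p, mean_z = sum(p) / n, sum(z) / n
    sxx = sum((x - mean_p) ** 2 for x in p)
    sxy = sum((x - mean_p) * (y - mean_z) for x, y in zip(p, z))
    slope = sxy / sxx
    intercept = mean_z - slope * mean_p
    return intercept, slope

# Hypothetical study-level effects (e.g. log odds ratios) and standard errors.
b0, b1 = egger_intercept([0.30, 0.25, 0.42, 0.18, 0.35],
                         [0.10, 0.15, 0.20, 0.08, 0.12])
```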
Generates meta-analysis research titles based on user keywords, using PubMed search results when available and falling back to creative generation otherwise. Use when the user wants to brainstorm or generate titles for a meta-analysis, specifically starting from keywords or a topic.
Predicts neoantigens that may be recognized by the immune system.
Differential gene expression analysis for bulk RNA-seq count matrices using a DESeq2-like workflow in Python; use when you need Wald tests, FDR correction, and optional LFC shrinkage for condition/batch/covariate designs.
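The FDR-correction step in such a workflow is usually Benjamini-Hochberg. A self-contained sketch of the adjusted p-value (q-value) computation, independent of any particular DE library:

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values, returned in the input order."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    running_min = 1.0
    # Walk from the largest p-value down, taking the cumulative minimum
    # of p * n / rank so adjusted values stay monotone.
    for rank in range(n - 1, -1, -1):
        i = order[rank]
        running_min = min(running_min, pvals[i] * n / (rank + 1))
        adjusted[i] = running_min
    return adjusted
```

In a DE context the input would be the per-gene Wald-test p-values; genes with an adjusted value below the chosen FDR threshold (commonly 0.05) are called significant.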
Comprehensive tool for computational mass spectrometry using PyOpenMS; use when you need to read/write MS formats (mzML/mzXML/MGF), run signal processing (smoothing/peak picking), detect isotope features, or perform peptide identification in proteomics/metabolomics workflows.
Genomic file toolkit. For reading/writing SAM/BAM/CRAM alignment files, VCF/BCF variant files, FASTA/FASTQ sequences, extracting regions, calculating coverage, suitable for NGS data processing pipelines.
Automated bias assessment for diagnostic accuracy studies using QUADAS-C criteria. Requires full text input.
Automates Risk of Bias 2 (ROB2) assessment for RCT papers by analyzing text against specific domains and synthesizing a report. Use when you need to assess the quality of a clinical trial paper or evaluate risk of bias.
Guided statistical analysis for test selection, assumption checks, power analysis, and APA-style reporting. Use when you need to choose an appropriate statistical test for your data and produce publication-ready results (including effect sizes and diagnostics).
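As an example of the kind of test such guidance might select, here is a stdlib sketch of Welch's t statistic with Welch-Satterthwaite degrees of freedom (the default two-sample choice when equal variances cannot be assumed; computing the p-value additionally requires the t distribution's CDF):

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two independent
    samples with possibly unequal variances."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    # Welch-Satterthwaite approximation for the degrees of freedom.
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df
```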
Determines the appropriate Risk of Bias assessment scale for a medical study based on its design (RCT, Cohort, etc.), using PubMed metadata lookup or text analysis. Use when the user wants to know which quality assessment tool to use for a specific paper (given PMID or abstract).
Kaplan-Meier survival analysis tool for clinical and biological research. Generates publication-ready survival curves with statistical tests.
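The estimator underneath such a tool can be sketched in a few lines of stdlib Python: at each distinct event time, the survival probability is multiplied by (at-risk minus events) / at-risk, with censored subjects simply leaving the risk set.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times: observed time per subject; events: 1 = event, 0 = censored.
    Returns [(t, S(t))] at each distinct event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)   # events at time t
        c = sum(1 for tt, _ in data if tt == t)   # all subjects leaving at t
        if d > 0:
            surv *= (n_at_risk - d) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= c
        i += c
    return curve

# Four subjects: events at t=1, 2, 3; one censored at t=2.
print(kaplan_meier([1, 2, 2, 3], [1, 1, 0, 1]))
```

Plotting the step function through these points (plus log-rank tests for group comparisons) is what turns this into the publication-ready output the skill describes.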
Intelligent medical abbreviation disambiguation tool that resolves ambiguous acronyms using clinical context, specialty-specific knowledge, and document-level semantic analysis.
Search, retrieve metadata, and download PDFs for bioRxiv preprints; use when you need to discover biology preprints by keywords/authors/date ranges and programmatically fetch their details.
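For orientation, the public bioRxiv API exposes a details endpoint keyed by DOI; a minimal sketch of building and fetching it with the stdlib (the exact response fields, and how the skill wraps pagination and search, are not shown here):

```python
import json
from urllib.request import urlopen

BASE = "https://api.biorxiv.org/details/biorxiv"  # public details endpoint

def details_url(doi: str) -> str:
    """URL for JSON metadata on a single bioRxiv preprint, by DOI."""
    return f"{BASE}/{doi}"

def fetch_details(doi: str) -> dict:
    """Fetch and decode the metadata (requires network access)."""
    with urlopen(details_url(doi)) as resp:
        return json.load(resp)

# Example DOI is hypothetical; no network call is made here.
print(details_url("10.1101/2020.01.01.123456"))
```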
Programmatic access to the BRENDA enzyme database via the SOAP API; use when you need kinetic constants (Km, kcat, Vmax), reaction equations, enzyme properties (pH/temperature optima, stability), or enzyme discovery by EC/substrate/product.
Access ChEA3 and Harmonizome ChEA data for transcription factor enrichment analysis and metadata retrieval. Use when the user needs to perform ChEA3 enrichment analysis on a gene set, get metadata about the ChEA dataset, or retrieve information about a specific transcription factor (attribute).