Triage review findings interactively — approve, skip, or prioritize each issue. Use after /phx:review to filter findings before fixing.
Skills (SKILL.md) are configuration files that add specific capabilities to AI agents such as Claude Code, Cursor, and Codex.
Turns reviewer comments into structured, professional point-by-point responses linked to manuscript revisions, clarifications, rebuttals, and additional analyses.
Scrum-inspired paper review, revision, and R&R workflow. Handles docx/tex/md/PDF in English or Chinese. Auto-detects manuscript stage, estimates sprint count, runs multi-lens review (Contribution/Rigor/Writing/Editor), generates prioritized revision backlog, exports MD/DOCX/PDF/HTML reports. Use when asked to review a paper, revise based on reviewer comments, handle R&R, respond to peer review, plan paper revision sprints, or when user types /ps or /papersprint.
Structures research progress into focused and actionable slides for lab meetings or project reviews without inventing missing content.
Collects candidate biomedical literature across multiple databases, adapts search logic by database, preserves source metadata, and organizes results into a structured, screening-ready candidate pool. Always use this skill when a user wants cross-database literature collection, search strategy construction, candidate paper aggregation, or first-pass evidence organization before deduplication, screening, layered reading, or review planning. Requires real and verifiable literature records only. Every formal literature item must include a real link and DOI when available; never fabricate citations, titles, authors, years, journals, abstracts, PMIDs, or DOIs. If a DOI is unavailable or cannot be verified, state that explicitly rather than inventing one.
Verifies whether a scientific or biomedical claim is actually supported by the cited original papers rather than by citation drift, overstatement, selective citation, or correlation-to-causation inflation. Use this skill whenever a user wants to check whether a repeated statement, slide claim, manuscript sentence, review assertion, or “people often say” scientific conclusion is truly supported by the underlying primary literature. Always separate the claim itself, the cited paper(s), what the paper actually showed, what it did not show, and whether later retellings drifted beyond the original evidence. Never fabricate references, findings, study features, or citation chains.
Identifies the real underlying study design used in a medical or biomedical paper, distinguishes primary and secondary design components when papers are hybrid, and converts the paper into an evidence-aware design label suitable for literature appraisal, evidence grading, and downstream review workflows. Always identify the actual design from what the study did, not from how the authors describe it. Never fabricate references, metadata, or study features.
Extracts concrete unmet clinical needs from guidelines, reviews, real-world studies, and clinical-practice evidence. Use this skill when a user wants to turn broad medical research value into specific clinical pain points such as weak early detection, poor risk stratification, treatment-response heterogeneity, monitoring gaps, diagnostic delay, undertreatment, overtreatment, or implementation failure. Always ground unmet-need claims in retrieved evidence and distinguish true care gaps from generic statements of importance.
Generates complete FAERS pharmacovigilance study designs for multi-drug or class-level safety comparison inside one predefined SOC or AE family using active comparators, disproportionality analysis, subgroup characterization, and reviewer-facing evidence control.
Plans confounder control, variable adjustment logic, and bias mitigation strategies at the protocol stage for clinical, epidemiologic, translational, observational, and biomarker studies. Always use this skill when a user needs to identify major confounders, decide which variables should or should not be adjusted for, compare matching/stratification/weighting approaches, anticipate selection or measurement bias, or pressure-test a study design before execution. Focus on bias sensing, causal structure awareness, variable-role classification, and critical design review rather than generic statistical advice.
Builds clear, executable, and auditable inclusion and exclusion criteria for biomedical and clinical research protocols. Always use this skill when a user needs to translate a target population into operational screening rules tied to chart fields, time windows, tests, procedures, prior therapies, exclusions, and reviewable edge cases. Focus on protocol-stage precision, ambiguity reduction, auditability, and screening reproducibility rather than generic study design advice.
Generates complete reference-grounded single-drug adverse-effect network-pharmacology research designs from a user-provided drug, adverse event, and desired evidence depth. Always use this skill when a user wants to design, plan, or upgrade a conventional network-pharmacology study centered on one fixed drug and one fixed adverse-effect endpoint, using drug-target prediction, adverse-event target collection, overlap analysis, PPI hub prioritization, enrichment interpretation, molecular docking, and optional orthogonal transcriptomic or literature validation. Covers five study patterns (canonical hub-first, cardiotoxicity or electrophysiology-oriented, immune-inflammatory adverse effect, organ-toxicity pathway context, translational validation) and always outputs four workload configs (Lite / Standard / Advanced / Publication+) with a recommended primary plan, dependency/evidence map, step-by-step workflow, figure plan, validation strategy, minimal executable version, publication upgrade path, verified-reference pack, and self-critical risk review.
Generates complete tumor immune-infiltration-guided bulk-transcriptome diagnostic biomarker and machine-learning research designs from a user-provided cancer type and study direction. Always use this skill whenever a user wants to design, plan, or build a tumor bioinformatics study centered on differential expression, immune infiltration estimation, immune-linked module discovery, consensus feature selection, diagnostic modeling, nomogram construction, clinical association, and optional prognostic extension or validation. Covers five study patterns (immune-cell-first diagnostic workflow, immune-module-to-biomarker workflow, consensus-ML biomarker workflow, diagnostic-plus-prognostic hybrid workflow, translational validation workflow) and always outputs four workload configs (Lite / Standard / Advanced / Publication+) with recommended primary plan, step-by-step workflow, figure plan, validation strategy, minimal executable version, publication upgrade path, reference literature pack, and self-critical risk review.
Generates submission-ready Elsevier/SCI Highlights from manuscript text or extracted PDF/DOCX/TXT content. Use when a user needs 3-5 concise, evidence-grounded highlight bullets for a research paper, review, meta-analysis, case report, or bioinformatics manuscript.
Generates structured biomedical outlines for review articles, discussion sections, and thesis proposals. Use when a user provides biomedical keywords, results/discussion text, or a proposal title plus background and needs a directly usable academic writing scaffold.
Generates a first draft of a clinical meta-analysis paper. Input the research report (including Methods and Results sections), language, and title to automatically generate a complete paper draft including Abstract, Introduction, Discussion, and other sections, with automatic PubMed retrieval of relevant references. Suitable for assisting in the writing of systematic reviews and meta-analyses.
Conduct professional peer reviews for papers or theses, providing structured evaluations and improvement suggestions; use when you need a pre-submission assessment, an internal review, or academic quality control.
Helps organize reviewer comments and generate a standardized Word (.docx) response letter that maps each change to its exact location (page/paragraph/line). Use when revising a manuscript, replying to peer-review feedback, or preparing internal review responses.
Style and annotate forest plots with `forest-plot-styler` using a reproducible workflow, explicit validation, and structured outputs for review-ready interpretation.
Screens research papers based on title/abstract and inclusion criteria, providing a structured Yes/No/Maybe decision. Use when you need to filter literature for meta-analysis or systematic reviews.
Clinical research outcome extraction for meta-analysis. Use when users need to extract outcome measures (binary, continuous, or survival data) from clinical research papers for systematic review and meta-analysis. Handles both database lookup by PMID and real-time LLM extraction.
Automates critical appraisal and quality assessment for research papers by analyzing text against established methodological standards (such as risk of bias tools, quality checklists, or reporting guidelines) and synthesizing a structured evaluation report. Use when you need to assess the methodological quality, internal validity, or reporting completeness of any type of study—including RCTs, observational studies, systematic reviews, qualitative research, or diagnostic accuracy studies.
Label and refine volcano plots with `volcano-plot-labeler` using a reproducible workflow, explicit validation, and structured outputs for review-ready interpretation.
Build and visualize a citation network from a source/target CSV to identify key papers, communities, and emerging hotspots; use when you have citation pairs and need fast literature review or trend analysis.
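As an illustration of the kind of analysis this skill automates, here is a minimal sketch that ranks the most-cited papers from a source/target edge list. The `source`/`target` column names are an assumption about the CSV layout, not the skill's documented schema.

```python
import csv
from collections import Counter
from io import StringIO

def most_cited(csv_text: str, top: int = 3) -> list[tuple[str, int]]:
    """Count in-degree (times cited) for each paper in a
    source,target citation edge list and return the top entries."""
    reader = csv.DictReader(StringIO(csv_text))
    counts = Counter(row["target"] for row in reader)
    return counts.most_common(top)

edges = "source,target\nA,B\nA,C\nC,B\nD,B\n"
print(most_cited(edges))  # B is cited three times
```

A full implementation would additionally detect communities and trends; this sketch only shows the core in-degree ranking step.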
Retrieve the latest journal issue's table of contents and abstracts from URL/DOI/PMID/RSS/TOC sources, then generate Chinese key points locally (no external translation APIs) when a new issue needs quick review and archiving.
Filter literature by publication year, journal, and predefined screening rules to produce inclusion/exclusion lists; use when conducting preliminary screening or systematic review screening to narrow the literature scope.
Multi-database literature search and search-strategy design that outputs structured, reproducible result lists; use when you need reference retrieval, systematic searching, review topic selection, or to construct a traceable search strategy.
Direct REST API access to UniProt for protein search, entry retrieval, and identifier mapping; use when you need programmatic UniProtKB queries or cross-database ID conversion.
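A minimal sketch of the kind of request this skill issues: building a UniProtKB search URL against the public REST endpoint at `rest.uniprot.org`. The query syntax (`gene:`, `organism_id:`) and return fields shown are real UniProt parameters, but the exact fields a given workflow needs will vary.

```python
from urllib.parse import urlencode

def uniprot_search_url(query: str, fields: list[str], size: int = 25) -> str:
    """Build a UniProtKB search URL for the public REST API."""
    base = "https://rest.uniprot.org/uniprotkb/search"
    params = {
        "query": query,              # UniProt query syntax, e.g. gene:BRCA1
        "fields": ",".join(fields),  # columns to return
        "format": "json",
        "size": size,
    }
    return f"{base}?{urlencode(params)}"

url = uniprot_search_url("gene:BRCA1 AND organism_id:9606",
                         ["accession", "protein_name"])
```

The resulting URL can be fetched with any HTTP client; identifier mapping uses the separate `/idmapping` endpoints of the same API.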
Create, edit, and extract content from PowerPoint (.pptx) files; use when you need to generate slides programmatically, update existing decks, or export slide previews.
Detects content similarity, verifies standardized citations and abbreviations, and flags potential academic integrity risks; use it before submission, during academic writing QA, or for compliance reviews.
Principles and checklists for designing and reviewing REST and GraphQL APIs; use when defining or evaluating API contracts (endpoints/schemas), naming, error models, pagination, versioning, and REST vs. GraphQL trade-offs.
Classifies and organizes literature by theme, method, and conclusion; use when you need to batch-read a folder of PDF/MD/DOCX/TXT files and output a structured CSV for literature reviews and annotation management.
Check for co-authorship and institutional conflicts between authors and suggested reviewers to support peer-review integrity.
Organize, back up, compress, split, and merge files/folders using rule-driven plans; use when you need safe previews, conflict handling, and verification before executing file operations.
> **Source**: [https://github.com/aipoch/medical-research-skills](https://github.com/aipoch/medical-research-skills)
Learning tutoring planning and content production skill for creating study plans, generating exercises, writing answer explanations, and providing review/adjustment guidance; triggered by requests like “study plan”, “exercise set/question bank”, “answer analysis”, “error analysis”, “exam prep plan”, or “spaced/periodic review schedule”.
Medical literature search strategy generator. Given a user's natural-language description (e.g., meta-analysis topic, PICOS elements, research question), automatically extract medical entities (disease, intervention, population, outcomes) and generate professional search queries for seven major databases (PubMed, Cochrane, Embase, Web of Science, CNKI, Wanfang, VIP). Useful for developing search strategies for systematic reviews and meta-analyses.
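The core combination step this skill performs can be sketched as follows: synonyms within each PICOS concept are OR'd, and the concepts are AND'd together. The `[Title/Abstract]` field tag is standard PubMed syntax; the example terms are illustrative, and a production strategy would also use MeSH terms and database-specific syntax.

```python
def pubmed_query(picos: dict[str, list[str]]) -> str:
    """Combine PICOS term lists into a boolean PubMed query:
    synonyms within a concept are OR'd, concepts are AND'd."""
    groups = []
    for terms in picos.values():
        if terms:
            clause = " OR ".join(f'"{t}"[Title/Abstract]' for t in terms)
            groups.append(f"({clause})")
    return " AND ".join(groups)

q = pubmed_query({
    "population": ["type 2 diabetes", "T2DM"],
    "intervention": ["metformin"],
    "outcome": ["HbA1c"],
})
```

Each of the seven target databases would get its own rendering of the same concept groups, since field tags and truncation syntax differ between platforms.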
Organize requirements into Markdown/JSON mind map structures; use when you need a hierarchical outline for mind map tools (e.g., XMind, ProcessOn, FreeMind) or for documentation planning.
Extract PDF selectable text and full-page or segmented page images (including tables) into Markdown with per-page headings and image links; use when you need both readable text and page visuals for PPT creation, review, or analysis.
Perform basic local PDF operations (merge, split, extract pages/text/tables, create) when users request offline PDF processing without external services.
Automatically generates a Markdown final-exam review plan or lab experiment schedule when you provide a date range, tasks/items, and available daily hours (via interactive prompts or a one-time JSON input).
Create and export PPTX decks using the local HTML/JS PPT framework in `D:\SKILL\project\ppt`. Use this when you need to generate slides from a topic/outline, edit slide content via `projects/*.js`, preview as HTML, or export a `.pptx` without relying on an existing template.
Check whether a paper’s Methods section contains all information needed for replication; use when preparing a manuscript for submission or reviewing methodological completeness.
Simulates a strict SCI peer-review workflow; trigger when a user uploads or pastes a manuscript (PDF/DOC/DOCX/TXT) and requests an innovation score (1–12) plus experimental-logic vulnerability checks and revision suggestions.
Generates a PROSPERO-compliant Meta-analysis protocol based on Title and PICOS. Use when the user wants to write a protocol for a systematic review or meta-analysis.
Write competitive research proposals for NSF, NIH, DOE, DARPA, and Taiwan's NSTC when you need agency-compliant narratives, budgets, and review-criteria alignment for a specific solicitation/FOA/BAA.
Fetches comments and reviews from the current GitHub Pull Request and formats them as Markdown.
This skill should be used only when the user explicitly asks to use `$ralph-specum-design`, or explicitly asks Ralph Specum in Codex to run the design phase.
This skill should be used only when the user explicitly asks to use `$ralph-specum-refactor`, or explicitly asks Ralph Specum in Codex to revise spec artifacts after implementation learnings.