Chunking, embeddings, and RAG pipeline integration
Skills (SKILL.md) are configuration files that add specific capabilities to AI agents such as Claude Code, Cursor, and Codex.
Document extraction pipeline architecture and patterns
Format-specific document extraction workflows
Plugin architecture, registration, and trait patterns
Lookup paper: $ARGUMENTS
Autonomously improve the paper at: **$ARGUMENTS**
Verify every `\cite{...}` in a paper against three independent layers:
Write detailed embodiments for: **$ARGUMENTS**
Search query: $ARGUMENTS
Compile the patent application into filing-ready format based on: **$ARGUMENTS**
Bridge a local paper directory with an Overleaf project so that:
Draft a LaTeX paper based on: **$ARGUMENTS**
Draft a complete patent application based on: **$ARGUMENTS**
Get a multi-round patent examiner review of the patent application based on: **$ARGUMENTS**
Search patents and literature for prior art relevant to: **$ARGUMENTS**
Systematically verify a mathematical proof via cross-model adversarial review, fix identified gaps, re-review until convergence, and generate a detailed audit report with proof-obligation accounting.
A kubectl/docker-style CLI for managing GPU compute jobs on the Qizhi (启智) platform.
Prepare and maintain a grounded, venue-compliant rebuttal for: **$ARGUMENTS**
Get a multi-round critical review of research work from an external LLM with maximum reasoning depth.
Use after experiments finish, before writing the paper or running ablations, to judge which claims the results support, which they don't, and what evidence is still missing. Codex MCP evaluates the results against the intended claims and routes to the next action (pivot, supplement, or confirm).
Deploy and run ML experiment: $ARGUMENTS
Task: $ARGUMENTS
> Override for Codex users who want **Claude Code**, not a second Codex agent, to act as the reviewer. Install this package **after** `skills/skills-codex/*`.
> Override for Codex users who want **Gemini**, not a second Codex agent, to act as the reviewer. Install this package **after** `skills/skills-codex/*`.
> Override for Codex users who want **Gemini**, not a second Codex/Codex-MCP reviewer, to act as the reviewer. Install this package **after** `skills/skills-codex/*`.
> Override for Codex users who want **Gemini**, not a Codex-MCP reviewer, to act as the reviewer. Install this package **after** `skills/skills-codex/*`.
Autonomously iterate: review → implement fixes → re-review, until the external reviewer gives a positive assessment or MAX_ROUNDS is reached.
Draft a grant proposal based on: **$ARGUMENTS**
Generate publishable research ideas for: $ARGUMENTS
Orchestrate a complete idea discovery workflow for: **$ARGUMENTS**
Generate a structured, section-by-section paper outline from: **$ARGUMENTS**
Orchestrate a complete paper writing workflow for: **$ARGUMENTS**
Research topic: $ARGUMENTS
End-to-end autonomous research workflow for: **$ARGUMENTS**
Refine and concretize: **$ARGUMENTS**
Write the patent specification based on: **$ARGUMENTS**
Commit staged/unstaged changes and push to the remote branch in one step
State-space model with O(n) complexity versus the Transformer's O(n²). 5× faster inference, million-token sequences, no KV cache. Selective SSM with a hardware-aware design. Mamba-1 (d_state=16) and Mamba-2 (d_state=128, multi-head). Models from 130M to 2.8B parameters on HuggingFace.
RNN+Transformer hybrid with O(n) inference. Linear time, infinite context, no KV cache. Trains like GPT (parallel), infers like an RNN (sequential). A Linux Foundation AI project, in production in Windows, Office, and NeMo. RWKV-7 released March 2025. Models up to 14B parameters.