---
name: executing-plans
description: |
  Use when executing an implementation plan step by step. Triggers: "execute plan",
  "implement this plan", "start executing", "run the plan". Requires: a
  docs/superomni/plans/plan-*.md or similar plan document to exist.
allowed-tools: [Bash, Read, Write, Edit, Grep, Glob]
---
{{PREAMBLE}}
# Executing Plans
Goal: Execute a written implementation plan precisely, with verification at each stage — running independent steps in parallel to minimize elapsed time.
## Iron Laws

1. **Dependencies First, Then Parallelize.** Never execute a step before its dependencies are complete. But DO run all independent steps in parallel within a wave — never serialize work that can be parallelized.
2. **Evaluate Before Advancing.** Every wave must pass an evaluation gate before the next wave begins. A wave is not "done" until its outputs are verified — not just executed.
3. **Failures Are Harness Signals.** When a step fails on 3 consecutive attempts using different approaches, stop executing and treat the failure as a harness signal: update the plan, skill, or constraint — then retry. Never brute-force past 3 failed approaches.
## Phase 1: Load the Plan
```bash
# Find the plan document
ls docs/superomni/plans/plan-*.md 2>/dev/null | sort | tail -1
```
Read the plan. Confirm:
- Plan document exists and is readable
- Prerequisites are met
- You understand what "done" looks like for each step
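The discovery step above can be made fail-fast. This is a minimal sketch: the directory layout is taken from this skill's conventions, while the sample plan filename and the temp-copy setup are made up so the snippet runs anywhere:

```shell
# Self-contained sketch: build a throwaway copy of the docs/superomni/plans
# layout; the plan filename below is hypothetical, for illustration only.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/docs/superomni/plans"
touch "$ROOT/docs/superomni/plans/plan-main-auth-20250101.md"

# Pick the newest plan (lexicographic sort works because of the date suffix)
PLAN=$(ls "$ROOT"/docs/superomni/plans/plan-*.md 2>/dev/null | sort | tail -1)
if [ -z "$PLAN" ] || [ ! -r "$PLAN" ]; then
  echo "No readable plan found; stop and ask for one" >&2
else
  echo "Executing plan: $(basename "$PLAN")"
fi
```

If no plan matches the glob, `$PLAN` is empty and the guard fires instead of silently executing nothing.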
## Phase 2: Dependency Analysis — Build the Execution Wave Plan
Before executing any step, analyze ALL steps for dependencies:
```
DEPENDENCY ANALYSIS
─────────────────────────────────
Step 1: [name] — depends on: none
Step 2: [name] — depends on: none
Step 3: [name] — depends on: Step 1
Step 4: [name] — depends on: none
Step 5: [name] — depends on: Step 2, Step 3
Step 6: [name] — depends on: none
...

WAVE EXECUTION PLAN
Wave 1 (parallel): Steps 1, 2, 4, 6  ← 4 agents dispatched simultaneously
Wave 2 (parallel): Step 3            ← unblocked after Wave 1 completes
Wave 3 (parallel): Step 5            ← unblocked after Wave 2 completes (Step 5 needs Step 3's output)
Est. time: [N waves] instead of [N sequential steps]
```
Rules:
- Steps may share a wave only if none of them requires another's outputs and all of their dependencies completed in earlier waves.
- Aim for 5–10 steps per wave when sufficient independent steps exist — never artificially group dependent steps to meet this target.
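The wave assignment above can be derived mechanically. This is a sketch of Kahn-style layering in bash (4+, for associative arrays); the step names and the `deps` table are illustrative stand-ins, not a real plan format:

```shell
# Sketch: derive execution waves from a dependency map.
# Each entry maps a step to a space-separated list of its dependencies.
declare -A deps=(
  [step1]="" [step2]="" [step3]="step1"
  [step4]="" [step5]="step2 step3" [step6]=""
)
declare -A finished=()
wave=1
while [ "${#finished[@]}" -lt "${#deps[@]}" ]; do
  ready=()
  for s in "${!deps[@]}"; do
    [ -n "${finished[$s]:-}" ] && continue     # already scheduled
    ok=1
    for d in ${deps[$s]}; do                   # every dependency finished?
      [ -z "${finished[$d]:-}" ] && ok=0
    done
    [ "$ok" -eq 1 ] && ready+=("$s")
  done
  if [ "${#ready[@]}" -eq 0 ]; then
    echo "Dependency cycle detected" >&2
    break
  fi
  echo "Wave $wave: $(printf '%s\n' "${ready[@]}" | sort | tr '\n' ' ')"
  for s in "${ready[@]}"; do finished[$s]=1; done
  wave=$((wave + 1))
done
```

With this table it prints three waves: steps 1, 2, 4, 6 first, then step 3, then step 5. An empty `ready` set with work remaining signals a cycle, which means the plan itself needs fixing.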
## Phase 3: Execute Wave by Wave
For each wave, dispatch all steps in the wave simultaneously, then wait for all to complete before starting the next wave.
### Step Execution Protocol
For each wave:

```
EXECUTING WAVE [N] — [M] STEPS IN PARALLEL
─────────────────────────────────
Steps: [list of step names/numbers]
```

For each individual step:

```
Step [N] — [Step Name]
─────────────────────────────────
What: [Description from plan]
Files: [Files to touch]
Involves code changes? [YES / NO]
```
- **Read** — understand what the step requires
- **TDD Check** — if this step involves writing or modifying source code: dispatch the `test-writer` agent (RED phase) with the step description, files to be modified, and expected behavior. The agent writes the failing test suite and returns a TEST REPORT block. Confirm the tests fail before writing any implementation. Then implement the minimum code to make tests pass (GREEN), and refactor as needed.
- **Frontend Check** — if this step involves UI files (`.html`, `.jsx`, `.tsx`, `.vue`, `.svelte`, `.css`, `.scss`): apply the `frontend-design` skill Phase 4 (Implementation) with the plan's design direction. After completing all UI steps in a wave, run the designer agent quality gate (Phase 5).
- **Do** — make the minimum change needed for this step only
- **Verify** — run the step's verification criterion
- **Report** — confirm step complete or blocked
### TDD Integration for Code Steps
Every step that creates or modifies source code must follow this flow:
```
Step involves code? ─── NO ──→ Execute directly
        │
       YES
        ↓
Dispatch test-writer agent (RED) → failing tests written → confirm they fail
        ↓
Write minimum implementation (GREEN) → confirm tests pass
        ↓
Refactor if needed → confirm tests still pass
        ↓
Continue to step verification
```
If no test framework exists for this project: document what the tests would look like and why they cannot be automated. This is a DONE_WITH_CONCERNS, not a skip.
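The RED gate above can be sketched as a small guard. `confirm_red` is a hypothetical helper, and `false`/`true` below are stand-ins; in practice you would pass the project's real test runner (for example `npm test`):

```shell
# Sketch: refuse to start implementing until the new tests actually fail.
# The command passed in is a stand-in for the project's real test runner.
confirm_red() {
  if "$@" >/dev/null 2>&1; then
    echo "RED not confirmed: tests already pass; fix the test suite first"
    return 1
  fi
  echo "RED confirmed: tests fail as expected; proceed to GREEN"
}

confirm_red false   # `false` stands in for a currently-failing suite
```

The non-zero return on an unexpected pass lets the caller abort before any implementation is written.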
### Frontend-Design Integration for UI Steps
```
Step involves UI files? ─── NO ──→ Skip
        │
       YES
        ↓
Load design direction from plan (## Design Direction section)
        ↓
Apply frontend-design Phase 4 (Implementation) rules
        ↓
After all UI steps in wave complete:
  Run designer agent quality gate (7+/10 on all dimensions)
        ↓
Gate PASS → continue | Gate FAIL → fix and re-run (2 retries)
```
If no design direction exists in the plan: run frontend-design Phase 1-2 (Context Gathering + Design Direction) before implementing. This is a one-time cost per session.
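The retry loop for the gate can be sketched as below. `run_gate` is a hypothetical stand-in that fails twice and then passes, purely to exercise the loop; the real version would dispatch the designer agent:

```shell
# Sketch: designer gate with up to 2 retries after the initial run.
tries=0
run_gate() {
  tries=$((tries + 1))
  [ "$tries" -ge 3 ]          # stand-in: PASS on the third attempt
}

result=FAIL
for attempt in 1 2 3; do      # initial run + 2 retries
  if run_gate; then
    result=PASS
    break
  fi
  echo "Gate FAIL on attempt $attempt: fix and re-run"
done
echo "Final gate result: $result"
```

If `result` is still FAIL after the loop, stop and surface to the user rather than retrying further.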
### Step Completion Format

```
✓ Step N COMPLETE
Changed: [files modified]
Evidence: [test output or verification proof]
```
### Step Blocked Format

```
✗ Step N BLOCKED
Blocker: [what prevents completion]
Tried: [what was attempted]
Options:
A) [approach 1]
B) [approach 2]
C) Skip this step (explain consequences)
D) Other — describe your own approach: ___________
```
## Phase 4: Wave Evaluation Gate
Before advancing to the next wave, run the evaluation gate:
```
WAVE [N] EVALUATION GATE
─────────────────────────────────
Steps completed: [list]
Tests passing: [run: npm test or equivalent]
Regressions: [any pre-existing tests broken?]
Output contract: [do outputs match what dependent steps expect?]
Gate result: PASS → proceed to Wave N+1 | FAIL → address before advancing
```
If the gate FAILS:
- Identify which step produced the failing output
- Determine if this is a harness signal (update plan/skill) or an implementation error (fix and re-run)
- Do NOT advance to the next wave until the gate passes
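A minimal automated version of the gate's test check can be sketched as a wrapper around the project's test runner. `wave_gate` is a hypothetical helper; `true` and `false` below are stand-ins for passing and failing suites (in practice, pass the real command, e.g. `npm test`):

```shell
# Sketch: gate helper wrapping the suite command; returns non-zero on FAIL so
# callers can refuse to advance.
wave_gate() {
  if "$@" >/dev/null 2>&1; then
    echo "WAVE GATE: PASS - proceed to next wave"
  else
    echo "WAVE GATE: FAIL - address before advancing"
    return 1
  fi
}

wave_gate true             # stand-in for a passing suite
wave_gate false || true    # stand-in for a failing suite
```

This only covers the "tests passing" row of the gate; regressions and output contracts still need the checks described above.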
Spawn the evaluator agent when any of these conditions apply:
- The wave contains ≥ 5 steps
- Any step in the wave reported DONE_WITH_CONCERNS
- This is the final wave of the plan
Dispatch with: the wave's acceptance criteria, all step completion blocks, and test output. The agent returns an EVALUATION REPORT with one of four verdicts: APPROVED / APPROVED_WITH_NOTES / CHANGES_REQUIRED / EVALUATION_INCOMPLETE. Do NOT advance to the next wave if the verdict is CHANGES_REQUIRED — return to the failing step(s) with the evaluator's specific findings.
## Phase 5: Mid-Plan Check-ins
After every wave completes, or when scope is expanding:
- Report progress: "Completed N/M steps (Wave X of Y done)"
- Flag if actual work diverges from plan
- Surface any blast radius discovered mid-execution
- Ask before proceeding if scope has changed
## Phase 6: Handling Plan Deviations
If you discover the plan is wrong or incomplete:
- Stop — do not improvise silently
- Assess — is this a small mechanical fix or a fundamental issue?
  - Small fix (mechanical, <5 min): note it, fix it, continue
  - Large issue (taste or architectural): surface to user, wait for input
```
PLAN DEVIATION DETECTED
Step N: [Original plan says X, but actually Y]
Impact: [Low/Medium/High]
Recommendation: [Proposed resolution]
Awaiting: [Your decision before continuing]
```
## Phase 7: Completion
When all steps are done:
```
PLAN EXECUTION COMPLETE
════════════════════════════════════════
Steps completed: N/N
Waves executed: W
Deviations noted: N
Files changed: [list]
Tests passing: [output]
Status: DONE | DONE_WITH_CONCERNS
Concerns (if any):
- [concern 1]
════════════════════════════════════════
```
### Save Execution Results Document
After completing execution, save the results as a Markdown document:
```bash
_EXEC_DATE=$(date +%Y%m%d)
# Note: `|| echo` after a pipeline only guards the last command, so apply the
# "unknown" fallback explicitly (covers non-git dirs and detached HEAD).
_EXEC_BRANCH=$(git branch --show-current 2>/dev/null | tr '/' '-')
_EXEC_BRANCH=${_EXEC_BRANCH:-unknown}
_PLAN_FILE=$(ls docs/superomni/plans/plan-*.md 2>/dev/null | sort | tail -1)
if [ -n "$_PLAN_FILE" ]; then
  _PLAN_BASE=$(basename "$_PLAN_FILE" .md)
  _EXEC_SESSION=$(echo "$_PLAN_BASE" | sed -E "s/^plan-${_EXEC_BRANCH}-//" | sed -E 's/-[0-9]{8}$//')
fi
if [ -z "${_EXEC_SESSION:-}" ]; then
  _EXEC_SESSION="execution-run"
fi
_EXEC_FILE="execution-${_EXEC_BRANCH}-${_EXEC_SESSION}-${_EXEC_DATE}.md"
mkdir -p docs/superomni/executions
cat > "docs/superomni/executions/${_EXEC_FILE}" << EOF
# Execution Results: ${_EXEC_BRANCH}

**Date:** ${_EXEC_DATE}
**Branch:** ${_EXEC_BRANCH}

[Paste the full PLAN EXECUTION COMPLETE block here]

## Wave Log
[Paste wave-by-wave summary: steps in each wave, outcomes]

## Steps Log
[Paste all step completion/blocked entries here]
EOF
echo "Execution results saved to docs/superomni/executions/${_EXEC_FILE}"
```
Write the full execution log (wave plan, all step outcomes + the final PLAN EXECUTION COMPLETE block, formatted as Markdown) to docs/superomni/executions/execution-[branch]-[session]-[date].md. This file serves as the permanent record of the execution run for the user to revisit.
Then trigger the complete VERIFY sequence in order:
1. `code-review` skill (giving mode) — structured code review of all changes
2. `qa` skill — test gap filling, edge case exploration
3. `verification` skill — evidence-based acceptance criteria check
Do NOT skip any step. Each skill must report DONE before the next is triggered. If any step reports BLOCKED or DONE_WITH_CONCERNS, stop and surface to user before continuing.