---
name: simulation-orchestrator
description: >
  Orchestrate multi-simulation campaigns: generate parameter sweep
  configurations (grid, linspace, or Latin Hypercube sampling), initialize
  and track batch job campaigns, monitor job completion status, and aggregate
  results with summary statistics across all runs. Use when running a
  parameter study across dt, kappa, or other simulation inputs, managing
  dozens or hundreds of simulation configurations, combining outputs from
  completed batch runs to find the best result, or automating the
  generate-run-collect workflow for systematic studies, even if the user only
  says "I need to try many parameter combinations" or "how do I organize a
  sweep."
allowed-tools: Read, Write, Grep, Glob
metadata:
  author: HeshamFS
  version: "1.1.0"
  security_tier: medium
  security_reviewed: true
  tested_with:
    - claude-code
    - gemini-cli
    - vs-code-copilot
  eval_cases: 2
  last_reviewed: "2026-03-26"
---
# Simulation Orchestrator

## Goal
Provide tools to manage multi-simulation campaigns: generate parameter sweeps, track job execution status, and aggregate results from completed runs.
## Requirements
- Python 3.10+
- No external dependencies (uses Python standard library only)
- Works on Linux, macOS, and Windows
## Inputs to Gather
Before running orchestration scripts, collect from the user:
| Input | Description | Example |
|---|---|---|
| Base config | Template simulation configuration | `base_config.json` |
| Parameter ranges | Parameters to sweep with bounds | `dt:[1e-4,1e-2],kappa:[0.1,1.0]` |
| Sweep method | How to sample parameter space | `grid`, `lhs`, `linspace` |
| Output directory | Where to store campaign files | `./campaign_001` |
| Simulation command | Command to run each simulation | `python sim.py --config {config}` |
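The base config is an ordinary JSON file; the generator presumably copies it once per run, overriding the swept keys. A minimal hypothetical example (all field names illustrative except `dt` and `kappa`):

```json
{
  "dt": 1e-3,
  "kappa": 0.5,
  "steps": 10000,
  "output": "results/result.json"
}
```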
## Decision Guidance

### Choosing a Sweep Method
```
Need every combination (full factorial)?
├── YES → Use grid (warning: exponential growth with parameters)
└── NO → Is space-filling coverage needed?
         ├── YES → Use lhs (Latin Hypercube Sampling)
         └── NO → Use linspace for uniform sampling per parameter
```
| Method | Best For | Sample Count |
|---|---|---|
| `grid` | Low dimensions (1-3), need exact corners | n^d (exponential) |
| `linspace` | 1D sweeps, uniform spacing | n per parameter |
| `lhs` | High dimensions, space-filling | User-specified budget |
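To make the `lhs` option concrete, here is a minimal stdlib-only sketch of Latin Hypercube Sampling. This illustrates the technique, not the shipped `sweep_generator.py` implementation:

```python
# Illustrative LHS: one stratified draw per dimension, shuffled
# independently per column so each row is one sample point.
import random

def lhs_samples(bounds: dict[str, tuple[float, float]], n: int,
                seed: int | None = None) -> list[dict[str, float]]:
    rng = random.Random(seed)
    columns = {}
    for name, (lo, hi) in bounds.items():
        # One uniform draw per stratum [i/n, (i+1)/n), then shuffle.
        strata = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(strata)
        columns[name] = [lo + u * (hi - lo) for u in strata]
    return [{name: col[i] for name, col in columns.items()} for i in range(n)]

points = lhs_samples({"dt": (1e-4, 1e-2), "kappa": (0.1, 1.0)}, n=5, seed=42)
```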
### Campaign Size Guidelines
| Parameters | Grid Points Each | Total Runs | Recommendation |
|---|---|---|---|
| 1 | 10 | 10 | Grid is fine |
| 2 | 10 | 100 | Grid acceptable |
| 3 | 10 | 1,000 | Consider LHS |
| 4+ | 10 | 10,000+ | Use LHS or DOE |
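The exponential growth in the table comes from taking the Cartesian product of the per-parameter value lists. A minimal illustration (hypothetical helper, not the shipped script):

```python
# Full-factorial expansion: the run count is the product of the
# per-parameter value counts, so 10 values over 4 parameters is 10,000 runs.
from itertools import product

def grid_points(values: dict[str, list[float]]) -> list[dict[str, float]]:
    names = list(values)
    return [dict(zip(names, combo)) for combo in product(*values.values())]

runs = grid_points({"dt": [1e-4, 1e-3, 1e-2], "kappa": [0.1, 0.5, 1.0]})
assert len(runs) == 3 * 3  # n^d growth
```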
## Script Outputs (JSON Fields)

| Script | Output Fields |
|---|---|
| `scripts/sweep_generator.py` | `configs`, `parameter_space`, `sweep_method`, `total_runs` |
| `scripts/campaign_manager.py` | `campaign_id`, `status`, `jobs`, `progress` |
| `scripts/job_tracker.py` | `job_id`, `status`, `start_time`, `end_time`, `exit_code` |
| `scripts/result_aggregator.py` | `summary`, `statistics`, `best_run`, `failed_runs` |
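For orientation, a `sweep_generator.py --json` response could look like this (field names from the table above; values are illustrative):

```json
{
  "sweep_method": "linspace",
  "total_runs": 15,
  "parameter_space": {"dt": [1e-4, 1e-2], "kappa": [0.1, 1.0]},
  "configs": ["campaign_001/run_0000/config.json", "..."]
}
```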
## Workflow

### Step 1: Generate Parameter Sweep

Create configurations for all parameter combinations:

```bash
python3 scripts/sweep_generator.py \
  --base-config base_config.json \
  --params "dt:1e-4:1e-2:5,kappa:0.1:1.0:3" \
  --method linspace \
  --output-dir ./campaign_001 \
  --json
```
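The `--params` string uses the grammar `name:min:max:count` (count omitted for `lhs`, which takes `--samples` instead). A hedged sketch of how such a string might be parsed and validated; the shipped parser may differ in details:

```python
# Hypothetical parser for the documented --params grammar.
import math, re

NAME_RE = re.compile(r"[a-zA-Z_][a-zA-Z0-9_]*")

def parse_params(spec: str) -> dict[str, tuple]:
    parsed = {}
    for entry in spec.split(","):
        parts = entry.split(":")
        if len(parts) not in (3, 4):
            raise ValueError(f"expected name:min:max[:count], got {entry!r}")
        name, lo, hi = parts[0], float(parts[1]), float(parts[2])
        if not NAME_RE.fullmatch(name):
            raise ValueError(f"invalid parameter name: {name!r}")
        if not (math.isfinite(lo) and math.isfinite(hi) and lo < hi):
            raise ValueError(f"bounds must be finite with min < max: {entry!r}")
        count = int(parts[3]) if len(parts) == 4 else None
        if count is not None and count < 1:
            raise ValueError(f"count must be a positive integer: {entry!r}")
        parsed[name] = (lo, hi, count)
    return parsed

parse_params("dt:1e-4:1e-2:5,kappa:0.1:1.0:3")
```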
### Step 2: Initialize Campaign

Create campaign tracking structure:

```bash
python3 scripts/campaign_manager.py \
  --action init \
  --config-dir ./campaign_001 \
  --command "python sim.py --config {config}" \
  --json
```
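The resulting campaign manifest might look roughly like this (illustrative values; field names follow the Script Outputs table):

```json
{
  "campaign_id": "campaign_001",
  "status": "initialized",
  "progress": {"completed": 0, "running": 0, "pending": 15},
  "jobs": [
    {"job_id": "run_0000", "status": "pending",
     "start_time": null, "end_time": null, "exit_code": null}
  ]
}
```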
### Step 3: Track Job Status

Monitor running jobs:

```bash
python3 scripts/job_tracker.py \
  --campaign-dir ./campaign_001 \
  --update \
  --json
```
### Step 4: Aggregate Results

Combine results from completed runs:

```bash
python3 scripts/result_aggregator.py \
  --campaign-dir ./campaign_001 \
  --metric objective_value \
  --json
```
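Conceptually, aggregation reduces to reading each run's result JSON and computing statistics over one metric. A simplified sketch, assuming a hypothetical `run_*/result.json` layout (the shipped `result_aggregator.py` adds the size and type checks described under Security):

```python
# Sketch of the aggregation idea: collect one metric per run,
# compute summary statistics, and report the best run
# (assuming lower metric values are better).
import json, statistics
from pathlib import Path

def aggregate(campaign_dir: str, metric: str) -> dict:
    values, failed = {}, []
    for path in sorted(Path(campaign_dir).glob("run_*/result.json")):
        try:
            value = json.loads(path.read_text())[metric]
            if isinstance(value, bool) or not isinstance(value, (int, float)):
                raise TypeError(f"{metric} is not numeric")
            values[path.parent.name] = value
        except (OSError, ValueError, KeyError, TypeError):
            failed.append(path.parent.name)
    if not values:
        raise RuntimeError("no completed jobs with the requested metric")
    return {
        "summary": {"count": len(values), "failed": len(failed)},
        "statistics": {"mean": statistics.fmean(values.values()),
                       "min": min(values.values()),
                       "max": max(values.values())},
        "best_run": min(values, key=values.get),
        "failed_runs": failed,
    }
```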
## CLI Examples

```bash
# Generate 5x3=15 runs varying dt (5 values) and kappa (3 values)
python3 scripts/sweep_generator.py \
  --base-config sim.json \
  --params "dt:1e-4:1e-2:5,kappa:0.1:1.0:3" \
  --method linspace \
  --output-dir ./sweep_001 \
  --json

# Generate LHS samples for 4 parameters with a budget of 20 runs
python3 scripts/sweep_generator.py \
  --base-config sim.json \
  --params "dt:1e-4:1e-2,kappa:0.1:1.0,M:1e-6:1e-4,W:0.5:2.0" \
  --method lhs \
  --samples 20 \
  --output-dir ./lhs_001 \
  --json

# Check campaign status
python3 scripts/campaign_manager.py \
  --action status \
  --config-dir ./sweep_001 \
  --json

# Get summary statistics from completed runs
python3 scripts/result_aggregator.py \
  --campaign-dir ./sweep_001 \
  --metric final_energy \
  --json
```
## Conversational Workflow Example

User: I want to run a parameter sweep on dt and kappa for my phase-field simulation. I want to try 5 values of dt between 1e-4 and 1e-2, and 4 values of kappa between 0.1 and 1.0.

Agent workflow:

- Calculate total runs: 5 x 4 = 20 runs
- Generate sweep configurations:

  ```bash
  python3 scripts/sweep_generator.py \
    --base-config simulation.json \
    --params "dt:1e-4:1e-2:5,kappa:0.1:1.0:4" \
    --method linspace \
    --output-dir ./dt_kappa_sweep \
    --json
  ```

- Initialize campaign:

  ```bash
  python3 scripts/campaign_manager.py \
    --action init \
    --config-dir ./dt_kappa_sweep \
    --command "python phase_field.py --config {config}" \
    --json
  ```

- After the user runs the simulations, aggregate results:

  ```bash
  python3 scripts/result_aggregator.py \
    --campaign-dir ./dt_kappa_sweep \
    --metric interface_width \
    --json
  ```
## Error Handling

| Error | Cause | Resolution |
|---|---|---|
| Base config not found | Invalid file path | Verify base config file exists |
| Invalid parameter format | Malformed param string | Use format `name:min:max:count` or `name:min:max` |
| Output directory exists | Would overwrite | Use `--force` or choose a new directory |
| No completed jobs | No results to aggregate | Wait for jobs to complete or check for failures |
| Metric not found | Result files missing field | Verify metric name in result JSON |
## Integration with Other Skills

The simulation-orchestrator works with other simulation-workflow skills:

```
parameter-optimization          simulation-orchestrator
        │                               │
        │  DOE samples ────────────────>│  Generate configs
        │                               │
        │                               │  Run simulations
        │                               │
        │<──────────────────────────────│  Aggregate results
        │                               │
        │  Sensitivity analysis         │
        │  Optimizer selection          │
```
### Typical Combined Workflow

- Use `parameter-optimization/doe_generator.py` to get sample points
- Use `simulation-orchestrator/sweep_generator.py` to create configs
- Run simulations (user's responsibility)
- Use `simulation-orchestrator/result_aggregator.py` to collect results
- Use `parameter-optimization/sensitivity_summary.py` to analyze
## Security

### Input Validation

- Metric names are validated against `[a-zA-Z_][a-zA-Z0-9_.]*` to prevent traversal or injection via crafted keys
- `campaign_manager.py` validates command templates to reject shell chaining operators (`;`, `|`, `&`, backticks, `$`)
- `--params` format strings are parsed and validated (`name:min:max:count` with finite numeric bounds and positive integer counts)
- `--method` is validated against a fixed allowlist (`grid`, `linspace`, `lhs`)
- `--samples` is validated as a positive integer with an upper bound
- `--action` is validated against a fixed allowlist (`init`, `status`)
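A hedged sketch of the first two rules, using stdlib `re` (the shipped scripts may implement them differently):

```python
# Sketch of metric-name and command-template validation as described above.
import re

METRIC_RE = re.compile(r"[a-zA-Z_][a-zA-Z0-9_.]*")
SHELL_OPERATORS = (";", "|", "&", "`", "$")

def validate_metric(name: str) -> str:
    if not METRIC_RE.fullmatch(name):
        raise ValueError(f"invalid metric name: {name!r}")
    return name

def validate_command_template(template: str) -> str:
    if any(op in template for op in SHELL_OPERATORS):
        raise ValueError("command template contains shell chaining operators")
    return template
```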
### File Access

- `sweep_generator.py` reads a single base config file (JSON) specified by `--base-config` and writes generated configs to `--output-dir`
- `result_aggregator.py` enforces a 10 MB file-size limit per result file, a maximum JSON nesting depth, and strict numeric type checking (rejects `bool`, `NaN`, `Inf`)
- All string values from result files are sanitized (truncated, control characters stripped) before surfacing them
- Config paths interpolated into shell commands are validated against a safe-character allowlist and escaped with `shlex.quote()`
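The escaping step might look like this sketch, with a hypothetical safe-character allowlist:

```python
# Sketch of path validation and escaping before template interpolation;
# SAFE_PATH_RE is an assumed allowlist, shlex.quote is stdlib.
import re, shlex

SAFE_PATH_RE = re.compile(r"[A-Za-z0-9._/\-]+")

def render_command(template: str, config_path: str) -> str:
    if not SAFE_PATH_RE.fullmatch(config_path):
        raise ValueError(f"unsafe characters in path: {config_path!r}")
    return template.replace("{config}", shlex.quote(config_path))

render_command("python sim.py --config {config}",
               "campaign_001/run_0000/config.json")
```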
### Tool Restrictions
- Read: Used to inspect script source, references, base configs, and campaign status files
- Write: Used to save generated sweep configs, campaign manifests, and aggregated results; writes are scoped to the user's working directory
- Grep/Glob: Used to locate campaign files, result files, and search references
- The skill's `allowed-tools` excludes `Bash` to prevent the agent from executing arbitrary commands when processing untrusted simulation outputs
### Safety Measures

- No `eval()`, `exec()`, or dynamic code generation
- All subprocess calls use explicit argument lists (no `shell=True`)
- Reduced tool surface (no Bash) limits the agent to read/write operations only
- Command templates are validated but never executed by the skill itself; execution is the user's responsibility
## Limitations
- Not a job scheduler: Does not submit jobs to SLURM/PBS; generates configs and tracks status
- No parallel execution: User must run simulations externally (can use GNU parallel, SLURM, etc.)
- File-based tracking: Status tracked via files; no database or real-time monitoring
- Local filesystem: Assumes all files accessible from local machine
## References

- `references/campaign_patterns.md` - Common campaign structures
- `references/sweep_strategies.md` - Parameter sweep design guidance
- `references/aggregation_methods.md` - Result aggregation techniques
## Version History
- v1.0.0 (2024-12-24): Initial release with sweep, campaign, tracking, and aggregation