---
name: supaguard
version: 0.1.0
description: >
  Create, test, and deploy synthetic monitoring checks from source code using
  the supaguard CLI. Triggers on monitoring setup, Playwright script
  generation, uptime checks, health checks, and production observability
  workflows.
category: monitoring
tags: [monitoring, playwright, synthetic-monitoring, uptime, observability, health-checks]
recommended_skills: [playwright-testing, cli-design]
platforms:
  - claude-code
  - gemini-cli
  - openai-codex
license: MIT
maintainers:
  - github: maddhruv
---
When this skill is activated, always start your first response with the 🛡️ emoji.
# supaguard - synthetic monitoring from your codebase
supaguard is a synthetic monitoring platform. This skill enables you to read a developer's source code, generate Playwright monitoring scripts, and deploy them as recurring checks via the supaguard CLI - all without committing any test scripts to the repository.
## When to use this skill
Trigger this skill when the user:
- Wants to set up synthetic monitoring for their app
- Asks about uptime monitoring, health checks, or production observability
- Wants to generate Playwright scripts for monitoring (not testing)
- Asks about the supaguard CLI or mentions `supaguard` commands
- Wants to monitor login flows, checkout flows, or critical user journeys
- Needs to create, test, update, or manage monitoring checks
- Asks about alerting for monitoring failures
Do NOT trigger this skill for:
- Writing Playwright tests for CI/CD pipelines - use the `playwright-testing` skill
- General testing or QA workflows unrelated to production monitoring
- Building monitoring dashboards or custom observability platforms
## Workflow
Follow these steps every time a user asks you to create a monitoring check:
1. **Read source code** - scan components, routes, data-testids, API endpoints, and forms in the user's codebase
2. **Identify the critical flow** - determine what user journey to monitor (login, checkout, page load, etc.)
3. **Ask for the production URL** - if not obvious from code, env files, or the package.json `homepage` field
4. **Run pre-flight checks** - verify the CLI is installed and the user is authenticated (see below)
5. **Generate a Playwright script** - use the templates and best practices from this skill's references
6. **Write the script to `/tmp/sg-check-{random}.ts`** - NEVER write to the project directory
7. **Test via CLI** - `supaguard checks test /tmp/sg-check-{random}.ts --json`
8. **If the test fails** - read the error output, adjust the script, retry (max 3 attempts before asking the user)
9. **If the test passes** - ask about deployment (see deployment flow below)
10. **Deploy** - run the CLI command with the collected options
11. **Celebrate** - show the success banner and dashboard link (see success banner below)
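The write-test-retry part of the workflow can be sketched as pure logic, with the CLI invocation abstracted away. `runCliTest` and the `TestResult` shape are placeholders for illustration, not a real CLI wrapper:

```typescript
type TestResult = { ok: boolean; error?: string };

// Throwaway script path per the skill's convention: /tmp, "sg-check-" prefix,
// random suffix to avoid collisions between runs.
function tempScriptPath(): string {
  return `/tmp/sg-check-${Math.random().toString(36).slice(2, 10)}.ts`;
}

// Test the script via the CLI, retrying up to 3 times before giving up
// and asking the user for help.
function testWithRetries(
  runCliTest: (path: string) => TestResult,
  maxAttempts = 3,
): TestResult {
  const path = tempScriptPath();
  let last: TestResult = { ok: false };
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    last = runCliTest(path);
    if (last.ok) return last;
    // On failure: read last.error, adjust the script, try again.
  }
  return last; // still failing after maxAttempts: surface to the user
}
```

The cap on attempts keeps the agent from looping indefinitely on a script the runner will never accept.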
## Pre-flight checks
Before generating any script, verify:
- **CLI installed**: run `which supaguard`. If missing, tell the user: `npm install -g supaguard`
- **Authenticated**: run `supaguard whoami --json`. If not logged in, tell the user to run `! supaguard login` (the `!` prefix runs it in the current session for Claude Code)
- Note the active org from the whoami output - you'll need the org slug for API context
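The auth check can be sketched as follows. The `{ loggedIn, org: { slug } }` output shape is an assumption for illustration only; consult `references/cli-reference.md` for the actual `whoami --json` schema:

```typescript
// Assumed (hypothetical) shape of `supaguard whoami --json` output.
interface WhoamiOutput {
  loggedIn: boolean;
  org?: { slug: string; name?: string };
}

// Returns the active org slug, or null if the user is not authenticated
// (including the case where the CLI emitted something unparseable).
function activeOrgSlug(raw: string): string | null {
  let parsed: WhoamiOutput;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return null; // malformed output: treat as not logged in
  }
  return parsed.loggedIn && parsed.org ? parsed.org.slug : null;
}
```

A `null` here means the workflow should stop and direct the user to `! supaguard login` before generating any script.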
## Source code analysis
When analyzing the user's codebase, look for these patterns in priority order:
### DOM selectors (use the most stable available)
1. `data-testid` attributes - most stable, purpose-built for testing
2. `aria-label` and `role` attributes - accessible and stable
3. `id` attributes - stable but sometimes dynamic
4. Text content via `getByText()` - readable but locale-dependent
5. CSS classes - LAST RESORT, fragile and changes with redesigns
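The priority order above can be expressed as a small helper. The `ElementInfo` input shape is hypothetical; only the ordering mirrors this section:

```typescript
// Hypothetical summary of what a scanned element exposes.
interface ElementInfo {
  testId?: string;
  role?: string;
  id?: string;
  text?: string;
  cssClass?: string;
}

// Emit the most stable Playwright locator expression available,
// walking the priority list from data-testid down to CSS class.
function bestLocator(el: ElementInfo): string {
  if (el.testId) return `page.getByTestId(${JSON.stringify(el.testId)})`;
  if (el.role) return `page.getByRole(${JSON.stringify(el.role)})`;
  if (el.id) return `page.locator(${JSON.stringify("#" + el.id)})`;
  if (el.text) return `page.getByText(${JSON.stringify(el.text)})`;
  if (el.cssClass) return `page.locator(${JSON.stringify("." + el.cssClass)})`;
  throw new Error("no usable selector found");
}
```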
### Route discovery
- Next.js App Router: scan `app/` for `page.tsx` files, extract route patterns from the directory structure
- Next.js Pages Router: scan the `pages/` directory
- React Router: search for `<Route>` components, `path` props, router config files
- Vue Router: search for router config in `router/index.ts` or similar
- Generic: look for `<a href>` patterns, navigation components
### Form discovery
- Search for `<form>`, `<input>`, `<select>`, `<textarea>` elements
- Note form actions, validation patterns, submit handlers
- Identify auth forms (login, signup, password reset)
### API endpoint discovery
- Next.js: scan `app/api/` or `pages/api/` for route handlers
- Express/Fastify: search for `app.get()`, `app.post()`, router definitions
- Client-side: look for fetch/axios calls to identify external API dependencies
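As a rough sketch of the Express/Fastify scan, a regex pass over a source string can surface route registrations. This is a heuristic only; dynamically registered routes or nonstandard router variable names won't be found:

```typescript
// Match app.get("/path", ...) / router.post('/path', ...) style registrations.
function findRoutes(source: string): Array<{ method: string; path: string }> {
  const re = /\b(?:app|router)\.(get|post|put|patch|delete)\(\s*["'`]([^"'`]+)["'`]/g;
  const routes: Array<{ method: string; path: string }> = [];
  let m: RegExpExecArray | null;
  while ((m = re.exec(source)) !== null) {
    routes.push({ method: m[1].toUpperCase(), path: m[2] });
  }
  return routes;
}
```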
### Critical flows to monitor
- Authentication (login, signup, logout, password reset)
- Core product flows (dashboard load, data CRUD, search)
- Checkout/payment flows
- User settings and profile management
## Deployment flow
After a test passes, do NOT auto-deploy. Instead, ask the user interactively using `AskUserQuestion` - one question at a time, in this order:
### Step 1: Ask for a check name
Ask what they want to name this check. Suggest a sensible default based on the flow being monitored (e.g., "Login Flow", "Homepage Load", "Checkout").
### Step 2: Ask about scheduling

Use `AskUserQuestion` with these options:
- Scheduled (recurring) - runs automatically on a cron schedule from multiple regions
- On-demand only - no schedule, triggered manually via `supaguard checks run` or the dashboard
### Step 3: If scheduled - ask for regions

Use `AskUserQuestion` with multi-select. Options:

- US East (Virginia) - `eastus`
- EU North (Ireland) - `northeurope`
- India Central (Pune) - `centralindia`
Recommend selecting 2+ regions for geographic coverage.
### Step 4: If scheduled - ask for frequency

Use `AskUserQuestion` with options:
- Every 5 minutes (recommended)
- Every 10 minutes
- Every 15 minutes
- Every 30 minutes
- Every hour
- Other (let user specify a cron expression)
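A possible mapping from these choices to the cron expressions passed via `--cron` (the exact labels and the fallback default are illustrative):

```typescript
// Frequency label -> standard 5-field cron expression.
const FREQUENCY_CRON: Record<string, string> = {
  "Every 5 minutes": "*/5 * * * *",
  "Every 10 minutes": "*/10 * * * *",
  "Every 15 minutes": "*/15 * * * *",
  "Every 30 minutes": "*/30 * * * *",
  "Every hour": "0 * * * *",
};

// "Other" falls through to the user-supplied expression; the final
// fallback to every 5 minutes matches the recommended default.
function cronFor(choice: string, custom?: string): string {
  return FREQUENCY_CRON[choice] ?? custom ?? "*/5 * * * *";
}
```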
### Step 5: Deploy
For scheduled checks:
```bash
supaguard checks create /tmp/sg-check-{random}.ts --name "Check Name" --locations eastus,northeurope --cron "*/5 * * * *" --skip-test --json
```
For on-demand checks, deploy with a very long interval then pause:
```bash
supaguard checks create /tmp/sg-check-{random}.ts --name "Check Name" --locations eastus --cron "0 0 1 1 *" --skip-test --json
```
Then immediately pause it:
```bash
supaguard checks pause <checkId> --json
```
Tell the user they can trigger runs manually with `supaguard checks run <checkId> --json` or from the dashboard.
Note: use `--skip-test` since the script was already tested in step 7.
### Step 6: Offer alerting
After deployment, ask if they want to set up alerting. See `references/modules-and-alerting.md` for details.
## Success banner
After a check is successfully deployed, display this celebration followed by the dashboard link. Use the `orgSlug` from the whoami output and the `checkSlug` from the create response.
```
╔═════════════════════════════════════════╗
║  supaguard check deployed successfully  ║
╚═════════════════════════════════════════╝
```
Then output:
```
name:      {checkName}
schedule:  {frequency or "on-demand"}
regions:   {region list or "paused"}
dashboard: https://supaguard.app/dashboard/{orgSlug}/checks/{checkSlug}
```
The dashboard URL format is `https://supaguard.app/dashboard/{orgSlug}/checks/{checkSlug}` where:

- `orgSlug` comes from `supaguard whoami --json` (the `org.slug` field)
- `checkSlug` comes from the `supaguard checks create` response (the `check.slug` field)
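A minimal helper capturing this URL format:

```typescript
// Build the dashboard link from the two slugs described above.
function dashboardUrl(orgSlug: string, checkSlug: string): string {
  return `https://supaguard.app/dashboard/${orgSlug}/checks/${checkSlug}`;
}
```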
## Constraints
These are hard rules. Follow them without exception:
- NEVER write Playwright scripts to the user's project directory - always use `/tmp/sg-check-*.ts`
- NEVER commit monitoring scripts to git
- Scripts MUST contain `import { test, expect } from "@playwright/test"`
- Scripts MUST contain at least one `test()` or `test.describe()` block
- Scripts MUST NOT import from forbidden Node.js modules: `child_process`, `fs`, `net`, `dgram`, `cluster`, `worker_threads`, `vm`, `http`, `https`
- Scripts MUST NOT use `eval()`, `Function()`, `process.exit`, `process.kill`, or dynamic `import()`
- Scripts MUST NOT use `console.log` - use Playwright assertions instead
- Scripts should complete in under 60 seconds (runner timeout is 60s, per-test timeout is 30s)
- Always use the `--json` flag when calling supaguard CLI commands - parse the JSON output to determine success/failure
- When a test fails, iterate on the script (read the error output, fix, retry) - max 3 attempts before asking the user for help
- Always include the production URL in scripts - ask the user if it's not obvious from code or environment configs
- DO NOT use React Testing Library APIs (`getByDisplayValue`, `queryByText`, `findByRole`, etc.) - use Playwright's native `page.getBy*()` methods
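Several of these constraints can be checked mechanically before calling the CLI. A sketch of such a lint pass (regex-based, so a heuristic rather than a real parser):

```typescript
const FORBIDDEN_MODULES = [
  "child_process", "fs", "net", "dgram", "cluster",
  "worker_threads", "vm", "http", "https",
];

// Return a list of constraint violations found in a generated script.
function lintScript(source: string): string[] {
  const problems: string[] = [];
  if (!source.includes('from "@playwright/test"') &&
      !source.includes("from '@playwright/test'")) {
    problems.push("missing @playwright/test import");
  }
  if (!/\btest(\.describe)?\s*\(/.test(source)) {
    problems.push("no test() or test.describe() block");
  }
  for (const mod of FORBIDDEN_MODULES) {
    // Matches both `from "fs"` and `from "node:fs"` import specifiers.
    const re = new RegExp(`from ["'](node:)?${mod}["']`);
    if (re.test(source)) problems.push(`forbidden module: ${mod}`);
  }
  if (/\bconsole\.log\s*\(/.test(source)) problems.push("console.log is not allowed");
  if (/\beval\s*\(/.test(source)) problems.push("eval() is not allowed");
  return problems;
}
```

Running this before `supaguard checks test` turns the runner's "cryptic error at runtime" (see Gotchas) into an actionable message up front.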
## Anti-patterns / common mistakes
| Mistake | Why it is wrong | What to do instead |
|---|---|---|
| Writing scripts to the project directory | Pollutes the codebase with monitoring artifacts | Always write to `/tmp/sg-check-*.ts` |
| Using `page.waitForTimeout()` | Makes checks flaky and wastes runner time | Use `waitForSelector()`, `waitForResponse()`, or Playwright assertions |
| Asserting on CSS classes | Breaks on redesigns, not meaningful for monitoring | Assert on text content, roles, testids, or visibility |
| Using React Testing Library APIs | Not available in the Playwright runner | Use `page.getBy*()` methods: `getByTestId`, `getByRole`, `getByText` |
| Monitoring too many flows in one check | Hard to diagnose failures, exceeds timeout | Keep one logical flow per check |
| Hardcoding credentials in scripts | Security risk, scripts are stored in the cloud | Use test accounts or environment variables |
| Skipping pre-flight checks | Leads to confusing errors mid-workflow | Always verify CLI install and auth first |
| Auto-deploying without asking | User should control scheduling and regions | Always ask before deploying |
| Omitting the `--json` flag | Human-readable output is hard to parse programmatically | Always use `--json` for structured output |
## Gotchas
- **Forbidden module imports** - The supaguard runner sandboxes scripts and blocks `fs`, `child_process`, `net`, `http`, `https`, `vm`, and other Node.js built-ins. Scripts that import these will fail at runtime with a cryptic error. Stick to `@playwright/test` and the allowed npm packages listed in `references/playwright-guide.md`.
- **Runner timeout is 60 seconds** - Scripts that navigate through too many pages or wait on slow third-party resources will time out. Keep checks focused on a single flow and set individual test timeouts to 30 seconds.
- **React Testing Library confusion** - Methods like `getByDisplayValue`, `queryByText`, and `findByRole` are NOT Playwright APIs. Playwright has similarly named but different methods: `page.getByTestId()`, `page.getByRole()`, `page.getByText()`. Mixing these up causes runtime errors.
- **On-demand checks still need a cron** - The CLI requires `--cron` even for on-demand checks. Use a far-future cron (`"0 0 1 1 *"`) and immediately pause the check after creation.
- **Cookie consent banners** - Production sites often have cookie banners that block interaction. Always handle these before proceeding with the main flow. Look for common patterns: `[data-testid="cookie-accept"]`, buttons with text "Accept", or known cookie consent libraries.
## References
For detailed guidance, load these reference files as needed:
- `references/cli-reference.md` - full CLI command reference with flags and `--json` output shapes
- `references/playwright-guide.md` - monitoring best practices, anti-patterns, and script templates
- `references/framework-patterns.md` - framework-specific code analysis patterns
- `references/modules-and-alerting.md` - shared modules, alerting, regions, and scheduling
Only load a reference file if the current task requires it - they are long and will consume context.
## Companion check
On first activation of this skill in a conversation: check which companion skills are installed by running `ls ~/.claude/skills/ ~/.agent/skills/ ~/.agents/skills/ .claude/skills/ .agent/skills/ .agents/skills/ 2>/dev/null`. Compare the results against the `recommended_skills` field in this file's frontmatter. For any that are missing, mention them once and offer to install: `npx skills add AbsolutelySkilled/AbsolutelySkilled --skill <name>`. Skip entirely if `recommended_skills` is empty or all companions are already installed.