---
name: hook-rule-interviewer
description: Interview the user to design a project-level Hookify V2 or Pi hook rule set for a repository. Use this whenever the user wants help defining `.pi/hookify.*.local.yaml` rules, choosing `on / when / do / respond`, designing repo-wide guardrails, approvals, command or file safety checks, prompt transforms, or rollout policy, or asks things like "design my hook rules", "what hooks should I enable", "repo guardrails", "制定项目级 hook 规则", or "帮我设计项目级 guardrails", even if they do not mention Hookify by name.
---
# Hook Rule Interviewer
Help the user turn vague guardrail ideas into a concrete project-level hook policy.
This skill is interview-first. Do not jump straight into writing rule files unless the user explicitly asks for implementation. Your main job is to:
- inspect the repo enough to understand risks and workflows
- run a short focused interview
- produce a clear hook policy and rule matrix
- only then draft concrete Hookify or Pi hook rules if asked
## Goal
By the end of the interview, produce a project-level hook plan that answers:
- which event surfaces matter for this repo
- what each hook should do
- where the user wants warnings vs blocking vs transforms vs approvals
- what rollout strategy and exceptions the team wants
## Read First
Before asking questions, quickly inspect the repository for signals that reduce guesswork:
- Existing hook or policy files:
  `.pi/hookify*.yaml`, `.pi/settings.json`, `.claude/`, `.agents/`, `.github/`, `.husky/`, or similar automation folders
- Project shape and tooling:
  `package.json`, `pyproject.toml`, `Cargo.toml`, `go.mod`, `Dockerfile`, `README.md`, `docs/`, workflow files, test scripts
- Risk-heavy areas:
- deploy or release scripts
- secrets or env handling
- destructive shell usage
- generated files, lockfiles, or migrations
- Read `references/event-playbook.md` from this skill when you need a quick mapping from repo concern to Pi hook surface.
- Read `references/output-template.md` before presenting the final interview result so your structure stays consistent.
## Repo-Specific Fit
When this skill is used inside pi-hookify, bias your recommendations toward the current Hookify V2 design:
- YAML rule files under `.pi/`
- the `on / when / do / respond` DSL
- Pi-native event names such as `input`, `tool_call`, `tool_result`, `before_agent_start`, `context`, and `user_bash`
- no backward-compatibility assumptions for the old markdown/frontmatter format
If the user wants implementation after the interview, draft rules that match the current V2 schema rather than proposing legacy Hookify formats.
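To make the V2 shape concrete, here is a minimal single-rule sketch. Everything below beyond the `on / when / do / respond` keys named in this skill is an assumption — the rule name, matcher fields, and action values are invented for illustration and must be verified against the actual Hookify V2 schema before use:

```yaml
# .pi/hookify.guardrails.local.yaml — illustrative sketch only.
# Keys follow the on/when/do/respond shape described above; the exact
# V2 field names and matcher syntax are assumptions, not the spec.
rules:
  - name: warn-on-lockfile-edit
    on: tool_call            # fires when the agent is about to invoke a tool
    when:
      tool: edit
      file: "package-lock.json"
    do: warn                 # advisory only; does not block the call
    respond: "Lockfiles are generated; edit package.json instead."
```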
## Interview Strategy
Keep the interview tight. Ask only what the repo cannot answer.
Prefer `AskUserQuestion` when you need structured tradeoffs, especially for:
- rollout mode
- warning vs block behavior
- approval policy
- scope or priority tradeoffs
- per-tool or per-team preferences
### Always cover
- Primary goal
- What is the user optimizing for: safety, compliance, review quality, workflow consistency, release control, or productivity?
- Hook surfaces
  - Which surfaces matter: `input`, `tool_call`, `tool_result`, `before_agent_start`, `context`, `user_bash`, `session_before_*`, `before_provider_request`, or observe-only audit hooks?
- Enforcement mode
- For each important surface, should the system notify, ask, request approval, transform, patch, block, or only observe?
- Rollout mode
- Should the team start with audit or warn-only, or go straight to blocking for high-risk cases?
- Exceptions
- Which files, commands, users, tools, branches, or workflows need exceptions?
### Ask when relevant
- If the repo has release or deploy workflows: ask about release gating and approval boundaries.
- If secrets or credentials are present: ask whether detection is advisory or blocking.
- If the repo has multiple languages or toolchains: ask whether rules should be global or scoped by tool or file type.
- If the repo already has existing automation: ask whether Hookify should complement it or replace parts of it.
- If the team seems unsure about strictness: propose a staged rollout and ask them to choose it.
## Recommended Question Order
1. What failures or mistakes are they trying to prevent?
2. What behaviors should always be intercepted?
3. Which ones should be warn-only at first?
4. Which tools or files are most sensitive?
5. What exceptions are legitimate and frequent?
6. How should the team validate the policy before broad rollout?
## Output Contract
Always present your interview result using this structure:
1. Project Hook Policy Brief
2. Hook Coverage Matrix
3. Candidate Rule Set
4. Rollout Plan
5. Open Questions / Risks

Use the template in `references/output-template.md`.
## Hook Coverage Matrix Rules
For each high-signal hook, specify:
- event name
- target scope or trigger
- intended behavior
- severity or rollout mode
- rationale
- exceptions or notes
Be concrete. The matrix should be close enough that another agent can implement it without redoing the interview.
## Candidate Rule Set Guidance
When proposing rules:
- prefer a small number of high-signal rules over a noisy long list
- separate audit rules from blocking rules
- keep names stable and descriptive
- group by event family and business purpose
- call out which rules belong in project scope vs user-local scope
If the user asks for actual Hookify V2 rules, map the matrix into YAML rule files under `.pi/`, using the current `on / when / do / respond` rule shape.
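As a hedged illustration of these guidelines — rule names, matcher keys, and action values here are invented for the example, not taken from the V2 spec — a small rule set might separate audit from blocking like this:

```yaml
# Illustrative only — verify every key against the current V2 schema.
rules:
  # --- blocking: high-risk, irreversible actions ---
  - name: block-force-push
    on: tool_call
    when:
      tool: bash
      command_matches: "git push .*--force"
    do: block
    respond: "Force-pushes require human approval in this repo."

  # --- audit: observe-only, used during staged rollout ---
  - name: audit-migration-edits
    on: tool_call
    when:
      file: "migrations/**"
    do: observe
    respond: "Noted: migration file touched (audit only)."
```

Grouping by event family and enforcement mode this way keeps the file readable and makes it obvious which rules can be promoted from audit to block later.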
## Decision Heuristics
Use these defaults unless the user gives a better repo-specific answer:
- start with audit or warn-only for medium-risk workflow issues
- block or require approval for destructive shell, secret leakage, irreversible release actions, or branch-critical operations
- use `before_agent_start` and `context` sparingly; reserve them for policy framing and context cleanup
- use `tool_call` for the majority of concrete enforcement
- use `tool_result` when the team wants annotation, sanitization, or structured post-processing
- use `user_bash` only when the team cares about direct `!` shell behavior from the operator
- keep observe-only hooks for audit trails, metrics, and gradual rollout learning
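For instance, the `tool_result` heuristic above might look like the following sketch. The `transform` action and matcher keys are hypothetical stand-ins for whatever the V2 schema actually provides; treat every field name as an assumption:

```yaml
# Hypothetical sketch: sanitize secret-looking values out of tool
# output before the model sees it. Field names are assumptions,
# not the V2 spec.
rules:
  - name: redact-env-values
    on: tool_result
    when:
      output_matches: "(AWS_SECRET|API_KEY|TOKEN)="
    do: transform            # rewrite the result rather than block it
    respond: "Secret-looking values were redacted from tool output."
```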
## If the User Wants Implementation
After the interview is accepted:
- restate the approved rule matrix compactly
- identify which files or directories the new rules should live in
- draft the actual YAML rules in `.pi/`
- suggest a validation path such as a smoke test, a dry run, or a staged rollout
Do not silently implement while requirements are still fuzzy.
## Success Signals
You are done when:
- the user can see which hooks matter and why
- each important hook has an intended behavior and rollout mode
- exceptions are captured explicitly
- the output is implementation-ready or very close
- another agent could take the interview result and build the rules without repeating discovery