---
name: token-optimize
description: Optimize Claude Code for token efficiency. Run with 'setup' to install optimizations or 'audit' to check current health.
---
# Token Optimizer Skill
You are a Claude Code token optimization expert. When this skill is invoked, determine the mode from the user's message:
- `/token-optimize setup` — Full installation of token-saving configurations
- `/token-optimize audit` — Analyze current configuration and report inefficiencies
- `/token-optimize hooks` — Install only the hook scripts
- `/token-optimize settings` — Apply only the settings.json optimizations
- No argument or `help` — Show available modes
## SETUP MODE

When the user runs `/token-optimize setup`, perform ALL of the following steps in order. Ask before overwriting any existing configuration.
### Step 1: Detect Project Context
- Check if `.claude/settings.json` exists in the project root
- Check if `~/.claude/settings.json` exists (user-level)
- Check if `CLAUDE.md` exists
- Check if the `~/.claude/hooks/` directory exists
- Detect the project's language/framework by reading `package.json`, `Cargo.toml`, `go.mod`, `pyproject.toml`, `requirements.txt`, `Gemfile`, etc.
- Detect the test runner (`npm test`, `pytest`, `go test`, `cargo test`, `rspec`, etc.)
- Report what you found before proceeding
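The manifest-to-framework mapping above can be sketched as a small helper that checks files in a fixed priority order (`detect_framework` is an illustrative name, not part of the skill):

```shell
#!/bin/sh
# Sketch: map manifest files to a framework label, checked in a fixed order.
# detect_framework is an illustrative helper name, not part of the skill.
detect_framework() {
  dir="${1:-.}"
  if   [ -f "$dir/package.json" ]; then echo "node"
  elif [ -f "$dir/Cargo.toml" ];   then echo "rust"
  elif [ -f "$dir/go.mod" ];       then echo "go"
  elif [ -f "$dir/pyproject.toml" ] || [ -f "$dir/requirements.txt" ]; then echo "python"
  elif [ -f "$dir/Gemfile" ];      then echo "ruby"
  else echo "unknown"
  fi
}
detect_framework .
```

Checking in a fixed order keeps mixed repos (e.g. a Python project with a stray `package.json` for tooling) deterministic; ambiguous cases should still be reported to the user per the last bullet.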
### Step 2: Configure settings.json

Create or update the project-level `.claude/settings.json` with these optimizations. Merge with existing settings — never overwrite user customizations.

Add these settings (adapt to what already exists):
```json
{
  "permissions": {
    "allow": []
  },
  "env": {
    "MAX_THINKING_TOKENS": "10000"
  },
  "hooks": {
    "PreToolUse": [],
    "PostToolUse": []
  }
}
```
For the `permissions.allow` array, add safe read-only and test commands based on the detected framework:

- All projects: `"Bash(git status)"`, `"Bash(git diff *)"`, `"Bash(git log *)"`, `"Bash(git branch *)"`, `"Bash(git stash list)"`, `"Bash(ls *)"`, `"Bash(cat *)"`, `"Bash(wc *)"`, `"Bash(head *)"`, `"Bash(tail *)"`
- Node.js: `"Bash(npm test *)"`, `"Bash(npm run lint *)"`, `"Bash(npm run build *)"`, `"Bash(npx tsc --noEmit *)"`, `"Bash(npx prettier *)"`
- Python: `"Bash(pytest *)"`, `"Bash(python -m pytest *)"`, `"Bash(ruff check *)"`, `"Bash(mypy *)"`
- Go: `"Bash(go test *)"`, `"Bash(go build *)"`, `"Bash(go vet *)"`, `"Bash(golangci-lint *)"`
- Rust: `"Bash(cargo test *)"`, `"Bash(cargo build *)"`, `"Bash(cargo clippy *)"`
- Ruby: `"Bash(bundle exec rspec *)"`, `"Bash(rubocop *)"`
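To honor the merge-don't-overwrite rule, the new entries can be unioned into any existing `allow` list with `jq` (a sketch assuming `jq` is installed; the entries shown are a subset of the all-projects list above):

```shell
#!/bin/sh
# Sketch: union new allow entries into .claude/settings.json without
# clobbering entries the user already added. Assumes jq is installed.
settings=".claude/settings.json"
new_allow='["Bash(git status)", "Bash(git diff *)", "Bash(ls *)"]'
mkdir -p "$(dirname "$settings")"
[ -f "$settings" ] || echo '{}' > "$settings"
jq --argjson add "$new_allow" \
   '.permissions.allow = ((.permissions.allow // []) + $add | unique)' \
   "$settings" > "$settings.tmp" && mv "$settings.tmp" "$settings"
```

`unique` deduplicates when an entry is already present, so re-running setup is idempotent; writing to a temp file first avoids truncating the settings file if `jq` fails.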
### Step 3: Install Hook Scripts

Create the hooks directory at `~/.claude/hooks/` if it doesn't exist. Install these scripts and make them executable (`chmod +x`):
**a) `~/.claude/hooks/filter-test-output.sh`**

```bash
#!/bin/bash
# Filters test command output to show only failures, reducing token usage by ~75%
input=$(cat)
cmd=$(echo "$input" | jq -r '.tool_input.command // empty' 2>/dev/null)
if [ -z "$cmd" ]; then
  exit 0
fi
# Match common test runners
if echo "$cmd" | grep -qE '^(npm test|npx jest|npx vitest|pytest|python -m pytest|go test|cargo test|bundle exec rspec|ruby -Itest|php artisan test|dotnet test|gradle test|mvn test)'; then
  # Wrap the command to filter output, showing only failures + summary.
  # Build the reply with jq so quotes in $cmd cannot produce invalid JSON.
  jq -cn --arg c "($cmd) 2>&1 | tail -80" \
    '{hookSpecificOutput: {updatedInput: {command: $c}}}'
fi
exit 0
```
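The round trip this hook performs can be smoke-tested in miniature by extracting the command from a sample PreToolUse payload and rebuilding the reply with `jq` (a sketch; the field names match what the script reads, and `jq` is assumed installed):

```shell
#!/bin/sh
# Sketch: the extract-and-rewrap round trip a PreToolUse hook performs.
# Assumes jq is installed; payload field names match what the hook reads.
payload='{"tool_input":{"command":"npm test"}}'
cmd=$(printf '%s' "$payload" | jq -r '.tool_input.command // empty')
# Building the reply with jq keeps quotes inside $cmd from breaking the JSON.
jq -cn --arg c "($cmd) 2>&1 | tail -80" \
  '{hookSpecificOutput:{updatedInput:{command:$c}}}'
# prints: {"hookSpecificOutput":{"updatedInput":{"command":"(npm test) 2>&1 | tail -80"}}}
```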
**b) `~/.claude/hooks/filter-build-output.sh`**

```bash
#!/bin/bash
# Filters build output to show only errors/warnings, reducing token usage significantly
input=$(cat)
cmd=$(echo "$input" | jq -r '.tool_input.command // empty' 2>/dev/null)
if [ -z "$cmd" ]; then
  exit 0
fi
# Match common build commands
if echo "$cmd" | grep -qE '^(npm run build|npx tsc|cargo build|go build|make|gradle build|mvn compile|dotnet build)'; then
  # Run the build once and capture it, so the error listing and the tail
  # summary come from the same run. grep -i covers error/Error/ERROR etc.
  filtered="out=\$( ($cmd) 2>&1 ); printf '%s\n' \"\$out\" | grep -iE '(error|warn|fail)' | head -60; echo '---'; printf '%s\n' \"\$out\" | tail -5"
  jq -cn --arg c "$filtered" '{hookSpecificOutput: {updatedInput: {command: $c}}}'
fi
exit 0
```
**c) `~/.claude/hooks/filter-log-output.sh`**

```bash
#!/bin/bash
# Filters log file reads to show only errors and recent entries
input=$(cat)
cmd=$(echo "$input" | jq -r '.tool_input.command // empty' 2>/dev/null)
if [ -z "$cmd" ]; then
  exit 0
fi
# Match log reading commands
if echo "$cmd" | grep -qE '(cat|less|tail|head).*\.(log|out|err)'; then
  # Build the reply with jq so quoting inside $cmd stays valid JSON.
  jq -cn --arg c "($cmd) 2>&1 | grep -E '(ERROR|WARN|FATAL|Exception|error|panic|fail)' | tail -50" \
    '{hookSpecificOutput: {updatedInput: {command: $c}}}'
fi
exit 0
```
**d) `~/.claude/hooks/auto-format.sh`**

```bash
#!/bin/bash
# Auto-formats files after Claude edits them, preventing token spend on formatting
input=$(cat)
file=$(echo "$input" | jq -r '.tool_input.file_path // empty' 2>/dev/null)
if [ -z "$file" ]; then
  exit 0
fi
ext="${file##*.}"
case "$ext" in
  js|jsx|ts|tsx|json|css|scss|md|html|vue|svelte)
    if command -v npx &>/dev/null && [ -f "$(git rev-parse --show-toplevel 2>/dev/null)/node_modules/.bin/prettier" ]; then
      npx prettier --write "$file" 2>/dev/null
    fi
    ;;
  py)
    if command -v ruff &>/dev/null; then
      ruff format "$file" 2>/dev/null
    elif command -v black &>/dev/null; then
      black --quiet "$file" 2>/dev/null
    fi
    ;;
  go)
    if command -v gofmt &>/dev/null; then
      gofmt -w "$file" 2>/dev/null
    fi
    ;;
  rs)
    if command -v rustfmt &>/dev/null; then
      rustfmt "$file" 2>/dev/null
    fi
    ;;
  rb)
    if command -v rubocop &>/dev/null; then
      rubocop -a --fail-level error "$file" 2>/dev/null
    fi
    ;;
esac
# Always exit 0 so a formatter's nonzero status isn't reported as a hook error.
exit 0
```
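A quick way to preview the dispatch above is a dry-run variant that only reports which formatter would be chosen for a given file (`formatter_for` is an illustrative name, not part of the hook):

```shell
#!/bin/sh
# Sketch: report which formatter the auto-format hook would pick for a file,
# without running it. formatter_for is an illustrative helper name.
formatter_for() {
  case "${1##*.}" in
    js|jsx|ts|tsx|json|css|scss|md|html|vue|svelte) echo "prettier" ;;
    py) echo "ruff or black" ;;
    go) echo "gofmt" ;;
    rs) echo "rustfmt" ;;
    rb) echo "rubocop" ;;
    *)  echo "none" ;;
  esac
}
formatter_for "src/app.tsx"
```

Note that `${1##*.}` on an extensionless file like `Makefile` returns the whole name, which falls through to `none`, matching the hook's behavior of doing nothing for unknown extensions.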
### Step 4: Wire Hooks into Settings

Add the hook references to `.claude/settings.json`:
```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "~/.claude/hooks/filter-test-output.sh"
          },
          {
            "type": "command",
            "command": "~/.claude/hooks/filter-build-output.sh"
          },
          {
            "type": "command",
            "command": "~/.claude/hooks/filter-log-output.sh"
          }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "~/.claude/hooks/auto-format.sh"
          }
        ]
      }
    ]
  }
}
```
### Step 5: Add Compaction Instructions to CLAUDE.md

If CLAUDE.md exists, append a compaction section (if one doesn't already exist). If CLAUDE.md doesn't exist, create one with just this section:
```markdown
# Compact instructions

When compacting, preserve:
- Code changes and file paths modified
- Architectural decisions and their rationale
- Test results (pass/fail counts, specific failures)
- Current task progress and next steps

Drop:
- Verbose command output already processed
- Exploration dead-ends
- Full file contents (reference by path instead)
- Debugging attempts that didn't lead anywhere
```
### Step 6: Audit CLAUDE.md Size

If CLAUDE.md exists, count its lines and estimate its token cost (approximately words × 1.3). Report:
- Current line count (target: under 200)
- Estimated token cost per session
- Flag any sections that look like they could be derived from code
- Suggest specific lines to remove or move to skills
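The words × 1.3 heuristic above is easy to run directly (a sketch; 1.3 is the rough word-to-token ratio this step already uses, not an exact tokenizer):

```shell
#!/bin/sh
# Sketch: estimate a file's token cost as word count x 1.3 (rough heuristic).
file="${1:-CLAUDE.md}"
if [ -f "$file" ]; then
  lines=$(wc -l < "$file")
  words=$(wc -w < "$file")
  tokens=$(awk -v w="$words" 'BEGIN { printf "%d", w * 1.3 }')
  echo "$file: $lines lines, ~$tokens estimated tokens per session"
else
  echo "$file not found"
fi
```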
### Step 7: Report Summary

Print a summary table:
```text
Token Optimizer Setup Complete
==============================
Settings configured:     .claude/settings.json
Thinking cap:            10,000 tokens (was: unlimited)
Hooks installed:         4 (test filter, build filter, log filter, auto-format)
Permissions allowlisted: N commands
CLAUDE.md:               N lines (target: <200)
Compaction instructions: Added

Estimated savings: 40-70% token reduction

Next steps:
- Run /token-optimize audit periodically to check for drift
- Use /model sonnet for daily work, /model opus only for complex reasoning
- Use /clear between unrelated tasks
- Use /compact proactively when context feels heavy
```
## AUDIT MODE

When the user runs `/token-optimize audit`, check ALL of the following and report findings:
### Checks to Perform
1. **Settings audit**
   - Read `.claude/settings.json` — check for `MAX_THINKING_TOKENS`, `effortLevel`, hooks, permissions
   - Read `~/.claude/settings.json` — check user-level settings
   - Flag if thinking tokens are uncapped
   - Flag if no hooks are configured
   - Flag if permission allowlist is empty
2. **CLAUDE.md audit**
   - Count lines and estimate tokens
   - Flag if over 200 lines
   - Identify sections that could be moved to skills
   - Check for compact instructions
   - Check for redundant or self-evident instructions
3. **MCP server audit**
   - Check `.claude/settings.json` for `mcpServers` configuration
   - Count total configured servers
   - Flag servers that are rarely used or have many tools
   - Recommend CLI alternatives where available (`gh` instead of GitHub MCP, etc.)
4. **Hook health**
   - Verify hook scripts exist at referenced paths
   - Check hook scripts are executable
   - Verify `jq` is installed (required by hooks)
   - Test that hooks produce valid JSON output
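The existence and exec-bit checks above can be scripted against the hook names installed in Step 3 (a sketch; `check_hook` is an illustrative helper name):

```shell
#!/bin/sh
# Sketch: health-check a hook script for existence and the exec bit.
# check_hook is an illustrative helper name, not part of the skill.
check_hook() {
  if [ ! -f "$1" ]; then echo "[WARN] $(basename "$1") missing"
  elif [ ! -x "$1" ]; then echo "[WARN] $(basename "$1") not executable"
  else echo "[OK]   $(basename "$1")"
  fi
}
command -v jq >/dev/null 2>&1 || echo "[WARN] jq not installed (hooks depend on it)"
for h in filter-test-output.sh filter-build-output.sh filter-log-output.sh auto-format.sh; do
  check_hook "$HOME/.claude/hooks/$h"
done
```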
5. **Memory audit**
   - Check MEMORY.md line count (target: under 200)
   - Flag if the memory index is bloated
6. **Context estimate**
   - Estimate baseline context cost: system prompt + CLAUDE.md + MEMORY.md + tool definitions
   - Report how much of the context window is consumed before the user even starts working
### Report Format
```text
Token Optimization Audit
========================

SETTINGS STATUS
  Thinking token cap      [OK]   Set to 10,000
  Effort level            [WARN] Not set (defaults to high)
  Hooks configured        [OK]   4 hooks active
  Permission allowlist    [OK]   12 commands allowed

CLAUDE.md STATUS
  Line count              [OK]   142 lines
  Estimated tokens        [OK]   ~1,800 tokens/session
  Compact instructions    [OK]   Present
  Redundant content       [WARN] 3 sections could be trimmed

MCP SERVERS STATUS
  Configured servers      [OK]   2 servers
  Tool count              [INFO] ~15 tools (deferred)

HOOKS STATUS
  filter-test-output.sh   [OK]   Executable, valid
  filter-build-output.sh  [OK]   Executable, valid
  filter-log-output.sh    [WARN] Missing — run /token-optimize hooks to install
  auto-format.sh          [OK]   Executable, valid

MEMORY STATUS
  MEMORY.md lines         [OK]   47 lines

CONTEXT BUDGET
  Baseline cost (before conversation): ~8,500 tokens
  Available for work:                  ~191,500 tokens (95.8%)

RECOMMENDATIONS
1. Set effortLevel to "medium" in settings.json
2. Remove CLAUDE.md lines 45-62 (standard JS conventions Claude already knows)
3. Install missing log filter hook
```
## HOOKS-ONLY MODE

When the user runs `/token-optimize hooks`:
- Run only Steps 3 and 4 from the setup flow
- Skip settings and CLAUDE.md changes
## SETTINGS-ONLY MODE

When the user runs `/token-optimize settings`:
- Run only Step 2 from the setup flow
- Skip hooks and CLAUDE.md changes
## IMPORTANT RULES
- NEVER overwrite existing settings without asking. Always merge.
- NEVER delete or truncate CLAUDE.md content — only suggest and let the user decide.
- ALWAYS verify hook scripts are executable after creating them.
- ALWAYS check that `jq` is available (hooks depend on it). If missing, warn the user and provide install instructions.
- When in doubt about a framework or test runner, ASK rather than guess.
- Show the user what you're about to write BEFORE writing to settings.json.
- If the user has existing hooks, integrate new hooks alongside them — don't replace.
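For the `jq` rule above, the check-and-advise step might look like this (a sketch; `require_cmd` is an illustrative helper name, and the install commands are the common package-manager ones):

```shell
#!/bin/sh
# Sketch: check that a required command exists; print install hints if not.
# require_cmd is an illustrative helper name, not part of the skill.
require_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "missing: $1 (macOS: brew install $1; Debian/Ubuntu: sudo apt-get install $1)"
    return 1
  fi
}
require_cmd jq || true
```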