---
name: ai-prompt-manager
description: Expert assistant for managing AI prompts, features, and configuration in the KR92 Bible Voice AI system. Use when creating AI prompts, configuring AI features, managing prompt versions, setting up AI bindings, or working with AI pricing and models. Supports multiple vendors and models for feature flexibility.
---
# AI Prompt Manager

## Quick Start

### Core Workflow
1. Create feature → Define what AI capability is needed
2. Create prompt template → Write system/user prompts with `{{variables}}`
3. Create prompt version → Implement the template (allows versioning)
4. Bind to environment → Connect the prompt version to dev/stage/prod
5. Configure provider → Choose a vendor and model
6. Test via Admin panel → Validate the response and cost

For SQL patterns, see `sql-patterns.md`.
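The same workflow can also be driven programmatically. Below is a minimal sketch using supabase-js; the table names match the schema in the next section, but column names such as `key`, `system_prompt`, `prompt_version_id`, and `environment` are assumptions — verify them against `Docs/context/db-schema-short.md`.

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_KEY!);

// Step 1: register the feature (column names are assumptions)
await supabase
  .from("ai_features")
  .insert({ key: "verse_analysis", description: "Analyze a Bible verse" });

// Steps 2-3: create a prompt version carrying {{variable}} placeholders
const { data: version, error } = await supabase
  .from("ai_prompt_versions")
  .insert({
    system_prompt: "You are a {{role}} assistant for Bible study.",
    user_prompt_template: "Analyze: {{verse_reference}}",
  })
  .select()
  .single();
if (error) throw error;

// Step 4: bind the version to the dev environment
await supabase.from("ai_prompt_bindings").insert({
  prompt_version_id: version!.id,
  environment: "dev",
});
```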
## Database Schema (Essentials)

| Table | Purpose |
|---|---|
| `ai_features` | Feature registry (key, description) |
| `ai_prompt_templates` | Prompt structure (task, name) |
| `ai_prompt_versions` | Prompt variants with `{{variables}}` |
| `ai_prompt_bindings` | Link prompt version to environment |
| `ai_feature_bindings` | Link feature to vendor/model/env |
| `ai_pricing` | Cost per vendor/model |
| `ai_usage_logs` | Track calls, tokens, cost per user |

Full schema: See `Docs/context/db-schema-short.md`.
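At call time, the binding tables resolve which vendor/model serves a feature in a given environment. A hypothetical read-side lookup (the `feature_key`, `vendor`, and `model` column names are assumptions):

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);

// Which vendor/model serves this feature in prod?
const { data: binding } = await supabase
  .from("ai_feature_bindings")
  .select("vendor, model")
  .eq("feature_key", "verse_analysis")
  .eq("environment", "prod")
  .single();

console.log(binding); // e.g. { vendor: "openai", model: "gpt-4o-mini" }
```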
## Prompt Design

### Variable Substitution

Use `{{variable}}` syntax for dynamic content:

```yaml
system_prompt: "You are a {{role}} assistant for Bible study."
user_prompt_template: "Analyze: {{verse_reference}}"
```

```typescript
// At call time
await getPrompt('my_feature', {
  role: 'theological scholar',
  verse_reference: 'John 3:16'
}, 'prod');
```
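For intuition, here is a minimal sketch of how `{{variable}}` substitution could work under the hood. The helper name `renderTemplate` is hypothetical; `getPrompt` above is the real entry point.

```typescript
// Hypothetical helper illustrating {{variable}} substitution.
function renderTemplate(
  template: string,
  variables: Record<string, string>
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_match, key) => {
    if (!(key in variables)) {
      // Fail loudly rather than sending a literal "{{key}}" to the model
      throw new Error(`Missing template variable: ${key}`);
    }
    return variables[key];
  });
}

// Example:
renderTemplate("Analyze: {{verse_reference}}", { verse_reference: "John 3:16" });
// => "Analyze: John 3:16"
```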
### Output Schema

Define the expected output structure to validate responses:

```json
{
  "type": "object",
  "properties": {
    "summary": {"type": "string"},
    "insights": {
      "type": "array",
      "items": {"type": "string"}
    },
    "references": {
      "type": "array",
      "items": {"type": "string"}
    }
  }
}
```
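One way to enforce such a schema at runtime is JSON Schema validation, sketched here with the Ajv library. Ajv is an assumption — the project may validate responses differently.

```typescript
import Ajv from "ajv";

// Mirrors the example schema above.
const outputSchema = {
  type: "object",
  properties: {
    summary: { type: "string" },
    insights: { type: "array", items: { type: "string" } },
    references: { type: "array", items: { type: "string" } },
  },
};

const ajv = new Ajv();
const validate = ajv.compile(outputSchema);

// Parse and validate a raw model response (assumes the model returns JSON text).
function parseModelOutput(raw: string): unknown {
  const parsed = JSON.parse(raw);
  if (!validate(parsed)) {
    throw new Error(`Schema validation failed: ${ajv.errorsText(validate.errors)}`);
  }
  return parsed;
}
```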
### Temperature & Tokens

- Temperature: 0.0–0.3 (factual), 0.4–0.7 (balanced), 0.8–1.0 (creative)
- `max_tokens`: Set based on expected output length:
  - Verse lookup: ~50 tokens
  - Short analysis: ~200 tokens
  - Full commentary: ~1000+ tokens

See `providers.md` for full guidance.
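As an illustration, parameter choices for the feature types above might look like this. The field names are illustrative, not actual `ai_prompt_versions` columns.

```typescript
// Values follow the ranges above; keys are hypothetical feature names.
const exampleParams: Record<string, { temperature: number; max_tokens: number }> = {
  verse_lookup:    { temperature: 0.1, max_tokens: 50 },   // factual, short
  short_analysis:  { temperature: 0.5, max_tokens: 200 },  // balanced
  full_commentary: { temperature: 0.7, max_tokens: 1200 }, // longer output
};
```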
## Vendor & Model Configuration

Vendors: `lovable`, `openai`, `anthropic`, `openrouter`

Models are vendor-specific and change over time. Always:

- Check current availability in the vendor's API docs
- Test in the dev environment first
- Configure pricing in `ai_pricing` before promoting to prod

See `providers.md` for selection strategy.
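A sketch of registering pricing before promotion. The cost column names and figures are illustrative — check the real `ai_pricing` columns first.

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_KEY!);

// Hypothetical column names; verify against Docs/context/db-schema-short.md.
const { error } = await supabase.from("ai_pricing").insert({
  vendor: "openai",
  model: "gpt-4o-mini",
  input_cost_per_1k: 0.00015,  // USD per 1k input tokens (example figure)
  output_cost_per_1k: 0.0006,  // USD per 1k output tokens (example figure)
});
if (error) throw error;
```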
## Environment Strategy

- `dev` – Test new prompts, experiment with vendors
- `stage` – Validate cost estimates, pre-production testing
- `prod` – Stable, cost-optimized features

Always follow the dev → stage → prod progression.
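One possible shape for the final promotion step, copying a validated stage binding to prod. Table and column names are assumptions, and the real promotion flow may differ.

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_KEY!);

// Look up the binding that was validated in stage (column names assumed).
const { data: stageBinding } = await supabase
  .from("ai_prompt_bindings")
  .select("prompt_version_id")
  .eq("environment", "stage")
  .single();

// Create the equivalent prod binding rather than mutating the stage row.
await supabase.from("ai_prompt_bindings").insert({
  prompt_version_id: stageBinding!.prompt_version_id,
  environment: "prod",
});
```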
## Testing AI Features

1. Go to Admin panel → AI section → Testaus ("Testing") tab
2. Select a feature
3. Input test variables
4. Review the response, token counts, and cost estimate
5. Iterate on the prompt if needed
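Outside the Admin panel, the same kind of smoke test could be run programmatically against the orchestrator Edge Function. The function name is inferred from the `ai-orchestrator/` directory mentioned below, and the request body shape is an assumption.

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);

// Hypothetical request body; adapt to the orchestrator's actual contract.
const { data, error } = await supabase.functions.invoke("ai-orchestrator", {
  body: {
    feature: "verse_analysis",
    variables: { verse_reference: "John 3:16" },
    environment: "dev",
  },
});
if (error) throw error;
console.log(data); // inspect the response, token counts, and cost estimate
```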
## Monorepo Integration

AI features work across the workspace apps:

- raamattu-nyt (main Bible app) – Uses most AI features
- widgetizer (embed service) – Limited AI features
- Edge Functions – Orchestrate calls in `ai-orchestrator/`

All share the same `bible_schema` database and configuration.
## Skills Handoff

- Quota/plan limits → See the `subscription-system` skill
- Cost optimization → See the `performance-auditor` skill
- Edge Functions → See the `edge-function-generator` skill
## References

- `sql-patterns.md` – Common SQL workflows
- `providers.md` – Vendor/model selection and parameters
- `Docs/06-AI-ARCHITECTURE.md` – Full system design
- `Docs/context/db-schema-short.md` – Database schema details
- `Docs/context/supabase-map.md` – Edge Functions & access matrix