Add Model Price
Guide for adding or updating model pricing entries in Langfuse. Use this when
editing worker/src/constants/default-model-prices.json,
packages/shared/src/server/llm/types.ts, model matchPattern values,
tokenizer IDs, or pricing tiers.
Purpose
This guide keeps model pricing changes consistent across providers and runtime surfaces so Langfuse can calculate token costs accurately.
How to Use This Skill
- Read references/schema-and-tiers.md for the JSON shape and pricing-tier rules.
- Read references/provider-sources-and-price-keys.md for official pricing URLs, per-token conversion, and provider-specific usage keys.
- Read references/match-patterns.md when you need to add or expand regex coverage.
- Read references/workflow-and-validation.md for the end-to-end edit workflow, validation rules, and common mistakes.
Deterministic Helpers
- Validate the pricing file:
  node .agents/skills/add-model-price/scripts/validate-pricing-file.mjs
- Test a regex directly:
  node .agents/skills/add-model-price/scripts/test-match-pattern.mjs --pattern '(?i)^(openai/)?(gpt-4o)$' --accept gpt-4o openai/gpt-4o --reject gpt-4o-mini
- Test the regex for an existing model entry:
  node .agents/skills/add-model-price/scripts/test-match-pattern.mjs --model gpt-4o --accept gpt-4o openai/gpt-4o --reject gpt-4o-mini
Quick Start Checklist
Adding a New Model
- Gather official pricing from the provider documentation
- Generate a lowercase UUID for the model entry
- Create a `matchPattern` that covers supported provider formats
- Add at least one default pricing tier
- Insert the pricing entry into `worker/src/constants/default-model-prices.json`
- Update `packages/shared/src/server/llm/types.ts` if the model should be selectable in playground or evaluation flows
- Validate the JSON after editing
Updating an Existing Model
- Update the relevant prices, keys, tiers, or regexes
- Refresh `updatedAt` to today's ISO-8601 timestamp
- Validate the JSON after editing
Target Files
- Pricing data: `worker/src/constants/default-model-prices.json`
- Shared model types: `packages/shared/src/server/llm/types.ts`
- Validation logic: `packages/shared/src/features/model-pricing/validation.ts`
- Matching logic: `packages/shared/src/server/pricing-tiers/matcher.ts`
- Tests: `worker/src/__tests__/pricing-tier-matcher.test.ts`
Data Structure
Complete Model Entry Schema
{
"id": "uuid-generated-with-uuidgen",
"modelName": "model-name-identifier",
"matchPattern": "(?i)^regex-pattern$",
"createdAt": "ISO-8601-timestamp",
"updatedAt": "ISO-8601-timestamp",
"tokenizerConfig": null,
"tokenizerId": "claude|openai|null",
"pricingTiers": [
{
"id": "model-uuid_tier_default",
"name": "Standard",
"isDefault": true,
"priority": 0,
"conditions": [],
"prices": {
"input": 0.000005,
"output": 0.000025
}
}
]
}
Required Fields
| Field | Type | Description |
|---|---|---|
| `id` | string | Unique lowercase UUID |
| `modelName` | string | Primary model identifier |
| `matchPattern` | string | Regex for matching model names |
| `createdAt` | string | ISO-8601 timestamp set on creation |
| `updatedAt` | string | ISO-8601 timestamp refreshed whenever the entry changes |
| `pricingTiers` | array | At least one pricing tier |
Optional Fields
| Field | Type | Default | Description |
|---|---|---|---|
| `tokenizerId` | string | null | `"claude"`, `"openai"`, or null |
| `tokenizerConfig` | object | null | Custom tokenizer settings |
Pricing Tier Structure
Default Tier
Every model must have exactly one default tier:
{
"id": "{model-id}_tier_default",
"name": "Standard",
"isDefault": true,
"priority": 0,
"conditions": [],
"prices": {}
}
Rules for the default tier:
- `isDefault` must be `true`
- `priority` must be `0`
- `conditions` must be `[]`
Additional Tiers
Use extra tiers for context-window or usage-based pricing:
{
"id": "uuid-for-tier",
"name": "Large Context (>200K)",
"isDefault": false,
"priority": 1,
"conditions": [
{
"usageDetailPattern": "(input|prompt|cached)",
"operator": "gt",
"value": 200000,
"caseSensitive": false
}
],
"prices": {}
}
Supported condition operators: gt, gte, lt, lte, eq, neq
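The condition semantics above can be sketched as follows. This is an illustrative approximation only; `conditionMatches` and `OPS` are hypothetical names, and the authoritative logic lives in packages/shared/src/server/pricing-tiers/matcher.ts:

```javascript
// Illustrative sketch of tier-condition evaluation (assumed semantics, not
// the real matcher from packages/shared/src/server/pricing-tiers/matcher.ts).
const OPS = {
  gt: (a, b) => a > b,
  gte: (a, b) => a >= b,
  lt: (a, b) => a < b,
  lte: (a, b) => a <= b,
  eq: (a, b) => a === b,
  neq: (a, b) => a !== b,
};

// A condition matches when some usage-detail key matches usageDetailPattern
// and its value satisfies the operator against the threshold.
function conditionMatches(condition, usageDetails) {
  const flags = condition.caseSensitive ? "" : "i";
  const pattern = new RegExp(condition.usageDetailPattern, flags);
  return Object.entries(usageDetails).some(
    ([key, value]) =>
      pattern.test(key) && OPS[condition.operator](value, condition.value),
  );
}

console.log(
  conditionMatches(
    {
      usageDetailPattern: "(input|prompt|cached)",
      operator: "gt",
      value: 200000,
      caseSensitive: false,
    },
    { input: 250000, output: 1200 },
  ),
); // true: the input count exceeds the 200K threshold
```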
Official Pricing Sources
Always fetch pricing from the provider's official docs before editing. Do not infer or estimate missing values.
| Provider | Source |
|---|---|
| Anthropic Claude | https://platform.claude.com/docs/en/about-claude/pricing |
| OpenAI | https://openai.com/api/pricing/ |
| Google Gemini | https://ai.google.dev/pricing |
| AWS Bedrock | https://aws.amazon.com/bedrock/pricing/ |
| Azure OpenAI | https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/ |
Gather:
- Base input token price per million tokens
- Output token price per million tokens
- Cache write price when supported
- Cache read price when supported
- Any long-context pricing tiers
- All model ID formats that Langfuse should match
Price Conversion
Values in default-model-prices.json are per token, not per million tokens.
| Provider Price | JSON Value |
|---|---|
| $5 / MTok | 5e-6 |
| $25 / MTok | 25e-6 |
| $0.50 / MTok | 0.5e-6 |
| $6.25 / MTok | 6.25e-6 |
Formula:
price_per_token = price_per_mtok / 1_000_000
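The formula is a straight division; a minimal helper (the `perToken` name is illustrative, not part of the codebase) makes the conversion table above mechanical:

```javascript
// Convert a provider price quoted per million tokens (MTok) into the
// per-token value stored in default-model-prices.json.
function perToken(pricePerMTok) {
  return pricePerMTok / 1_000_000;
}

console.log(perToken(5));    // 0.000005 (i.e. 5e-6)
console.log(perToken(0.5));  // 5e-7     (i.e. 0.5e-6)
```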
Common Price Keys by Provider
Anthropic Claude Models
{
"input": "<base_input_price>",
"input_tokens": "<base_input_price>",
"output": "<output_price>",
"output_tokens": "<output_price>",
"cache_creation_input_tokens": "<cache_write_price>",
"input_cache_creation": "<cache_write_price>",
"cache_read_input_tokens": "<cache_read_price>",
"input_cache_read": "<cache_read_price>"
}
OpenAI Models
{
"input": "<input_price>",
"input_cached_tokens": "<cached_input_price>",
"input_cache_read": "<cached_input_price>",
"output": "<output_price>"
}
Google Gemini Models
{
"input": "<input_price>",
"input_modality_1": "<input_price>",
"prompt_token_count": "<input_price>",
"promptTokenCount": "<input_price>",
"input_cached_tokens": "<cached_price>",
"cached_content_token_count": "<cached_price>",
"output": "<output_price>",
"output_modality_1": "<output_price>",
"candidates_token_count": "<output_price>",
"candidatesTokenCount": "<output_price>"
}
Match Pattern Examples
Anthropic Claude: API + Bedrock + Vertex
(?i)^(anthropic\/)?(claude-opus-4-6|(eu\\.|us\\.|apac\\.)?anthropic\\.claude-opus-4-6-v1(:0)?|claude-opus-4-6)$
Matches:
- claude-opus-4-6
- anthropic/claude-opus-4-6
- anthropic.claude-opus-4-6-v1:0
- us.anthropic.claude-opus-4-6-v1:0
With Version Date
(?i)^(anthropic\/)?(claude-opus-4-5-20251101|(eu\\.|us\\.|apac\\.)?anthropic\\.claude-opus-4-5-20251101-v1:0|claude-opus-4-5@20251101)$
OpenAI
(?i)^(openai\/)?(gpt-4o)$
Google Gemini
(?i)^(google\/)?(gemini-2\.5-pro)$
Pattern Components
| Component | Purpose | Example |
|---|---|---|
| `(?i)` | Case-insensitive match | gpt-4o and GPT-4O |
| `^...$` | Full-string match | Avoids partial matches |
| `(provider\/)?` | Optional provider prefix | openai/gpt-4o |
| `(eu\.\|us\.\|apac\.)?` | Optional Bedrock region prefix | us.anthropic.claude-opus-4-6-v1:0 |
| `(:0)?` | Optional version suffix | Bedrock model versions |
| `@date` | Vertex AI version format | claude-3-5-sonnet@20240620 |
Step-by-Step Workflow
1. Fetch Official Pricing
Open the official provider pricing page and capture the model's input, output, cache write, and cache read prices.
2. Generate a Lowercase UUID
uuidgen | tr '[:upper:]' '[:lower:]'
The pricing file expects lowercase UUIDs, so pipe the output through tr rather than lowercasing by hand.
3. Create the JSON Entry
Example for a model with $5 input, $25 output, $6.25 cache write, and $0.50 cache read:
{
"id": "13458bc0-1c20-44c2-8753-172f54b67647",
"modelName": "claude-opus-4-6",
"matchPattern": "(?i)^(anthropic\/)?(claude-opus-4-6|(eu\\.|us\\.|apac\\.)?anthropic\\.claude-opus-4-6-v1(:0)?|claude-opus-4-6)$",
"createdAt": "2026-03-09T00:00:00.000Z",
"updatedAt": "2026-03-09T00:00:00.000Z",
"tokenizerConfig": null,
"tokenizerId": "claude",
"pricingTiers": [
{
"id": "13458bc0-1c20-44c2-8753-172f54b67647_tier_default",
"name": "Standard",
"isDefault": true,
"priority": 0,
"conditions": [],
"prices": {
"input": 5e-6,
"input_tokens": 5e-6,
"output": 25e-6,
"output_tokens": 25e-6,
"cache_creation_input_tokens": 6.25e-6,
"input_cache_creation": 6.25e-6,
"cache_read_input_tokens": 0.5e-6,
"input_cache_read": 0.5e-6
}
}
]
}
4. Insert the Entry
Add the entry to the JSON array in
worker/src/constants/default-model-prices.json. Keep related models grouped
together.
5. Update Shared Model Types When Needed
If the model should be available in the playground or LLM-as-judge flows, add
it to the correct array in packages/shared/src/server/llm/types.ts.
Model arrays include:
- `anthropicModels`
- `openAIModels`
- `vertexAIModels`
- `googleAIStudioModels`
Do not add a new model as the first entry in one of these arrays. The first entry is used as a default model in some test or evaluation paths and newer models may not be available to all users yet.
6. Validate the Change
jq . worker/src/constants/default-model-prices.json > /dev/null
You can also inspect a specific entry:
jq '.[] | select(.modelName == "claude-opus-4-6")' worker/src/constants/default-model-prices.json
Multi-Tier Example
For models with long-context pricing:
{
"id": "uuid-here",
"modelName": "model-name",
"matchPattern": "...",
"pricingTiers": [
{
"id": "uuid-here_tier_default",
"name": "Standard",
"isDefault": true,
"priority": 0,
"conditions": [],
"prices": {
"input": 5e-6,
"output": 25e-6
}
},
{
"id": "uuid-for-large-context-tier",
"name": "Large Context (>200K)",
"isDefault": false,
"priority": 1,
"conditions": [
{
"usageDetailPattern": "(input|prompt|cached)",
"operator": "gt",
"value": 200000,
"caseSensitive": false
}
],
"prices": {
"input": 10e-6,
"output": 37.5e-6
}
}
]
}
Validation Rules
- Exactly one default tier must have `isDefault: true`
- The default tier must have `priority: 0`
- The default tier must have `conditions: []`
- Non-default tiers must have `priority > 0`
- Non-default tiers must have at least one condition
- Priorities must be unique within a model
- Tier names must be unique within a model
- Each tier must contain at least one price
- All tiers must expose the same usage-type keys
- Regex patterns must be valid and safe
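Several of these rules can be checked mechanically. The sketch below covers a subset; `checkTiers` is a hypothetical name, and the authoritative checks live in packages/shared/src/features/model-pricing/validation.ts and the validate-pricing-file.mjs helper:

```javascript
// Partial sketch of the tier rules above (illustrative only).
function checkTiers(entry) {
  const errors = [];
  const defaults = entry.pricingTiers.filter((t) => t.isDefault);
  if (defaults.length !== 1) errors.push("exactly one default tier required");
  const d = defaults[0];
  if (d && d.priority !== 0) errors.push("default tier priority must be 0");
  if (d && d.conditions.length !== 0)
    errors.push("default tier must have no conditions");
  for (const t of entry.pricingTiers) {
    if (!t.isDefault && t.priority <= 0)
      errors.push(`tier "${t.name}" needs priority > 0`);
    if (!t.isDefault && t.conditions.length === 0)
      errors.push(`tier "${t.name}" needs at least one condition`);
    if (Object.keys(t.prices).length === 0)
      errors.push(`tier "${t.name}" has no prices`);
  }
  const priorities = entry.pricingTiers.map((t) => t.priority);
  if (new Set(priorities).size !== priorities.length)
    errors.push("priorities must be unique");
  return errors;
}
```

Running it against a candidate entry before committing catches the most common structural mistakes early.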
Common Mistakes
Guessing Instead of Using Official Pricing
Wrong:
{
"cache_creation_input_tokens": "input_price * 1.25"
}
Correct:
{
"cache_creation_input_tokens": 6.25e-6
}
Using MTok Values Directly
Wrong:
{
"input": 5
}
Correct:
{
"input": 5e-6
}
Missing the Default Tier Suffix
Wrong:
{
"id": "some-uuid"
}
Correct:
{
"id": "model-uuid_tier_default"
}
Invalid Regex Escaping
Wrong:
{
"matchPattern": "anthropic.claude"
}
Correct:
{
"matchPattern": "anthropic\\.claude"
}
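The escaping matters because an unescaped dot matches any character, so the "wrong" pattern above silently over-matches:

```javascript
// An unescaped dot matches ANY character, so "anthropic.claude" is broader
// than intended; "anthropic\\." pins it to a literal dot.
console.log(new RegExp("anthropic.claude").test("anthropicXclaude"));   // true (over-match)
console.log(new RegExp("anthropic\\.claude").test("anthropicXclaude")); // false
console.log(new RegExp("anthropic\\.claude").test("anthropic.claude")); // true
```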
Forgetting to Update updatedAt
Wrong:
{
"updatedAt": "2025-12-12T15:00:06.513Z"
}
Correct:
{
"updatedAt": "2026-03-09T00:00:00.000Z"
}
Testing Model Matching
After adding a model, verify that the regex matches the intended provider variants. Note that JavaScript's RegExp does not support the inline (?i) flag, so strip it and pass the "i" flag instead:
const pattern = new RegExp(matchPattern.replace(/^\(\?i\)/, ""), "i");
console.log(pattern.test("claude-opus-4-6")); // true
console.log(pattern.test("anthropic/claude-opus-4-6")); // true
console.log(pattern.test("anthropic.claude-opus-4-6-v1:0")); // true
console.log(pattern.test("us.anthropic.claude-opus-4-6-v1:0")); // true
Existing Model Templates
Use nearby entries as templates:
- `claude-opus-4-5-20251101` for Anthropic multi-provider patterns
- `gpt-4o` for a simple OpenAI pattern
- `gemini-2.5-pro` for a multi-tier Gemini entry