smolagents LLM analytics installation
1. Install dependencies (Required)
Full working examples
See the complete Python example on GitHub. If you're using the PostHog SDK wrapper instead of OpenTelemetry, see the Python wrapper example.
Install the OpenTelemetry SDK, the OpenAI instrumentation, and smolagents.

```bash
pip install smolagents openai opentelemetry-sdk "posthog[otel]" opentelemetry-instrumentation-openai-v2
```

Quote the posthog[otel] extra so shells like zsh don't try to expand the square brackets.
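If you want to confirm that everything resolved correctly (and that you installed the v2 OpenAI instrumentation rather than an older package), a quick import check is enough. This is a minimal sketch using only the modules shown in this guide:

```python
# Sanity check: all of these imports should succeed after installation
import smolagents
import openai
from posthog.ai.otel import PostHogSpanProcessor
from opentelemetry.instrumentation.openai_v2 import OpenAIInstrumentor

print("All LLM analytics dependencies are importable")
```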
2. Set up OpenTelemetry tracing (Required)
Configure OpenTelemetry to auto-instrument OpenAI SDK calls and export traces to PostHog. PostHog converts gen_ai.* spans into $ai_generation events automatically.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.resources import Resource, SERVICE_NAME
from posthog.ai.otel import PostHogSpanProcessor
from opentelemetry.instrumentation.openai_v2 import OpenAIInstrumentor

resource = Resource(attributes={
    SERVICE_NAME: "my-app",
    "posthog.distinct_id": "user_123",  # optional: identifies the user in PostHog
    "foo": "bar",  # custom properties are passed through
})

provider = TracerProvider(resource=resource)
provider.add_span_processor(
    PostHogSpanProcessor(
        api_key="<ph_project_token>",
        host="https://us.i.posthog.com",
    )
)
trace.set_tracer_provider(provider)

OpenAIInstrumentor().instrument()
```
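Span processors may buffer and export spans asynchronously, so a short-lived script can exit before everything is delivered. If events don't appear, explicitly flushing the provider before exit is a reasonable safeguard. This is a minimal sketch using the standard OpenTelemetry SDK methods force_flush and shutdown; it makes no assumptions about how PostHogSpanProcessor buffers internally:

```python
# Block until registered span processors have exported any buffered
# spans (or the timeout elapses), then shut the provider down cleanly.
provider.force_flush(timeout_millis=10_000)
provider.shutdown()
```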
3. Run your agent (Required)
Use smolagents as normal. PostHog automatically captures an $ai_generation event for each LLM call made through the OpenAI SDK that smolagents uses internally.

```python
import os
from smolagents import CodeAgent, LiteLLMModel

model = LiteLLMModel(model_id="gpt-4o-mini", api_key=os.environ["OPENAI_API_KEY"])
agent = CodeAgent(tools=[], model=model)

result = agent.run("Tell me a fun fact about hedgehogs")
print(result)
```

Note: If you want to capture LLM events anonymously, omit the posthog.distinct_id resource attribute. See our docs on anonymous vs identified events to learn more.

You can expect captured $ai_generation events to have the following properties:

| Property | Description |
| --- | --- |
| $ai_model | The specific model, like gpt-5-mini or claude-4-sonnet |
| $ai_latency | The latency of the LLM call in seconds |
| $ai_time_to_first_token | Time to first token in seconds (streaming only) |
| $ai_tools | Tools and functions available to the LLM |
| $ai_input | List of messages sent to the LLM |
| $ai_input_tokens | The number of tokens in the input (often found in response.usage) |
| $ai_output_choices | List of response choices from the LLM |
| $ai_output_tokens | The number of tokens in the output (often found in response.usage) |
| $ai_total_cost_usd | The total cost in USD (input + output) |

[...] See full list of properties
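Each instrumented OpenAI call produces its own span. If you'd like all of the generations from one agent run to share a single parent trace, one option is to wrap the run in a custom span yourself. This is a minimal sketch using the standard OpenTelemetry tracer API; the span name run_agent and the agent.task attribute are illustrative, and how PostHog renders a custom parent span may differ from what this page documents:

```python
from opentelemetry import trace

tracer = trace.get_tracer("my-app")

# Child spans created inside this context (including the instrumented
# OpenAI calls the agent makes) share the same trace ID.
with tracer.start_as_current_span("run_agent") as span:
    span.set_attribute("agent.task", "hedgehog_fact")  # illustrative attribute
    result = agent.run("Tell me a fun fact about hedgehogs")
```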
Verify traces and generations (Recommended)

Confirm LLM events are being captured and sent to PostHog: under LLM analytics, you should see rows of data appear in the Traces and Generations tabs.
4. Next steps (Recommended)
Now that you're capturing AI conversations, continue with the resources below to learn what else LLM Analytics enables within the PostHog platform.
| Resource | Description |
| --- | --- |
| Basics | Learn the basics of how LLM calls become events in PostHog. |
| Generations | Read about the $ai_generation event and its properties. |
| Traces | Explore the trace hierarchy and how to use it to debug LLM calls. |
| Spans | Review spans and their role in representing individual operations. |
| Analyze LLM performance | Learn how to create dashboards to analyze LLM performance. |