Distributed Tracing
Visualize the full execution tree of your AI agents — every LLM call, tool call, and retrieval step.
What is Tracing?
A trace represents a single end-to-end execution of your agent. Each trace contains one or more spans — individual steps like LLM calls, tool invocations, or retrieval operations.
Traces give you full observability into:
- Execution order and nesting (parent → child spans)
- Timing and duration of every step
- Token usage and cost per LLM call
- Input/output data for debugging
- Error tracking with stack traces
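The data above can be sketched as a pair of record types. These field names are hypothetical, chosen only to illustrate the shape of a trace and its spans; the SDK's actual schema may differ.

```typescript
// Hypothetical shapes for illustration; the real SDK's schema may differ.
interface Span {
  spanId: string;
  traceId: string;
  parentSpanId?: string;                     // nesting: parent -> child
  name: string;                              // e.g. "llm.call", "tool.search"
  startTime: number;                         // epoch ms
  endTime?: number;                          // set when the span ends
  input?: unknown;
  output?: unknown;
  tokens?: { prompt: number; completion: number };
  costUsd?: number;
  error?: { message: string; stack?: string };
}

interface Trace {
  traceId: string;
  name: string;
  input?: unknown;
  output?: unknown;
  status: "ok" | "error";
  spans: Span[];
}

// Example: a trace containing a single LLM-call span
const llmSpan: Span = {
  spanId: "s1",
  traceId: "t1",
  name: "llm.call",
  startTime: 1000,
  endTime: 1450,
  tokens: { prompt: 120, completion: 45 },
  costUsd: 0.0021,
};

const trace: Trace = {
  traceId: "t1",
  name: "answer-question",
  status: "ok",
  spans: [llmSpan],
};
```

Duration never needs to be stored: it falls out of endTime minus startTime (450 ms here).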
Quick Example
Wrap your agent execution in a trace, then create spans for each step, as outlined below.
Start a trace
Call startTrace() with a name and optional input. The SDK stores the trace ID internally.
Create spans for each step
Call startSpan() for each LLM call, tool call, or retrieval. Spans auto-link to the active trace.
End spans with results
Call endSpan() with output, token counts, and cost. The platform calculates duration automatically.
End the trace
Call endTrace() with the final output and status.
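The four steps above can be sketched end to end. This section does not show the SDK's actual signatures, so everything below — the parameter lists and the in-memory implementations of startTrace/startSpan/endSpan/endTrace — is a hypothetical stand-in that only illustrates the call order and how spans auto-link to the active trace.

```typescript
// Hypothetical in-memory tracer; the real SDK manages IDs, storage, and export.
type Span = {
  id: string;
  traceId: string;
  name: string;
  start: number;
  end?: number;
  output?: unknown;
  tokens?: number;
  costUsd?: number;
};
type Trace = {
  id: string;
  name: string;
  input?: unknown;
  output?: unknown;
  status?: "ok" | "error";
  spans: Span[];
};

let activeTrace: Trace | undefined; // the SDK stores the trace ID internally
let nextId = 0;

function startTrace(name: string, input?: unknown): Trace {
  activeTrace = { id: `t${++nextId}`, name, input, spans: [] };
  return activeTrace;
}

function startSpan(name: string): Span {
  if (!activeTrace) throw new Error("no active trace");
  const span: Span = { id: `s${++nextId}`, traceId: activeTrace.id, name, start: Date.now() };
  activeTrace.spans.push(span); // spans auto-link to the active trace
  return span;
}

function endSpan(span: Span, output?: unknown, tokens?: number, costUsd?: number): void {
  span.end = Date.now(); // duration is derived from start/end timestamps
  span.output = output;
  span.tokens = tokens;
  span.costUsd = costUsd;
}

function endTrace(output: unknown, status: "ok" | "error" = "ok"): Trace | undefined {
  if (!activeTrace) return undefined;
  activeTrace.output = output;
  activeTrace.status = status;
  const finished = activeTrace;
  activeTrace = undefined;
  return finished;
}

// Usage: one trace wrapping an LLM call and a tool call
startTrace("answer-question", { question: "What is tracing?" });
const llm = startSpan("llm.call");
endSpan(llm, "A trace is ...", 165, 0.002);
const tool = startSpan("tool.search");
endSpan(tool, ["doc1", "doc2"]);
const done = endTrace("final answer", "ok");
```

The key design point the sketch preserves: callers never pass trace IDs around — the tracer tracks the active trace itself, which is what lets spans link automatically.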
Viewing Traces
Go to Dashboard → Traces to see all traces. Click any trace to see the Tree View (execution graph) or Waterfall (timeline). Click any span for full input/output details.
Events logged during a run include trace_id and span_id fields, so you can correlate them with the exact execution step that generated them.
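Correlation then reduces to a join on those two fields. The log-event shape below is hypothetical, but it shows how carrying trace_id and span_id on every event lets you recover the events for any given span:

```typescript
// Hypothetical structured log entry carrying correlation IDs.
interface LogEvent {
  level: "info" | "error";
  message: string;
  trace_id: string;
  span_id: string;
}

const logs: LogEvent[] = [
  { level: "info", message: "tool returned 3 results", trace_id: "t1", span_id: "s2" },
  { level: "error", message: "rate limited", trace_id: "t1", span_id: "s3" },
];

// Correlate: find every event emitted by a given span.
function eventsForSpan(events: LogEvent[], spanId: string): LogEvent[] {
  return events.filter((e) => e.span_id === spanId);
}
```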