# Traces and spans

A **trace** is the record of an AI call (or a chain of calls). A **span** is one logical unit of work inside that trace.

## Why they matter

Single-output evaluation isn't enough for agents. You need to see every tool call, every retrieval, every nested LLM call to understand what actually happened. Traces are the data structure that makes that visible.

## Shape

```json
{
  "id": "trace-123",
  "name": "answer-question",
  "inputs": {"prompt": "..."},
  "outputs": {"response": "..."},
  "started_at": "...",
  "duration_ms": 1234,
  "spans": [
    {
      "name": "retrieval",
      "kind": "tool",
      "inputs": {...},
      "outputs": {...},
      "duration_ms": 200
    },
    {
      "name": "llm-call",
      "kind": "llm",
      "model": "claude-opus-4-7",
      "inputs": {...},
      "outputs": {...},
      "duration_ms": 850
    }
  ]
}
```
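The shape above can be assembled in plain Python before posting. This is a minimal sketch, not the SDK's actual API: the `new_span` and `new_trace` helpers are hypothetical, and the `id`/`started_at` formats are assumptions based on the example payload.

```python
import time
import uuid

def new_span(name, kind, inputs, outputs, duration_ms):
    """Build a span dict matching the shape shown above (hypothetical helper)."""
    return {
        "name": name,
        "kind": kind,
        "inputs": inputs,
        "outputs": outputs,
        "duration_ms": duration_ms,
    }

def new_trace(name, inputs, outputs, spans):
    """Build a trace dict; id and started_at are generated client-side here."""
    return {
        "id": f"trace-{uuid.uuid4().hex[:8]}",
        "name": name,
        "inputs": inputs,
        "outputs": outputs,
        "started_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        # Sum of span durations as a rough total; real traces may measure wall clock.
        "duration_ms": sum(s["duration_ms"] for s in spans),
        "spans": spans,
    }

trace = new_trace(
    "answer-question",
    {"prompt": "What is a span?"},
    {"response": "One unit of work."},
    [
        new_span("retrieval", "tool", {"query": "span"}, {"docs": 3}, 200),
        new_span("llm-call", "llm", {"prompt": "..."}, {"text": "..."}, 850),
    ],
)
```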

## Span kinds

* **llm** — a call to an LLM
* **tool** — a tool invocation (retrieval, API call, function call)
* **retrieval** — a vector or keyword retrieval
* **chain** — a logical group of nested calls
* **other** — anything else
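Span kinds make traces easy to slice. As a sketch (the `spans_of_kind` helper is hypothetical, not an SDK function), filtering a trace payload by kind is a one-liner:

```python
trace = {
    "spans": [
        {"name": "retrieval", "kind": "tool", "duration_ms": 200},
        {"name": "llm-call", "kind": "llm", "duration_ms": 850},
    ]
}

def spans_of_kind(trace, kind):
    """Return the spans in a trace with the given kind (llm, tool, retrieval, chain, other)."""
    return [s for s in trace["spans"] if s.get("kind") == kind]

llm_spans = spans_of_kind(trace, "llm")
```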

## Trace evaluation

A trace evaluation runs a scoring config (scorers + judges) over a trace set. See [Trace evaluations](/8.-evaluate-score-the-outputs/trace-evaluations.md).

## Best practices

* **Capture the whole chain.** A trace with only the top-level call is easier to grade, but without the inner spans you can't root-cause failures.
* **Annotate spans.** Add tags (e.g., `production`, `ab-test-v2`) to make filtering tractable later.
* **Capture latency and cost per span.** They're free to ingest; they're invaluable for diagnosis.
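The last two practices can be sketched in a few lines. This is illustrative only: the `tags` and `cost_usd` field names, and the `tag_span`/`total_cost_usd` helpers, are assumptions rather than documented SDK surface.

```python
def tag_span(span, *tags):
    """Attach filter tags (e.g. 'production', 'ab-test-v2') to a span dict."""
    span.setdefault("tags", []).extend(tags)
    return span

def total_cost_usd(trace):
    """Sum per-span cost; spans without a cost field count as zero."""
    return sum(s.get("cost_usd", 0.0) for s in trace["spans"])

trace = {
    "spans": [
        tag_span({"name": "retrieval", "kind": "tool",
                  "duration_ms": 200, "cost_usd": 0.0004}, "production"),
        tag_span({"name": "llm-call", "kind": "llm",
                  "duration_ms": 850, "cost_usd": 0.0120}, "production", "ab-test-v2"),
    ]
}
```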

## Span-level evaluation

Some scorers and judges are configured per-span — you can ask a deterministic rule "did any span call the destructive API outside the allowed scope?" This is what makes deterministic rules powerful for agents.
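A deterministic span rule like the one described can be written as a pure function over the trace payload. A minimal sketch, assuming a hypothetical tool name (`delete_records`) and a `scope` field in the span inputs; real scorers would be registered through the scoring config, not run like this:

```python
ALLOWED_SCOPES = {"read"}  # hypothetical policy for the destructive API

def violates_scope(trace, destructive_api="delete_records"):
    """True if any tool span called the destructive API outside the allowed scope."""
    for span in trace.get("spans", []):
        if span.get("kind") != "tool":
            continue
        if (span.get("name") == destructive_api
                and span.get("inputs", {}).get("scope") not in ALLOWED_SCOPES):
            return True
    return False

ok_trace = {"spans": [
    {"name": "delete_records", "kind": "tool", "inputs": {"scope": "read"}},
]}
bad_trace = {"spans": [
    {"name": "delete_records", "kind": "tool", "inputs": {"scope": "write"}},
]}
```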

## Formal schema

The trace payload is specified by a [JSON Schema artifact](/13.1-sdk-and-apis/trace-schema.md). Validate client-side before posting.
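In practice you would validate against the real JSON Schema artifact linked above; as a stand-in sketch, a stdlib-only required-key check over the fields shown in the Shape section looks like this (the key sets below are inferred from the example payload, not copied from the schema):

```python
REQUIRED_TRACE_KEYS = {"id", "name", "inputs", "outputs",
                       "started_at", "duration_ms", "spans"}
REQUIRED_SPAN_KEYS = {"name", "kind", "inputs", "outputs", "duration_ms"}

def validation_errors(payload):
    """Return a list of missing-key errors; an empty list means this check passed."""
    errors = [f"trace missing: {k}"
              for k in sorted(REQUIRED_TRACE_KEYS - payload.keys())]
    for i, span in enumerate(payload.get("spans", [])):
        errors += [f"span[{i}] missing: {k}"
                   for k in sorted(REQUIRED_SPAN_KEYS - span.keys())]
    return errors

bad_payload = {"name": "answer-question", "inputs": {}, "outputs": {},
               "started_at": "...", "duration_ms": 0,
               "spans": [{"name": "llm-call", "kind": "llm"}]}
```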

## Where to next

* [Stratix Premium — Traces](/7.-observe-see-whats-happening/traces.md)
* [Trace evaluations](/8.-evaluate-score-the-outputs/trace-evaluations.md)
* [Continuous evaluation](/7.-observe-see-whats-happening/continuous-evaluation.md)
* [Agentic evaluation](/8.-evaluate-score-the-outputs/agentic-evaluation.md)

