# Trace evaluations

{% hint style="info" %}
**Available in Stratix Premium.** This surface is part of the logged-in workspace at [stratix.layerlens.ai](https://stratix.layerlens.ai). Stratix Public users can browse the catalog but cannot use this feature.
{% endhint %}

A trace evaluation runs a scoring config (scorers + judges) over a trace set. Use trace evaluations for:

* **Pre- and post-deployment agentic evaluation** — score a curated trace set before a change ships, and again after it lands
* **Continuous evaluation** — score live production traces on a recurring schedule
* **Ad-hoc inspection** — grade a recent batch you just ingested

URL: [`stratix.layerlens.ai/dashboard/agent-evaluation/trace-evaluations`](https://stratix.layerlens.ai/dashboard/agent-evaluation/trace-evaluations)

## What you can do

* Create a trace evaluation against a trace set
* Apply scorers and judges (any number, any combination)
* Run on demand or on a recurring schedule
* Browse past runs
* Compare runs to a baseline
* Get notified when scores cross thresholds

## Creating a trace evaluation

1. **Pick the trace set** — a saved set, a filter, or a single trace
2. **Pick the scoring config** — scorers + judges
3. **Pick the schedule** — one-off, daily, hourly
4. **Configure thresholds** (optional) — alert when a dimension drops below X
5. **Run / save**
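
The threshold step above is plain comparison logic. As a minimal sketch (the dimension names and threshold values here are hypothetical examples, not part of the Stratix API):

```python
def breached(scores: dict[str, float], thresholds: dict[str, float]) -> list[str]:
    """Return the dimensions whose score dropped below the configured floor."""
    return [dim for dim, floor in thresholds.items() if scores.get(dim, 0.0) < floor]

# Hypothetical per-dimension scores from one run, and alert floors:
scores = {"helpfulness": 0.91, "groundedness": 0.72, "safety": 0.99}
thresholds = {"groundedness": 0.80, "safety": 0.95}
print(breached(scores, thresholds))  # → ['groundedness']
```

A dimension with no configured threshold never alerts, which matches thresholds being optional.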

## Reading the results

Each trace evaluation result page shows:

* Top-line score per dimension
* Per-trace verdicts
* Failed traces with span-level root-cause links
* Score-over-time chart (for recurring runs)
* Comparison to baseline (most recent prior successful run)
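
The baseline comparison amounts to a per-dimension delta against the most recent prior successful run. A sketch of that computation (the dimension names and scores are hypothetical):

```python
def deltas(current: dict[str, float], baseline: dict[str, float]) -> dict[str, float]:
    """Per-dimension change vs the baseline run (positive = improved)."""
    return {dim: round(current[dim] - baseline.get(dim, 0.0), 3) for dim in current}

# Hypothetical top-line scores for the baseline run and the current run:
baseline = {"helpfulness": 0.88, "groundedness": 0.81}
current = {"helpfulness": 0.91, "groundedness": 0.72}
print(deltas(current, baseline))  # → {'helpfulness': 0.03, 'groundedness': -0.09}
```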

## Pre-deploy vs continuous patterns

**Pre-deploy:** the trace set is a curated test set. Run it against a candidate change and compare to the last successful run.

**Continuous:** the trace set is your production stream. Run on a schedule and watch the trend.

## Agentic evaluation

For pre- and post-deployment agentic evaluations, the trace evaluation is the runtime. The criteria mix natural-language assertions, deterministic rules, and judges into one config — see [Concept: Agentic evaluation](/8.-evaluate-score-the-outputs/agentic-evaluation.md).

## Where to next

* [Concept: Continuous evaluation](/7.-observe-see-whats-happening/continuous-evaluation.md)
* [Tutorial: Score live traces](/8.-evaluate-score-the-outputs/04-score-traces.md)
* [Traces](/7.-observe-see-whats-happening/traces.md)
* [Judges](/8.-evaluate-score-the-outputs/judges.md)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.layerlens.ai/8.-evaluate-score-the-outputs/trace-evaluations.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
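
For example, the question can be URL-encoded and appended as the `ask` parameter with Python's standard library (a sketch; only the URL pattern comes from this page, and the question text is illustrative):

```python
from urllib.parse import urlencode

BASE = "https://docs.layerlens.ai/8.-evaluate-score-the-outputs/trace-evaluations.md"

def ask_url(question: str) -> str:
    """Build the GET URL that queries this documentation page."""
    return f"{BASE}?{urlencode({'ask': question})}"

# An HTTP GET on this URL returns a direct answer with excerpts and sources:
print(ask_url("How often can a continuous evaluation run?"))
```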
