# Agent Evaluation

{% hint style="info" %}
**Available in Stratix Premium.** This surface is part of the logged-in workspace at [stratix.layerlens.ai](https://stratix.layerlens.ai). Stratix Public users can browse the catalog but cannot use this feature.
{% endhint %}

The Agent Evaluation section in the Premium left rail bundles three surfaces:

* **Traces** — upload, browse, and inspect agent traces
* **Judges** — build and manage LLM-as-a-judge graders
* **Trace evaluations** — score traces with scorers and judges

Together, these surfaces make up the complete agentic-evaluation workflow.

## What's an agent trace?

A trace is a record of a single AI call (or chain of calls): its inputs and outputs, every tool call and span, and the associated latencies, costs, and errors. Stratix's trace pipeline ingests trace JSON and stores it for inspection and evaluation.
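As a rough illustration only (the field names below are hypothetical, not Stratix's actual schema), an ingested trace carries the call-level data plus a list of spans:

```python
# Hypothetical trace shape -- illustrative field names, NOT Stratix's schema.
trace = {
    "trace_id": "tr_123",
    "input": "What's the refund policy?",
    "output": "Refunds are accepted within 30 days.",
    "spans": [
        {
            "span_id": "sp_1",
            "type": "tool_call",   # e.g. a retrieval or API call
            "name": "search_docs",
            "latency_ms": 412,
            "error": None,
        }
    ],
    "cost_usd": 0.0031,
}

# Root-cause inspection hangs off the spans: filter for the tool calls.
tool_calls = [s for s in trace["spans"] if s["type"] == "tool_call"]
print(len(tool_calls))
```

The key idea is that a trace is more than an input/output pair: the span list is what lets a failing verdict be traced back to a specific tool call or decision.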

## The three surfaces

### Traces

Inspect individual traces. See [Traces](/7.-observe-see-whats-happening/traces.md).

### Judges

Build LLM judges for subjective dimensions. See [Judges](/8.-evaluate-score-the-outputs/judges.md).

### Trace evaluations

Run a scoring config (scorers + judges) over a trace set. See [Trace evaluations](/8.-evaluate-score-the-outputs/trace-evaluations.md).

## How they combine for agentic evaluation

A complete agentic evaluation runs in four steps:

1. **Capture trace set** — upload representative traces (or stream them from your application)
2. **Define criteria** — natural-language assertions, deterministic rules, LLM judges
3. **Run trace evaluation** — Stratix grades every trace against every criterion
4. **Read verdict + root-cause** — failure tied back to the trace, span, and decision

## Where to next

* [Traces](/7.-observe-see-whats-happening/traces.md)
* [Judges](/8.-evaluate-score-the-outputs/judges.md)
* [Trace evaluations](/8.-evaluate-score-the-outputs/trace-evaluations.md)
* [Overview: Agentic evaluations](/4.1-general-use-cases/agentic-evals-overview.md)
* [Concept: Agentic evaluation](/8.-evaluate-score-the-outputs/agentic-evaluation.md)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.layerlens.ai/7.-observe-see-whats-happening/agent-evaluation.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
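The question must be URL-encoded before it goes into the `ask` parameter. A minimal sketch of building the request URL with the Python standard library (the question text is just an example):

```python
import urllib.parse

# Example question -- any specific, self-contained natural-language query works.
question = "How do I upload traces in bulk?"

base = "https://docs.layerlens.ai/7.-observe-see-whats-happening/agent-evaluation.md"
url = f"{base}?ask={urllib.parse.quote(question)}"
print(url)

# An HTTP GET on `url` (via urllib.request, requests, or curl) returns the answer.
```

Equivalently, `curl -G <url> --data-urlencode "ask=<question>"` handles the encoding for you.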
