# Continuous evaluation

Continuous evaluation scores live production traces on a recurring schedule. Pre-deployment evaluations catch regressions before merge; continuous evaluation catches them after release, on real traffic.

## Why it matters

* Production traffic shifts; what worked in pre-deploy may fail in the wild
* Model providers update their backends; your scores may move quietly
* Prompt or feature changes that slipped through CI surface here

Without continuous evaluation, AI quality is a post-incident discovery. With it, regressions surface before customer-facing impact.

## The shape

1. **Ingest live traces** — from your application, via SDK or API
2. **Define a trace evaluation** — scorers + judges
3. **Schedule it** — daily, hourly, per-batch
4. **Watch the trend** — score-over-time per dimension
5. **Get notified on drift** — thresholds, alerts
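The steps above can be sketched as a single scheduled run. This is an illustrative stdlib-only sketch, not the LayerLens SDK: `Trace`, `length_scorer`, and `evaluate_batch` are hypothetical names standing in for your ingested traces, a deterministic scorer, and one scheduled evaluation pass.

```python
import statistics
from dataclasses import dataclass

# Hypothetical trace shape -- the real SDK types will differ.
@dataclass
class Trace:
    input: str
    output: str

def length_scorer(trace: Trace) -> float:
    """Deterministic scorer: penalize empty or runaway outputs."""
    n = len(trace.output)
    return 1.0 if 0 < n <= 2000 else 0.0

def evaluate_batch(traces: list[Trace], threshold: float = 0.9) -> dict:
    """One scheduled run: score a batch of live traces, compute the
    per-dimension trend point, and flag drift against a threshold."""
    scores = [length_scorer(t) for t in traces]
    mean = statistics.mean(scores) if scores else 0.0
    return {"mean_score": mean, "alert": mean < threshold}

# Scheduling (cron, Airflow, etc.) would call this hourly or daily.
result = evaluate_batch([Trace("hi", "hello"), Trace("bye", "")])
```

A real setup would persist `mean_score` per run so the score-over-time trend (step 4) is queryable, and route `alert` into a notification channel (step 5).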

## Pre-deploy vs continuous patterns

| Aspect       | Pre-deploy agentic  | Continuous                  |
| ------------ | ------------------- | --------------------------- |
| Trace set    | curated, fixed      | rolling, sampled production |
| Cadence      | per-PR, per-release | daily, hourly               |
| Bar          | block merge         | alert on drift              |
| Cost profile | bursty              | steady                      |

## Sampling

For high-volume traffic, sample. Sampling bias is the trap: the production distribution drifts, and a fixed sampling strategy drifts with it. Re-validate the sampling strategy periodically against the full traffic distribution.
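One way to reduce that bias is stratified sampling: fix a sample size per stratum (route, intent, tenant) so low-volume but important strata are not drowned out by the dominant one. A minimal sketch, where the `route` key and stratum sizes are illustrative assumptions:

```python
import random
from collections import defaultdict

def stratified_sample(traces: list[dict], key: str,
                      per_stratum: int, seed: int = 0) -> list[dict]:
    """Sample up to `per_stratum` traces from each stratum, so rare
    strata keep representation alongside high-volume ones."""
    rng = random.Random(seed)
    strata: dict[str, list[dict]] = defaultdict(list)
    for t in traces:
        strata[t[key]].append(t)
    sample: list[dict] = []
    for group in strata.values():
        sample.extend(rng.sample(group, min(per_stratum, len(group))))
    return sample

# 100 routine chat traces, 3 rare refund traces: the refund stratum
# is kept whole instead of being sampled away.
traces = [{"route": "chat"}] * 100 + [{"route": "refund"}] * 3
picked = stratified_sample(traces, key="route", per_stratum=5)
```

Periodic re-validation then means checking that the strata themselves still match production, not just the per-stratum counts.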

## Cost management

Continuous evaluation can become a large consumer of ECU. Two levers:

* **Sample by stakes** — sample less of low-importance traffic, more of high-stakes traffic
* **Prefer scorers** — use cheap deterministic scorers liberally; reserve judges for the residual subjective criteria scorers cannot capture

## Where to next

* [Trace evaluations (Premium)](/8.-evaluate-score-the-outputs/trace-evaluations.md)
* [Stratix Premium — Notifications](https://github.com/LayerLens/gitbook-full/blob/main/13-reference/sdk-python/notifications.md)
* [Use case: Continuous evaluation](/7.-observe-see-whats-happening/continuous-evaluation.md)
* [Tutorial: Score live traces](/8.-evaluate-score-the-outputs/04-score-traces.md)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.layerlens.ai/7.-observe-see-whats-happening/continuous-evaluation.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
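For a programmatic agent, the request above amounts to URL-encoding the question into the `ask` parameter. A minimal stdlib sketch (the example question is illustrative):

```python
import urllib.parse
import urllib.request

# Build the documented ask-query URL; the question must be URL-encoded.
base = ("https://docs.layerlens.ai/7.-observe-see-whats-happening/"
        "continuous-evaluation.md")
question = "How do I alert on score drift?"
url = base + "?ask=" + urllib.parse.quote(question)

# resp = urllib.request.urlopen(url)   # uncomment to perform the GET
# answer = resp.read().decode()
```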
