# What is LayerLens Stratix?

LayerLens Stratix is a complete platform for evaluating AI — from the moment you pick a model, through the build, into production, and across an organization. The platform is structured as **two product lines** (Model Evaluations and Agentic Evaluations) delivered through **three customer-facing experiences** (Stratix Public, Stratix Premium, and the Python SDK) that share one catalog, one judge engine, and one trace pipeline.

If you came in via the marketing site, you've seen the two product lines. If you're starting from this documentation, you'll mostly hear about the three experiences. They reconcile: the products are *what* you can do; the experiences are *where* you do it.

## High Level Overview

Most AI teams ship by intuition. They pick a model because it's popular. They iterate prompts because the last few examples felt better. They watch CSAT in production because that's the only signal that survives. By the time they know something's wrong, customers have already noticed.

Stratix gives you a real signal. Three of them, actually:

1. **Public catalog** — 200+ models scored against 52+ benchmarks, refreshed quarterly. See what's actually best at the task you care about, not what's loudest.
2. **Private workspace** — run evaluations on your own data, build LLM judges that match your team's quality bar, and score live traces. Stop guessing whether the new model is better; measure it.
3. **CI/CD gates** — wire evaluations into your pipeline. Block regressions. Promote winners. Ship with the same confidence you ship traditional code.

## The two product lines (what you can do)

* **Model Evaluations** — what the public catalog showcases: model + benchmark + score. Use it to pick the right model and to track how the frontier is moving.
* **Agentic Evaluations** — pre- and post-deployment quality gates for multi-step AI agents: assertions, deterministic rules, and LLM judges combined into a verdict + root-cause report.

Both share the same trace pipeline, the same judge engine, the same scorer library. You don't pick "one product" — you pick the workflow.

## What ships today

* **Models:** 200+ in the catalog, including GPT-5.3, Claude Opus 4.6, Gemini 3.1 Pro/Flash.
* **Benchmarks:** 52+ public benchmarks (MMLU, HumanEval, GSM8K, AGIEval, MATH, etc.).
* **Public evaluations:** 2,000+ runs visible at `stratix.layerlens.ai`.
* **Compare models:** head-to-head model comparisons across any benchmark.
* **Evaluation spaces:** public and private workspaces bundling models + datasets + scoring.
* **Scorers:** code graders for objective dimensions.
* **Judges:** LLM-as-a-judge for subjective dimensions, with the GEPA optimizer.
* **Traces:** upload trace JSON; run trace evaluations against scorers and judges.
* **Learning library + paths:** structured curriculum on AI evaluation.
* **ECU credits:** pay-as-you-go billing for compute-intensive runs.
* **BYOK custom models:** point at your own OpenAI-compatible endpoint.
* **Multi-org:** users belong to multiple orgs; switch from the picker.
* **Auth:** GitHub OAuth, Google OAuth, email/password.
* **Search, notifications, onboarding wizard.**
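As a concrete illustration of the trace workflow in the list above, here is a minimal sketch of building and serializing a trace payload in Python. The field names (`trace_id`, `steps`, `metadata`) and values are hypothetical, chosen only to show the shape of "upload trace JSON"; they do not reflect Stratix's actual trace schema.

```python
import json

# Hypothetical trace payload; field names are illustrative only and
# do not come from Stratix's documented trace schema.
trace = {
    "trace_id": "example-001",
    "steps": [
        {"role": "user", "content": "Summarize the quarterly report."},
        {"role": "assistant", "content": "Revenue grew 12% quarter over quarter."},
    ],
    "metadata": {"model": "example-model", "latency_ms": 850},
}

# Serialize to JSON, the format the trace upload expects.
payload = json.dumps(trace, indent=2)
print(payload)
```

Once uploaded, a trace like this is what scorers and judges run against.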

## Where to next

* [Three experiences](/1.-introduction/three-experiences.md) — Public, Premium, SDK side-by-side
* [Agentic evaluations](/4.1-general-use-cases/agentic-evals-overview.md) — the pre- and post-deployment workflow
* [Pricing](/1.-introduction/01-introduction.md) — what each tier includes
* [Getting started](/2.-get-started/02-get-started.md) — pick a path and ship


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.layerlens.ai/1.-introduction/what-is-layerlens-stratix.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question, along with relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
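The query mechanism above can be sketched in Python. Only the page URL and the `ask` query parameter come from this page; the helper name `ask_url` and the sample question are illustrative.

```python
from urllib.parse import urlencode

# Documentation page that accepts the `ask` query parameter.
DOCS_PAGE = "https://docs.layerlens.ai/1.-introduction/what-is-layerlens-stratix.md"

def ask_url(question: str) -> str:
    """Build the GET URL for a natural-language documentation query."""
    # urlencode handles percent-escaping of spaces and punctuation.
    return f"{DOCS_PAGE}?{urlencode({'ask': question})}"

# Example: a specific, self-contained question.
url = ask_url("Which benchmarks are in the public catalog?")
print(url)
```

Issuing an HTTP GET on the resulting URL (for example with `urllib.request.urlopen`) returns the answer and its supporting excerpts.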
