# Three experiences (Public · Premium · SDK)

LayerLens Stratix is delivered as three distinct customer-facing experiences. They share one catalog, one judge engine, and one trace pipeline — but each one is shaped for a specific kind of user and a specific kind of question.

## Side by side

|                             | Stratix Public                      | Stratix Premium                            | Python SDK                                                                 |
| --------------------------- | ----------------------------------- | ------------------------------------------ | -------------------------------------------------------------------------- |
| **Where**                   | `stratix.layerlens.ai` — signed out | `stratix.layerlens.ai` — after you sign in | `pip install layerlens --extra-index-url https://sdk.layerlens.ai/package` |
| **Auth**                    | none                                | OAuth (Google, GitHub) + email             | API key (`X-API-Key`)                                                      |
| **Audience**                | researchers, buyers, the curious    | builders, operators, admins                | builders, automation, CI/CD                                                |
| **What you do**             | browse, compare, read               | build, evaluate, govern                    | automate, integrate, batch                                                 |
| **Catalog visibility**      | full public catalog                 | full public + private                      | full public + private                                                      |
| **Private evaluations**     | no                                  | yes                                        | yes                                                                        |
| **Build judges**            | no                                  | yes                                        | yes                                                                        |
| **Trace upload**            | no                                  | yes                                        | yes                                                                        |
| **GEPA judge optimization** | no                                  | yes                                        | yes                                                                        |
| **Org and team management** | no                                  | yes                                        | n/a                                                                        |
| **Cost**                    | free                                | free + ECU PAYG                            | free + ECU PAYG                                                            |
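
The SDK row above authenticates with an API key sent in the `X-API-Key` header. As a minimal sketch of what that looks like at the raw HTTP level — the endpoint path and key value here are illustrative assumptions, not documented routes:

```python
import urllib.request

# Hypothetical endpoint path -- consult the SDK/API reference for real routes.
url = "https://sdk.layerlens.ai/api/v1/models"

# API calls authenticate with an API key in the X-API-Key header.
req = urllib.request.Request(url, headers={"X-API-Key": "YOUR_API_KEY"})

# Inspect the prepared request without sending it.
headers = {k.lower(): v for k, v in req.header_items()}
print(headers["x-api-key"])
```

In practice the SDK sets this header for you; the raw form only matters if you call the API directly.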

## Stratix Public — the showcase

Anonymous browsing. No account required. Built for researchers, AI buyers, and anyone who wants to know "which model is actually best at X right now."

You can:

* Browse 175+ models with metadata, benchmark scores, and the latest leaderboard
* Browse 52+ benchmarks with descriptions and per-model scores
* Browse 2,000+ public evaluations
* Compare any two models head-to-head across any shared benchmark
* Browse public evaluation spaces — curated bundles of model + dataset + scoring
* Search the entire catalog
* Read the quarterly research reports

You cannot:

* Run a private evaluation on your data
* Save or share work to a personal workspace
* Build judges or scorers
* See trace data

Stratix Public is the doorway. Most users start here, find a model or a benchmark they care about, and continue into Premium when they're ready to evaluate against their own data.

## Stratix Premium — the workspace

Logged-in. Multi-org. Pay-as-you-go on compute. Built for teams shipping AI features.

You can:

* Run **private evaluations** on your own data
* Build **LLM judges** for subjective dimensions and optimize them with **GEPA**
* Upload **traces** and run **trace evaluations** against scorers and judges
* Manage **scorers** (code graders) at the org level
* Run **agentic evaluations** — pre- and post-deployment quality gates for multi-step agents
* Use **BYOK custom models** — register your own OpenAI-compatible endpoint
* Manage **organizations**, multi-org membership, and team seats
* Buy and consume **ECU credits**
* Browse the **Learning Library** and follow **Paths**

The left rail contains: Home, Evaluations, Models, Benchmarks, Scorers, Spaces, Agent Evaluation (Traces, Judges), Learning (Library, Paths), Settings, and Notifications. The ECU balance and a Buy Now button appear in the header. The in-app **Assistant** offers context-aware help, and the **Switch toggle** lets you flip between the Public and Premium catalogs without losing context.

## Python SDK — the automation surface

Install with `pip install layerlens --extra-index-url https://sdk.layerlens.ai/package` (currently v1.3.0). The SDK exposes 11 resources and ships with 138 sample programs.

You can:

* Programmatically create evaluations, judges, scorers, traces
* Run trace evaluations as part of CI/CD
* Sync results into your own data warehouse or BI
* Automate judge GEPA optimization
* Integrate with frameworks (LangChain, LlamaIndex, Haystack, etc.) via instrumentation samples

The SDK supports both synchronous and asynchronous use: `client.evaluations.create()` and `await client.evaluations.acreate()` both work.
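
A minimal sketch of the two styles side by side. Only the `create()`/`acreate()` method pair comes from this page; the import path, client constructor, and the `name` argument are assumptions to make the sketch concrete:

```python
# Sketch only: constructor and argument names are assumptions.
import asyncio
from layerlens import Client  # assumed import path

client = Client(api_key="YOUR_API_KEY")  # assumed constructor

# Synchronous call
evaluation = client.evaluations.create(name="nightly-regression")

# Asynchronous counterpart
async def main():
    return await client.evaluations.acreate(name="nightly-regression")

asyncio.run(main())
```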

## How they work together

A typical team uses all three:

1. **A researcher** browses the public catalog and picks the three candidate models that look strongest on a relevant benchmark.
2. **A builder** logs into Premium, runs a private evaluation on the team's own dataset, and chooses a winner. They build a judge that captures the team's quality bar.
3. **A platform engineer** wires the SDK into CI/CD: every PR that changes a prompt re-runs the evaluation. Judges and scorers gate the merge.
4. **An operator** ingests live traces and runs trace evaluations on a daily cadence. Quality regressions surface in a dashboard before they surface to customers.
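
Step 3's merge gate ultimately reduces to a threshold check on the scores that judges and scorers return. A minimal, SDK-independent sketch of that gating logic — the threshold values, dimension names, and example scores are all assumptions; how scores are fetched is out of scope here:

```python
# Assumed quality bar: each judged dimension must clear its floor.
THRESHOLDS = {"faithfulness": 0.90, "helpfulness": 0.85}

def gate(scores, thresholds):
    """Return the dimensions that fall below their threshold."""
    return sorted(dim for dim, floor in thresholds.items()
                  if scores.get(dim, 0.0) < floor)

# Pretend judge output for one PR's evaluation run:
failures = gate({"faithfulness": 0.93, "helpfulness": 0.81}, THRESHOLDS)
print(failures)  # a CI wrapper would exit non-zero when this is non-empty
```

In CI, a non-empty failure list would block the merge; the exit-code plumbing belongs in the pipeline script, not the check itself.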

## Where to next

* [Stratix Public](/1.-introduction/01-introduction.md)
* [Stratix Premium](https://github.com/LayerLens/gitbook-full/blob/main/05-select/catalog/premium-workspace-overview.md)
* [Python SDK](/6.-build-wire-your-code/sdk-python.md)
* [Getting started](/2.-get-started/02-get-started.md) — choose a path and ship


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.layerlens.ai/1.-introduction/three-experiences.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
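
Questions typically contain spaces and punctuation, so the `ask` value must be URL-encoded. A small sketch using only the standard library (the question text is just an example):

```python
from urllib.parse import urlencode

base = "https://docs.layerlens.ai/1.-introduction/three-experiences.md"
question = "Which plans include GEPA judge optimization?"

# urlencode handles percent-encoding the question string.
url = f"{base}?{urlencode({'ask': question})}"
print(url)
```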
