# Public vs Premium — by user intent

This page answers "what can I do with each tier?" by walking through the six user intents that map to the [Stratix Workflow](/1.-introduction/the-stratix-workflow.md). Each intent gets a like-for-like comparison between **Stratix Public** (free, anonymous) and **Stratix Premium** (logged-in workspace).

## Select — pick the model

> "Which model should I use to build this feature?"

| Surface     | What you can do                                                                                                                                                                                                                                                                                | Limitation                                                                                                                                                           |
| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Public**  | Browse the [leaderboard](/5.-select-pick-the-model/leaderboard.md). Compare candidates head-to-head on the [public benchmarks catalog](/5.-select-pick-the-model/benchmarks-catalog.md). Test a prompt against any catalog model (rate-limited, anonymous).                                    | Public benchmarks are general-capability. They won't decide for your specific domain (pricing in your tariff, citing your jurisdiction, summarizing your documents). |
| **Premium** | Everything Public does, plus your dashboard's catalog gains the **BYOK custom models** you've registered. Author **custom benchmarks** from your own data and run any candidate model against them. Narrow the field with Public, then confirm the pick against a domain-aligned private benchmark (sketched below). | None for this intent. |
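
For concreteness, here is a hedged sketch of that confirmation step. Nothing below is documented API: the `Client` class, the `run_evaluation` method, and all identifiers are illustrative assumptions about what the Premium SDK might expose.

```python
# Illustrative only: Client and run_evaluation are assumed names,
# not the documented Premium API.
from layerlens import Client  # hypothetical import

client = Client()  # assumes your API key is configured for the workspace

# Run each shortlisted catalog model against a private, domain-aligned
# benchmark and compare scores.
for model in ["candidate-a", "candidate-b"]:  # shortlist from the leaderboard
    run = client.run_evaluation(
        benchmark="my-domain-benchmark",  # a custom benchmark you authored
        model=model,
    )
    print(model, run.score)
```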

## Build / Instrument — wire your code

> "How do I get observability into what my AI is actually doing?"

| Surface     | What you can do                                                                                                                                                                                                                                                                           | Limitation                                                                                            |
| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------- |
| **Public**  | Read the docs. Use the SDK's free tier (anonymous test calls).                                                                                                                                                                                                                            | No persistent traces, no organization-scoped workspace.                                               |
| **Premium** | Install the Python SDK (`pip install layerlens --extra-index-url https://sdk.layerlens.ai/package`) with your API key. Use `@trace`, `span()`, `instrument_openai()`, `instrument_anthropic()`, and framework-specific callbacks (LangChain handler, etc.); a sketch follows below. Traces persist in your workspace. | Some framework adapters are in private preview; see [Integrations](/1.-introduction/01-introduction.md). |
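
A minimal instrumentation sketch, assuming the helpers keep the names the table uses and are importable from the package top level; exact import paths and signatures may differ, so treat this as orientation rather than reference.

```python
# Sketch only: assumes @trace, span, and instrument_openai are importable
# from the package top level; check the SDK reference for exact signatures.
from layerlens import trace, span, instrument_openai

instrument_openai()  # auto-captures OpenAI client calls as spans


def retrieve_context(question: str) -> str:
    return "stub context"  # stand-in for your own retrieval step


@trace  # records the decorated call as a trace in your workspace
def answer(question: str) -> str:
    with span("retrieve"):  # manual span around a sub-step (assumed usage)
        context = retrieve_context(question)
    with span("generate"):
        return f"(model call on {context!r} goes here)"
```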

## Observe — see real production behavior

> "What's my AI actually doing in production?"

| Surface     | What you can do                                                                                                                                                                                  | Limitation                            |
| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------- |
| **Public**  | n/a — Public is for catalog browsing, not production observability.                                                                                                                              | No persistent trace ingestion at all. |
| **Premium** | Ingest traces from any application via SDK, CLI, or REST. Browse, search, and filter traces in the dashboard. Drill into individual traces with full span trees. Real-time and historical views. | None.                                 |

## Evaluate — score the outputs

> "Is this output good? Against my benchmark, my rubric, my labels?"

| Surface     | What you can do                                                                                                                                                                                                                    | Limitation                                                                    |
| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------- |
| **Public**  | View public evaluations (2,000+ pre-run scoring runs across the catalog). Read methodology. Reproduce scores in your own workspace if you upgrade.                                                                                 | Cannot run an evaluation on your data. Cannot author or run scorers / judges. |
| **Premium** | Run private evaluations on your benchmarks. Author **LLM-backed scorers** (model + prompt) that are reusable across evaluations. Author **LLM judges** (versioned, GEPA-tunable). Apply judges to traces directly via **trace evaluation**. A hedged sketch follows below. | None. |
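
To make the scorer concept concrete, here is a hedged sketch of a private evaluation run. The `Client`, `create_scorer`, and `run_evaluation` names are assumptions for illustration, not the documented API.

```python
# Illustrative only: these method names are assumptions, not the
# documented Premium API.
from layerlens import Client  # hypothetical import

client = Client()

# An LLM-backed scorer is a model plus a prompt, reusable across runs.
accuracy = client.create_scorer(
    model="scorer-model",  # any catalog or BYOK model
    prompt="Score 0-1: is the answer factually consistent with the reference?",
)

# Score a candidate model against a private benchmark with that scorer.
run = client.run_evaluation(
    benchmark="my-domain-benchmark",
    model="candidate-model",
    scorers=[accuracy],
)
print(run.scores)
```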

## Improve — tune prompts, judges, models

> "My evaluation isn't where I need it. What do I tune?"

| Surface     | What you can do                                                                                                                                                                                                                                          | Limitation                             |
| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------- |
| **Public**  | n/a — improvement is workflow-internal.                                                                                                                                                                                                                  | No iteration loop available on Public. |
| **Premium** | Iterate on prompts and re-evaluate against the same benchmark. Run **GEPA optimization** to tune judges against labeled examples, typically lifting judge agreement with humans from 60–70% to 85–95% (sketched below). Maintain prompt versions as first-class artifacts. | None. |
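
A hedged sketch of that judge-tuning loop; `optimize_judge` and its parameters are assumed names used only to illustrate the shape of the workflow.

```python
# Illustrative only: optimize_judge and its parameters are assumed names.
from layerlens import Client  # hypothetical import

client = Client()

# Labeled examples pair an output with a human verdict; GEPA tunes the
# judge's prompt until its verdicts agree with the labels.
labeled = [
    {"output": "The tariff is $12/unit.", "label": "pass"},
    {"output": "The tariff is free.", "label": "fail"},
]

tuned = client.optimize_judge(
    judge="my-accuracy-judge",  # a versioned judge from the Evaluate stage
    examples=labeled,
)
print(tuned.agreement)  # the docs cite typical lifts from 60-70% to 85-95%
```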

## Govern — enforce gates in CI/CD and across the org

> "How do I make sure regressions never reach production?"

| Surface     | What you can do                                                                                                                                                                                                                                                                                                                   | Limitation      |
| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- |
| **Public**  | n/a — governance requires your team and your CI/CD.                                                                                                                                                                                                                                                                               | Not applicable. |
| **Premium** | Wire evaluations into CI/CD as quality gates (drop-in [GitHub Actions](/6.-build-wire-your-code/cicd-github-actions.md), GitLab, and Buildkite recipes; a minimal gate is sketched below). Run **continuous trace evaluation** on production samples with threshold alerts. SSO + role-based access (Enterprise tier). Audit log of every evaluation (Enterprise tier). | None. |
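
As a language-agnostic fallback to the drop-in recipes, a quality gate can be as simple as a script that fails the build when a score drops below a threshold. The `run_evaluation` call below is an illustrative assumption, not the documented API.

```python
# Minimal quality-gate sketch: fail the CI job when the score drops
# below a threshold. run_evaluation is an assumed method; prefer the
# drop-in GitHub Actions / GitLab / Buildkite recipes linked above.
import sys

from layerlens import Client  # hypothetical import

THRESHOLD = 0.85  # your regression bar

run = Client().run_evaluation(
    benchmark="release-gate-benchmark",  # hypothetical benchmark name
    model="candidate-model",
)
if run.score < THRESHOLD:
    print(f"Quality gate failed: {run.score:.2f} < {THRESHOLD}")
    sys.exit(1)  # nonzero exit fails the pipeline
```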

## Summary table

| Intent     |        Public       | Premium |
| ---------- | :-----------------: | :-----: |
| Select     |          ●          |    ●    |
| Instrument | (docs + test calls) |    ●    |
| Observe    |          —          |    ●    |
| Evaluate   |       (browse)      |    ●    |
| Improve    |          —          |    ●    |
| Govern     |          —          |    ●    |

● = fully supported · — = not available · parentheses = partial access, as described above

## Where to next

* [The Stratix Workflow](/1.-introduction/the-stratix-workflow.md) — the six-stage spine these intents map to
* [Stratix Public](/1.-introduction/01-introduction.md) — full Public surface
* [Stratix Premium](https://github.com/LayerLens/gitbook-full/blob/main/05-select/catalog/premium-workspace-overview.md) — full Premium surface
* [Public vs Premium feature matrix](/5.-select-pick-the-model/public-vs-premium-1.md) — feature-by-feature table
* [Pricing](/1.-introduction/01-introduction.md)
---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.layerlens.ai/1.-introduction/public-vs-premium-by-intent.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
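
For example, from Python (only the URL and the `ask` parameter are specified above; `requests` is a standard third-party HTTP client):

```python
# Queries this page's ask endpoint exactly as documented above.
import requests

resp = requests.get(
    "https://docs.layerlens.ai/1.-introduction/public-vs-premium-by-intent.md",
    params={"ask": "Which framework adapters are still in private preview?"},
    timeout=30,
)
print(resp.text)  # a direct answer plus relevant excerpts and sources
```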
