# Personas and paths

Stratix is used by five kinds of people. Each one has a different first question and a different "aha" moment. Pick your path; we'll get you there fast.

## Builder

> *"I'm shipping an AI feature and I want it to be good."*

You write code. Your name is on the PR that adds the new prompt or the new agent. You want a tight feedback loop: write a change, run an eval, see the score change.

**First questions:** "How do I write my first eval? How do I score outputs my eyes can't grade?"

**Path:**

1. [Sign up for Premium](/2.-get-started/sign-up.md)
2. [Tutorial 1: First evaluation in 10 minutes](/8.-evaluate-score-the-outputs/01-first-evaluation.md)
3. [Tutorial 2: Build your first judge](/8.-evaluate-score-the-outputs/02-first-judge.md)
4. [Workflow: Instrument](/9.-improve-tune-the-system/workflow.md) → [Evaluate](/9.-improve-tune-the-system/workflow.md)
5. [SDK quickstart](/2.-get-started/sdk.md)
6. [Cookbook](/2.-get-started/all-cookbook-recipes.md) — recipes by problem, framework, and integration

**You'll know you're winning when:** every prompt change has an eval result attached and you've stopped trusting your gut.
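
If it helps to see the loop concretely, here is a deliberately SDK-free sketch of what the Builder path builds toward: change a prompt, run it over a small dataset, score the outputs, and watch one number move. The dataset, `run_prompt`, and the token-overlap scorer are made-up stand-ins; the SDK quickstart (step 5) shows the real client that replaces them, and Tutorial 2 covers swapping the crude scorer for an LLM judge.

```python
DATASET = [
    {"input": "the meeting moved to thursday", "expected": "Meeting moved to Thursday."},
    {"input": "the invoice is 30 days overdue", "expected": "Invoice is 30 days overdue."},
]

def run_prompt(text: str) -> str:
    """Stand-in for your prompt or agent call; this is the thing under test."""
    return text.strip().capitalize() + "."

def score(output: str, expected: str) -> float:
    """Crude scorer: token overlap with the expected answer. An LLM judge does better."""
    out, exp = set(output.lower().split()), set(expected.lower().split())
    return len(out & exp) / max(len(exp), 1)

scores = [score(run_prompt(row["input"]), row["expected"]) for row in DATASET]
print(f"mean score: {sum(scores) / len(scores):.2f}")  # attach this number to the PR
```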

## Operator

> *"I run live AI in production and I need to keep it healthy."*

You own the on-call. You watch dashboards. You ship rollback scripts. AI quality is your latest reliability surface.

**First questions:** "How do I score real production traffic? How do I get paged when quality drifts?"

**Path:**

1. [Concept: Continuous evaluation](/7.-observe-see-whats-happening/continuous-evaluation.md)
2. [Concept: Traces and spans](/6.-build-wire-your-code/traces-and-spans.md)
3. [Tutorial 4: Score live traces](/8.-evaluate-score-the-outputs/04-score-traces.md)
4. [Workflow: Observe](/9.-improve-tune-the-system/workflow.md) → [Govern](/9.-improve-tune-the-system/workflow.md)
5. [Use case: AI quality gates in CI/CD](/4.1-general-use-cases/ai-quality-gates-cicd.md)
6. [Status and reliability](https://github.com/LayerLens/gitbook-full/blob/main/13-reference/status/README.md)

**You'll know you're winning when:** quality regressions surface to you before they surface to customers.
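
A common pattern in the Operator path is the CI/CD quality gate (step 5 above): run the eval suite, compare the score against a floor and the last accepted baseline, and fail the build on regression. The sketch below is an illustration under assumptions, not the documented integration; the file paths, JSON shape, and thresholds are placeholders, and the linked use case describes how to feed it real Stratix evaluation results.

```python
import json
import sys
from pathlib import Path

SCORE_FLOOR = 0.85      # hard floor: never ship below this
MAX_REGRESSION = 0.02   # allowed drop relative to the last accepted run

# Both files are placeholders for wherever your pipeline writes evaluation summaries.
current = json.loads(Path("eval_results/current.json").read_text())["mean_score"]
baseline = json.loads(Path("eval_results/baseline.json").read_text())["mean_score"]

if current < SCORE_FLOOR:
    sys.exit(f"quality gate failed: score {current:.3f} is below the floor {SCORE_FLOOR}")
if baseline - current > MAX_REGRESSION:
    sys.exit(f"quality gate failed: score dropped {baseline - current:.3f} vs baseline")

print(f"quality gate passed: {current:.3f} (baseline {baseline:.3f})")
```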

## Researcher

> *"I want to know which model is best for my use case."*

You don't (yet) need to wire anything up. You need ground truth — what's the frontier, what's commoditized, what's punching above its weight.

**First questions:** "Which model is best at structured-output tasks? How is open-weight catching up?"

**Path:**

1. [Stratix Public](/1.-introduction/01-introduction.md) — start here, no signup
2. [Models catalog](/5.-select-pick-the-model/models-catalog.md)
3. [Benchmarks catalog](/5.-select-pick-the-model/benchmarks-catalog.md)
4. [Compare models](/5.-select-pick-the-model/compare-models.md) — head-to-head
5. [Quarterly reports](/5.-select-pick-the-model/quarterly-reports.md)
6. [Public evaluations](/5.-select-pick-the-model/public-evaluations.md)
7. (Optional) [Sign up for Premium](/2.-get-started/sign-up.md) to run a private evaluation on your own data

**You'll know you're winning when:** you can answer "is X actually better than Y for our task" in under 5 minutes.

## Admin

> *"I'm setting up Stratix for my team."*

You provision seats, manage SSO, watch the bill, write the IT-approval memo.

**First questions:** "How does auth work? How do I manage multiple orgs and seats? What's the data residency story?"

**Path:**

1. [Sign up](/2.-get-started/sign-up.md)
2. [Organizations and multi-org](https://github.com/LayerLens/gitbook-full/blob/main/13-reference/sdk-python/organizations.md)
3. [Team management](/11.-admin/team-management.md)
4. [ECU credits and billing](/11.-admin/ecu-credits-billing.md)
5. [Settings](/11.-admin/settings.md)
6. [Enterprise](/1.-introduction/01-introduction.md) — SSO and data residency
7. [Security and compliance](/1.-introduction/01-introduction.md)

**You'll know you're winning when:** auth, billing, and access control are boring.

## Buyer

> *"I'm evaluating Stratix for purchase."*

You may not be the user. You're the budget owner, the AI-platform PM, the head of engineering. You need to know whether Stratix is the right bet — and whether your team will adopt it.

**First questions:** "How does Stratix compare to Braintrust/LangSmith/etc? What's the ROI? Who else uses it?"

**Path:**

1. [What is LayerLens Stratix?](/1.-introduction/what-is-layerlens-stratix.md)
2. [Three experiences](/1.-introduction/three-experiences.md)
3. [How Stratix compares](/1.-introduction/how-stratix-compares-deep.md)
4. [Pricing](/1.-introduction/01-introduction.md)
5. [Industry patterns](/4.2-industry-use-cases/travel-hospitality.md)
6. [Use cases](/4.1-general-use-cases/general.md)
7. [Enterprise](/1.-introduction/01-introduction.md)
8. [Security and compliance](/1.-introduction/01-introduction.md)

**You'll know you're winning when:** you have a one-page rationale and your team has run a real evaluation in the trial.

## Where to next

* [Getting started](/2.-get-started/02-get-started.md) — pick a path and ship
* [Workflow](/1.-introduction/the-stratix-workflow.md) — the spine that ties all five personas together


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.layerlens.ai/1.-introduction/personas-paths.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
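
For example, using Python's `requests` library (the question text and timeout are illustrative):

```python
import requests

question = "Which tutorial covers scoring live production traces?"
response = requests.get(
    "https://docs.layerlens.ai/1.-introduction/personas-paths.md",
    params={"ask": question},  # encoded as ?ask=<question>
    timeout=30,
)
response.raise_for_status()
print(response.text)  # a direct answer plus relevant excerpts and sources
```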
