# Architecture deep dive

LayerLens Stratix is one platform delivered through three customer-facing experiences. This page explains the components beneath those experiences and how they relate.

For tenant-isolation, encryption, and audit-trail details, see [Security and compliance](/11.-admin/11-admin.md). For deployment options and region selection, contact your account team.

## Components

### Catalog

The catalog is the system of record for **models**, **benchmarks**, **public evaluations**, and **public spaces**. It powers Stratix Public and is the same source the Premium app reads. Models and benchmarks carry rich metadata: provider, context window, modalities, licensing, score history.
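
A minimal sketch of reading one model record from the catalog. The route, the model ID, and every field name below are illustrative assumptions, not the documented schema; see the [API reference](/13.1-sdk-and-apis/api.md) for the real shapes.

```python
import requests

# Hypothetical route and model ID -- the path and every field name
# below are assumptions for illustration, not a documented schema.
resp = requests.get(
    "https://api.layerlens.ai/api/v1/catalog/models/example-model",
    timeout=30,
)
resp.raise_for_status()
model = resp.json()

# The metadata fields the catalog carries for models and benchmarks.
print(model["provider"])
print(model["context_window"])
print(model["modalities"])       # e.g. ["text", "image"]
print(model["licensing"])
print(model["score_history"])    # score history across benchmarks
```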

### Evaluation engine

The evaluation engine takes a (model, dataset, scoring config) triple and produces per-row results. Public and private evaluations both run through the same engine. It supports four evaluation types (illustrative configs follow the list):

* **Standard evaluations** — model + benchmark + scorer/judge config
* **Compare-models** — two or more models on the same dataset
* **Trace evaluations** — scoring config applied to ingested traces
* **Agentic evaluations** — assertions + deterministic rules + judges combined
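
As a rough illustration of the four types, here is what their configs might look like as plain Python dicts. Every field name is an assumption chosen for readability, not the documented request schema:

```python
# Illustrative configs for the four evaluation types. Every field name
# is an assumption, not the documented schema.

standard = {
    "type": "standard",
    "model": "example-model",
    "benchmark": "example-benchmark",
    "scoring": {"scorer": "exact-match", "judge": None},
}

compare = {
    "type": "compare-models",
    "models": ["model-a", "model-b"],     # two or more models
    "dataset": "shared-dataset",          # the same dataset for all
    "scoring": {"scorer": "exact-match"},
}

trace_eval = {
    "type": "trace",
    "trace_filter": {"app": "support-bot"},   # which ingested traces
    "scoring": {"judge": "helpfulness-v2"},
}

agentic = {
    "type": "agentic",
    "assertions": ["tool_called:search"],      # assertions
    "rules": [{"regex": r"^REFUND-\d+$"}],     # deterministic rules
    "judges": ["task-completion-v1"],          # judges, combined
}
```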

### Judge engine

LLM-as-a-judge graders are first-class. The judge engine handles:

* Judge definition and versioning (a definition sketch follows this list)
* Judge invocation (sync and async)
* **GEPA judge optimization** — automatic prompt tuning against labeled ground truth
* System judges shipped with Premium as starting points
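
A hypothetical judge definition, sketched as a dict. The field names (including the `gepa` block) mirror the bullet points above; they are assumptions, not the SDK's actual API:

```python
# A hypothetical judge definition. Field names (including the `gepa`
# block) are assumptions that mirror the capabilities listed above.
judge = {
    "name": "helpfulness",
    "version": 2,              # judges are versioned
    "model": "example-model",  # the grading model behind the judge
    "prompt": (
        "Rate the assistant reply from 1 to 5 for helpfulness. "
        "Respond with the number only."
    ),
    # GEPA optimization tunes the prompt against labeled ground truth.
    "gepa": {"ground_truth_dataset": "helpfulness-golden-set"},
}
```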

### Trace pipeline

Traces are the unit of "what did my AI actually do." The pipeline:

1. Ingests trace data via API or SDK
2. Persists traces with their spans (one plausible payload shape is sketched after these steps)
3. Indexes them for trace evaluation
4. Applies trace evaluations on demand or on a schedule
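
One plausible shape for a trace-with-spans payload, modeled on common tracing formats. The structure and field names are assumptions, not the documented ingest schema:

```python
# One plausible trace-with-spans payload, modeled on common tracing
# formats. Structure and field names are assumptions, not the schema.
trace = {
    "trace_id": "tr_01",
    "app": "support-bot",
    "spans": [
        {
            "span_id": "sp_01",
            "name": "retrieve_docs",
            "start": "2025-01-01T12:00:00Z",
            "end": "2025-01-01T12:00:01Z",
            "attributes": {"query": "refund policy"},
        },
        {
            "span_id": "sp_02",
            "parent_span_id": "sp_01",
            "name": "llm_call",
            "attributes": {"model": "example-model", "output_tokens": 812},
        },
    ],
}
```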

### Scorers and code graders

**Scorers** are LLM-backed graders (model + prompt) you reuse across benchmarks. **Code graders** are deterministic checks — exact match, regex, JSON-schema validity, statistical fairness, etc. — that run inside the evaluation runtime.
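
Code graders are easy to picture as plain functions. These three checks (exact match, regex, JSON validity) are ordinary Python written to illustrate the idea; they are not the platform's grader API:

```python
import json
import re

# Three deterministic checks in the spirit of code graders. These are
# plain Python illustrations, not the platform's grader API.

def exact_match(output: str, expected: str) -> bool:
    return output.strip() == expected.strip()

def regex_match(output: str, pattern: str) -> bool:
    return re.search(pattern, output) is not None

def valid_json(output: str) -> bool:
    try:
        json.loads(output)
        return True
    except ValueError:
        return False

assert exact_match(" 42 ", "42")
assert regex_match("REFUND-1234", r"^REFUND-\d+$")
assert valid_json('{"score": 5}')
```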

### Learning

A library of curated content (articles, walkthroughs, recipes) and structured Paths. The Learning Portal is exposed in Premium.

### BYOK custom models

OpenAI-compatible endpoints can be registered as custom models. The evaluation engine treats them like first-class models — you can put them in compare-models runs, run benchmarks against them, and score traces from them.
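
A hypothetical registration call. The endpoint path and fields are assumptions; the only real requirement stated on this page is that the target speaks the OpenAI-compatible API:

```python
import requests

API_KEY = "YOUR_STRATIX_API_KEY"  # issued from the Premium UI

# Hypothetical registration call -- the path and fields are assumptions.
# The target only needs to expose an OpenAI-compatible endpoint.
resp = requests.post(
    "https://api.layerlens.ai/api/v1/custom-models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "name": "my-private-model",
        "base_url": "https://models.internal.example.com/v1",
        "credential": "secret-ref://my-private-model-key",  # BYOK credential
    },
    timeout=30,
)
resp.raise_for_status()
```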

### ECU billing

Compute consumed by evaluations, judges, GEPA runs, and trace evaluations bills against an ECU balance. The Free tier ships with a starter ECU allotment; Premium is pay-as-you-go.

### Authentication

* **OAuth sign-in** (Google, GitHub) and email-and-password for the Premium dashboard
* **API keys** for SDK, CLI, and programmatic access — issued from the Premium UI, revocable, scoped by organization
* **SSO** (SAML / OIDC) on Enterprise tier

### Multi-tenancy

Every record is scoped to the customer's organization. A user can belong to multiple organizations and switch between them from the Premium top bar. Public catalog data is global.

## How a request flows

A Premium evaluation request (a code sketch follows these steps):

1. The dashboard at `stratix.layerlens.ai` (or an SDK call) sends an authenticated request to the Stratix API.
2. The platform authenticates, scopes the request to the calling organization, and queues the evaluation job.
3. The platform calls the target model (using the customer's BYOK credential or a platform credential as configured), runs the configured scorers and judges, and persists results.
4. The dashboard shows the run progressing and renders results once complete.
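
The same flow in Python, end to end. The paths, payload fields, status values, and bearer-token scheme are all assumptions for illustration; consult the [API reference](/13.1-sdk-and-apis/api.md) for the real contract:

```python
import time

import requests

BASE = "https://api.layerlens.ai/api/v1"
HEADERS = {"Authorization": "Bearer YOUR_STRATIX_API_KEY"}

# Steps 1-2: send an authenticated request; the platform scopes it to
# your organization and queues the job. Paths and fields are assumed.
job = requests.post(
    f"{BASE}/evaluations",
    headers=HEADERS,
    json={
        "model": "example-model",
        "benchmark": "example-benchmark",
        "scoring": {"scorer": "exact-match"},
    },
    timeout=30,
).json()

# Step 3 runs server-side. Step 4: async work returns a handle, so we
# poll it until the run reaches a terminal state (state names assumed).
while True:
    run = requests.get(
        f"{BASE}/evaluations/{job['id']}", headers=HEADERS, timeout=30
    ).json()
    if run["status"] in ("completed", "failed"):
        break
    time.sleep(5)

print(run["status"])
```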

A trace ingest:

1. The SDK or CLI sends trace data to the trace endpoint with an API key.
2. The platform validates and persists the trace with its spans.
3. If trace evaluations are configured, the platform runs scorers and judges against the new trace per the configured schedule.

## Three customer-facing experiences sit on this platform

* **Stratix Public** — read-only browse layer over the catalog and public evaluations
* **Stratix Premium** — full-fidelity workspace UI over every component
* **Python SDK** — programmatic surface, backed by the same REST API the dashboard uses

## API conventions

The REST API at `api.layerlens.ai` follows standard conventions:

* JSON over HTTPS
* Versioned with `/api/v1` prefix; breaking changes bump the major version
* Cursor-based pagination (see the sketch after this list)
* Async work returns a handle; clients poll for completion
* Request IDs in every response for traceability
* Standard rate-limit headers
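
A sketch that exercises three of these conventions at once: cursor pagination, request IDs, and bearer auth. The parameter and header names (`cursor`, `next_cursor`, `x-request-id`) are assumptions, not documented names:

```python
import requests

HEADERS = {"Authorization": "Bearer YOUR_STRATIX_API_KEY"}
url = "https://api.layerlens.ai/api/v1/traces"
cursor = None

while True:
    params = {"cursor": cursor} if cursor else {}
    resp = requests.get(url, headers=HEADERS, params=params, timeout=30)
    resp.raise_for_status()

    # Every response carries a request ID for traceability (the header
    # name is assumed here).
    print("request id:", resp.headers.get("x-request-id"))

    page = resp.json()
    for item in page["data"]:
        print(item["id"])

    # Cursor-based pagination: follow next_cursor until it runs out
    # (parameter names assumed).
    cursor = page.get("next_cursor")
    if not cursor:
        break
```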

## Where to next

* [Three experiences](/1.-introduction/three-experiences.md) — what each surface does
* [Python SDK reference](/6.-build-wire-your-code/sdk-python.md)
* [API reference](/13.1-sdk-and-apis/api.md)
* [Concepts library](/13.3-concepts-library-+-architecture/concepts-library.md)

