# Compare two models

Putting two models side-by-side is one of the highest-leverage things Stratix does. Use it during model selection, when a new model drops, or when leadership asks "is X better than Y?"

## Steps

### 1. Open Compare models

Go to [`stratix.layerlens.ai/compare-models`](https://stratix.layerlens.ai/compare-models).

### 2. Pick the first model

Search and select a model in the left column. The page populates with that model's benchmark scores.

### 3. Pick the second model

Search and select a second model in the right column. The page now shows both models' scores on every benchmark either model has been evaluated against.

### 4. Read the comparison

The score table highlights:

* Where each model wins (green)
* Where they tie (gray)
* Where the gap is large vs small

### 5. Pick a benchmark to focus on

Click any benchmark row to drill into the comparison for that benchmark — including raw scores, sample inputs, and (where available) sample outputs from each model.

### 6. Save the comparison URL

The URL encodes both model IDs, so the comparison is reproducible. Bookmark it or share it with your team.
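Since the URL carries both model IDs, it can be parsed programmatically, e.g. to log which comparisons a team is tracking. A minimal sketch using Python's standard library; the query-parameter names (`left`, `right`) and the model-ID values are assumptions for illustration, not documented behavior:

```python
from urllib.parse import urlsplit, parse_qs

# Hypothetical comparison URL -- the parameter names "left" and "right"
# are assumed here; check an actual saved URL for the real structure.
url = "https://stratix.layerlens.ai/compare-models?left=gpt-5.3&right=claude-opus-4.6"

params = parse_qs(urlsplit(url).query)
left_model = params["left"][0]
right_model = params["right"][0]
print(left_model, right_model)
```

The same approach works in reverse: build the URL from two model IDs to generate comparison links in bulk.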

## Verify

You should be able to compare GPT-5.3 vs Claude Opus 4.6 on MMLU and see both models' scores on the same row.

## Common patterns

* **Frontier vs cost-optimized.** Compare a frontier model against a smaller, cheaper one for tasks where you suspect the smaller model is good enough.
* **Provider against provider.** Compare an OpenAI model against an Anthropic model on the benchmark closest to your use case.
* **Old vs new generation.** When a new generation drops, compare it against the predecessor on benchmarks you care about.

## Where to next

* [Browse benchmarks](/2.-get-started/browse-benchmarks.md)
* [Premium — Compare models with your dataset](/2.-get-started/first-evaluation.md)
* [Stratix Public — Compare models reference](/5.-select-pick-the-model/compare-models.md)
* [Use case: Model evaluation](/4.1-general-use-cases/model-evaluation.md)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.layerlens.ai/2.-get-started/compare-two-models.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
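The request above can be sketched in a few lines of Python. This builds the query URL with the question properly percent-encoded (an HTTP GET on the resulting URL returns the answer); the example question is illustrative:

```python
from urllib.parse import quote

BASE = "https://docs.layerlens.ai/2.-get-started/compare-two-models.md"

def ask_url(question: str) -> str:
    """Build the documentation-query URL; the question is URL-encoded."""
    return f"{BASE}?ask={quote(question)}"

# Example question -- chosen for illustration only.
print(ask_url("Which benchmarks include sample outputs?"))
```

Percent-encoding matters here: spaces and punctuation in a natural-language question are not valid in a raw query string, so always encode before sending the GET.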
