# Select (workflow)

Stage 1 of the workflow. Before you instrument or evaluate anything, **select** the model you'll build with. The choice should be evidence-based: leaderboards, benchmark coverage, and side-by-side scores against your target capabilities.

## The question this stage answers

*"Which model should I use to build this feature?"*

## What to do

1. **Browse the leaderboard.** Start at the [public leaderboard](/5.-select-pick-the-model/leaderboard.md) for an overview of how frontier and cost-optimized models are ranked across capabilities.
2. **Review the benchmarks catalog.** Open the [benchmarks catalog](/5.-select-pick-the-model/benchmarks-catalog.md) and pick 3–5 benchmarks that match your task: code, math, reasoning, multimodal, multilingual, instruction-following. Adding more benchmarks doesn't add more signal.
3. **Look at per-benchmark scores.** Each benchmark's page shows every model's score, a top-N leaderboard, and a score history over time. Don't trust a single headline number; look at the dimensions that matter to your task.
4. **Optionally, run a side-by-side comparison.** Use [Compare models](/5.-select-pick-the-model/compare-models.md) to put two or three candidates head-to-head across the benchmarks you care about. Public-catalog scores narrow the shortlist; private evaluations later in the workflow will decide.
5. **Match capability to constraint.** Weigh frontier versus cost-optimized tiers, your latency budget, context-window needs, licensing constraints, and your tolerance for vendor lock-in; the sketch after this list shows one way to fold these into a shortlist.
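
To make steps 2 and 5 concrete, here is a minimal sketch of turning catalog scores into a shortlist. Every model name, score, weight, and constraint below is a hypothetical placeholder; in practice you would read the scores off the per-benchmark pages.

```python
# Hypothetical catalog scores (0-100); model names and numbers are illustrative.
CANDIDATES = {
    "frontier-a": {"code": 88, "reasoning": 91, "long_context": 84, "latency_ms": 1200, "context_window": 200_000},
    "frontier-b": {"code": 85, "reasoning": 89, "long_context": 90, "latency_ms": 950,  "context_window": 128_000},
    "cost-opt-c": {"code": 78, "reasoning": 80, "long_context": 70, "latency_ms": 400,  "context_window": 128_000},
}

# Weights reflect the task: here, a code-heavy feature (per step 2, 3-5 dimensions).
WEIGHTS = {"code": 0.5, "reasoning": 0.3, "long_context": 0.2}

# Hard constraints from step 5: latency budget and context-window need.
MAX_LATENCY_MS = 1000
MIN_CONTEXT_WINDOW = 100_000

def weighted_score(scores: dict) -> float:
    """Weighted average over the benchmark dimensions that matter."""
    return sum(scores[dim] * w for dim, w in WEIGHTS.items())

# Filter by hard constraints first, then rank by weighted score.
shortlist = sorted(
    (
        (name, weighted_score(s))
        for name, s in CANDIDATES.items()
        if s["latency_ms"] <= MAX_LATENCY_MS and s["context_window"] >= MIN_CONTEXT_WINDOW
    ),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, score in shortlist[:3]:
    print(f"{name}: {score:.1f}")
```

The shape of the decision is what matters: filter by hard constraints first, then rank by a weighted score over the 3–5 dimensions you picked, and keep the top one to three survivors.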

## What you'll have at the end

* A shortlist of 1–3 candidate models for the feature
* A documented basis for the choice (which benchmarks, which scores, which trade-offs); a sketch of such a record follows this list
* A baseline expectation of how the candidates should perform on your actual data — useful later when you compare against your private evaluations
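
The documented basis can be as lightweight as a record committed next to the feature, as sketched below with hypothetical names and numbers; later stages can diff your private-evaluation results against the `expected_baseline` field.

```python
# Hypothetical selection record; every value is illustrative.
# Committing this next to the feature gives later stages a baseline
# to compare private-evaluation results against.
SELECTION_RECORD = {
    "feature": "contract-summarizer",
    "shortlist": ["frontier-b", "cost-opt-c"],
    "benchmarks": ["HumanEval", "MBPP", "needle-in-a-haystack"],
    "weights": {"code": 0.5, "reasoning": 0.3, "long_context": 0.2},
    "constraints": {"max_latency_ms": 1000, "min_context_window": 100_000},
    "expected_baseline": {"frontier-b": 87.2, "cost-opt-c": 77.0},
    "decided_on": "2025-01-15",
}
```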

## Public-catalog selection vs custom benchmarks

The Stratix Public catalog covers the broad capability surface. **For domain-specific selection** (pricing in your tariff, citing your jurisdiction, summarizing your documents), public benchmarks narrow the field, but they won't decide it. The Premium tier lets you author **custom benchmarks** built from your own data and run the same candidates against them: Public narrows the selection, and a domain-aligned private benchmark confirms it.

## Common patterns

* **Frontier vs cost-optimized** — pick the smallest model that meets your bar on the dimensions that matter
* **Provider against provider** — knowing your second-best protects against vendor lock-in
* **New generation drop** — when a model family upgrades, re-run the same selection exercise to confirm or replace your incumbent
* **Capability-specialized choice** — for a code task, weight HumanEval and MBPP more heavily; for a long-context task, weight needle-in-a-haystack benchmarks more heavily

## Where to next

* [Build →](/5.-select-pick-the-model/workflow.md)
* [Stratix Public — Leaderboard](/5.-select-pick-the-model/leaderboard.md)
* [Stratix Public — Benchmarks catalog](/5.-select-pick-the-model/benchmarks-catalog.md)
* [Stratix Public — Compare models](/5.-select-pick-the-model/compare-models.md)
* [Stratix Premium — Benchmarks (custom)](/5.-select-pick-the-model/benchmarks.md)
* [Use case: Model evaluation](/4.1-general-use-cases/model-evaluation.md)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.layerlens.ai/5.-select-pick-the-model/workflow.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
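
For example, from Python (a minimal sketch using the `requests` library; the endpoint and `ask` parameter come from the spec above, while the question text and the assumption of a plain-text response are illustrative):

```python
import requests

# Ask the docs a specific, self-contained question in natural language.
# The endpoint and `ask` parameter are documented above; the question
# below is only an example, and the response is assumed to be readable text.
resp = requests.get(
    "https://docs.layerlens.ai/5.-select-pick-the-model/workflow.md",
    params={"ask": "Which benchmarks fit a long-context summarization task?"},
    timeout=30,
)
resp.raise_for_status()
print(resp.text)  # direct answer plus excerpts and sources, per the spec above
```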
