# Benchmarks (Premium custom)

{% hint style="info" %}
**Available in Stratix Premium.** This surface is part of the logged-in workspace at [stratix.layerlens.ai](https://stratix.layerlens.ai). Stratix Public users can browse the catalog but cannot use this feature.
{% endhint %}

The Premium Benchmarks page mirrors the public catalog and adds **your private benchmarks** — datasets you've uploaded that are scoped to your org.

URL: [`stratix.layerlens.ai/dashboard/benchmarks`](https://stratix.layerlens.ai/dashboard/benchmarks)

## What you can do

* Browse all public benchmarks (52+)
* Browse your org's private benchmarks
* Upload a new private dataset as a benchmark
* View score history per benchmark across all your runs
* Create a new evaluation directly from a benchmark page

## Uploading a private benchmark

A private benchmark has:

* **Name and description** — for your team's context
* **Dataset** — a CSV, JSONL, or Parquet file with an input column and an expected-output column (other schemas are also supported)
* **Scoring config (optional)** — default scorers/judges to apply when this benchmark is selected for an evaluation

Upload from the Benchmarks page → **Upload private benchmark**.
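As a reference point, here is a minimal sketch of a JSONL dataset in the input/expected-output shape described above. The column names `input` and `expected_output` are illustrative assumptions, not a required schema — you map your own columns during upload.

```python
import json

# Illustrative rows — the column names "input" and "expected_output"
# are assumptions for this sketch; map your own columns during upload.
rows = [
    {"input": "Translate to French: Hello, world!", "expected_output": "Bonjour, le monde !"},
    {"input": "What is 2 + 2?", "expected_output": "4"},
]

# Write one JSON object per line (the JSONL format).
with open("my_benchmark.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row, ensure_ascii=False) + "\n")
```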

## Per-benchmark page (private)

For your private benchmarks:

* Dataset preview (first 50 rows)
* Schema (input → expected output mapping)
* Score history of your evaluations against this benchmark
* "Run new evaluation against this benchmark" shortcut

## Where to next

* [Stratix Public — Benchmarks catalog](/5.-select-pick-the-model/benchmarks-catalog.md)
* [Evaluations](/8.-evaluate-score-the-outputs/evaluations.md)
* [Concept: Models and benchmarks](/5.-select-pick-the-model/models-and-benchmarks.md)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request against the current page URL, passing your question in the `ask` query parameter:

```
GET https://docs.layerlens.ai/5.-select-pick-the-model/benchmarks.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
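For example, here is a minimal sketch of that request in Python using the `requests` library; the question text is illustrative, and `params` handles the URL encoding:

```python
import requests

# Ask a specific, self-contained question against this page's URL.
# The question below is an illustrative example.
url = "https://docs.layerlens.ai/5.-select-pick-the-model/benchmarks.md"
params = {"ask": "What file formats are accepted for private benchmark datasets?"}

response = requests.get(url, params=params)
response.raise_for_status()

# The response body contains a direct answer plus relevant excerpts and sources.
print(response.text)
```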
