# Tutorial 3: Wire CI/CD quality gates

**Time:** ~30 minutes **Level:** Intermediate **You'll build:** A GitHub Actions workflow that runs a Stratix evaluation on every PR and blocks merge on regression.

## What you'll learn

* How to call Stratix from a CI runner
* How to compare a run against a baseline
* How to fail a build on regression

## Prerequisites

* [ ] Completed [Tutorials 1 and 2](/2.-get-started/all-tutorials.md)
* [ ] A GitHub repo with an AI feature you can iterate on
* [ ] A Stratix evaluation space you want to gate against
* [ ] A GitHub Actions runner

## Step 1: Get an API key

In Stratix Premium → **Settings → API keys** → **New key**. Scope it to evaluation read/write. Copy the key.

In GitHub: **Settings → Secrets and variables → Actions → New repository secret**. Name it `LAYERLENS_STRATIX_API_KEY` and paste the key as the value.
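In CI, a missing or mis-scoped secret tends to surface as a confusing auth error deep in the run. A fail-fast guard at the top of any script makes it obvious instead. A minimal sketch (the `require_api_key` helper is illustrative, not part of the SDK):

```python
import os

def require_api_key(name: str = "LAYERLENS_STRATIX_API_KEY") -> str:
    """Exit with a clear message if the secret was not injected into the job."""
    key = os.environ.get(name)
    if not key:
        raise SystemExit(f"{name} is not set; check the repository secret.")
    return key
```

Call it once at the top of the script and pass the result to the client constructor.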

## Step 2: Create the CI script

`scripts/ci-eval.py`:

```python
import os
import sys
from layerlens import Stratix

client = Stratix(api_key=os.environ["LAYERLENS_STRATIX_API_KEY"])

# Re-run the gating space
space_id = "your-space-id"
run = client.spaces.run(space_id=space_id)
result = client.spaces.wait_for_completion(run.id, timeout=600)

# Fetch baseline (most recent main-branch run)
baseline = client.spaces.latest_run(space_id=space_id, branch="main")

# Compare
delta = result.score - baseline.score
tolerance = 0.01
print(f"Score: {result.score}")
print(f"Baseline: {baseline.score}")
print(f"Delta: {delta:.4f}")

if delta < -tolerance:
    print("REGRESSION beyond tolerance — failing the build.")
    sys.exit(1)
else:
    print("OK")
```
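The pass/fail decision is just a delta check against the tolerance, so you can sanity-check it locally before wiring up CI. A minimal sketch of the same logic (the `gate` helper is illustrative, not part of the SDK):

```python
def gate(score: float, baseline: float, tolerance: float = 0.01) -> bool:
    """Pass unless the score regressed by more than the tolerance."""
    return (score - baseline) >= -tolerance

# Improvements and small dips pass; real regressions fail.
assert gate(0.82, 0.80)        # improved
assert gate(0.795, 0.80)       # dip within tolerance
assert not gate(0.78, 0.80)    # regression beyond tolerance
```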

## Step 3: Add the workflow

`.github/workflows/eval.yml`:

```yaml
name: AI Eval Gate

on:
  pull_request:
    paths:
      - 'src/prompts/**'
      - 'src/agents/**'

jobs:
  eval:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - name: Install
        run: |
          pip install layerlens --extra-index-url https://sdk.layerlens.ai/package
      - name: Run eval
        run: python scripts/ci-eval.py
        env:
          LAYERLENS_STRATIX_API_KEY: ${{ secrets.LAYERLENS_STRATIX_API_KEY }}
```

## Step 4: Open a test PR

Make a small change to a prompt under `src/prompts/`. Push. Watch the GitHub Action run. Verify the eval kicks off.

## Step 5: Tune the tolerance

Start strict (0.01). Loosen it as you learn the evaluation's natural variance. If you get false positives from noise, you have two options:

* Increase the tolerance
* Make the eval less noisy (more rows, or a fixed random seed)
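One data-driven way to pick the tolerance is to look at the spread of recent baseline scores and allow roughly two standard deviations of run-to-run noise. A sketch, assuming you have a list of scores from recent main-branch runs (the numbers below are made up):

```python
from statistics import stdev

def suggest_tolerance(recent_scores: list[float], k: float = 2.0) -> float:
    """Allow k standard deviations of run-to-run noise before failing."""
    return k * stdev(recent_scores)

# e.g. scores from the last five main-branch runs
tolerance = suggest_tolerance([0.81, 0.79, 0.80, 0.82, 0.80])
```

With `k=2.0`, roughly 95% of noise-only runs should pass; raise `k` if the gate still flakes.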

## What's next

* [Tutorial 4: Score live traces](/8.-evaluate-score-the-outputs/04-score-traces.md)
* [Use case: AI quality gates in CI/CD](/4.1-general-use-cases/ai-quality-gates-cicd.md)
* [Workflow: Govern](/6.-build-wire-your-code/workflow.md)
* [Integrations: GitHub Actions](/6.-build-wire-your-code/migration.md)

