# Pattern: fair-lending platform

A bank deploys AI in many places — fraud detection, underwriting, customer service, marketing personalization. Each product team owns its own model, but **fair-lending exposure cuts across all of them**. A platform team owns that cross-cutting risk.

This pattern shows how the platform team standardizes fair-lending evaluation across product teams using a single org-scoped Stratix configuration that every team consumes.

## What's at stake

| Risk dimension                                                         | Magnitude                                           | Framework                            |
| ---------------------------------------------------------------------- | --------------------------------------------------- | ------------------------------------ |
| ECOA / fair-lending civil enforcement                                  | Per-violation civil penalties + consent-decree cost | CFPB / DOJ enforcement actions       |
| Per-declined-legitimate-transaction friction                           | \~$25 per false positive                            | Card-network industry benchmarks     |
| Annual US card fraud (the upside the model targets)                    | $4.2B                                               | Federal Reserve / Nilson Report data |
| Brand and customer-trust impact under public consent decree            | Multi-quarter remediation cycles                    | Public consent-decree filings        |
| State DOI / state regulator exposure on insurance and lending products | State-by-state penalty bands                        | State regulator actions              |

## The evaluation pattern

A single **org-scoped Stratix configuration** every product team uses:

1. **Bias scorer (custom code)** — computes per-segment false-positive disparity ratios against protected-class proxies (ZIP code, surname-derived demographics, age band). Flags any segment exceeding the **1.25× threshold** derived from regulatory adverse-impact guidance.
2. **Compare models** — when a product team proposes a new model, the platform team's bias scorer runs against it. Teams pick the variant that minimizes disparity without sacrificing primary-task performance (fraud recall, underwriting accuracy, etc.); a comparison sketch follows this list.
3. **Continuous trace evaluation** — production traffic from each product team is sampled hourly; bias trends surface on the platform team's dashboard with per-team and per-feature breakdowns (monitoring sketch after this list).
4. **Shared judge library** — helpfulness, faithfulness, safety judges curated centrally and GEPA-tuned against pooled labels (≥50 per judge, contributed across the 12 product teams); reused everywhere. See [Bootstrap a judge before GEPA](https://github.com/LayerLens/gitbook-full/blob/main/08-evaluate/guides/bootstrap-judges.md) for what to ship before the labels are there.
5. **Per-feature tolerances** — different features have different bars: tolerance is configured per evaluation space (tolerance sketch after this list).
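
For step 2, a minimal sketch of bias-aware variant selection, reusing the evaluations API from the code section below. The candidate model IDs, scorer-ID variables, and the shape of the result's `scores` field are illustrative assumptions, not the documented interface:

```python
# Sketch: score two candidate models with the shared bias scorer and pick the
# variant with the lowest disparity that still clears the recall bar.
# Candidate IDs, scorer-ID variables, and result/score shapes are assumptions.
from layerlens import Stratix

client = Stratix()
candidates = ["fraud-model-v7", "fraud-model-v8-candidate"]

results = {}
for model_id in candidates:
    ev = client.evaluations.create(
        name=f"bias-check-{model_id}",
        model_id=model_id,
        dataset_id="fraud-labeled-v3",
        scorers=[bias_scorer_id, recall_scorer_id],  # shared + team scorer IDs
    )
    results[model_id] = client.evaluations.wait_for_completion(ev.id)

# Keep variants that pass both scorers, then minimize the worst disparity ratio.
viable = {
    m: r.scores["fair-lending-disparity"]
    for m, r in results.items()
    if r.scores["fair-lending-disparity"].passed and r.scores["fraud-recall"].passed
}
winner = min(viable, key=lambda m: max(viable[m].ratios.values()))
```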
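
For step 3, a sketch of the hourly production sampling config, continuing with the same client. The `trace_evaluations` surface shown here (sample rate, schedule, breakdowns) is an assumed shape for illustration; see the trace evaluations page linked below for the real interface:

```python
# Sketch: hourly sampled bias scoring of each team's production traffic.
# The trace_evaluations API and its parameters are assumptions for illustration.
trace_eval = client.trace_evaluations.create(
    name="fraud-prod-bias-monitor",
    project_id="team-fraud",         # one config per product team
    scorers=[bias_scorer_id],
    sample_rate=0.05,                # score a 5% sample of traces
    schedule="hourly",
    breakdowns=["team", "feature"],  # powers the per-team dashboard views
)
```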
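
And for step 5, per-feature tolerance bands. Assuming tolerances attach to an evaluation space, the calls below are illustrative; only the 1.25× org ceiling comes from the pattern itself:

```python
# Sketch: different bars for different features. The evaluation_spaces API
# and tolerance schema are assumptions; only the 1.25x org ceiling is given.
client.evaluation_spaces.update(
    "underwriting",
    tolerances={"fair-lending-disparity": {"max_ratio": 1.10}},  # stricter bar
)
client.evaluation_spaces.update(
    "customer-service",
    tolerances={"fair-lending-disparity": {"max_ratio": 1.25}},  # org ceiling
)
```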

**Onboarding:** new product teams get the bias scorer pre-wired into their CI from day one; a minimal gate sketch follows the Python example below.

## Configuration in code

```python
# Python (SDK) — shared bias scorer used across product teams
import os

from layerlens import Stratix

client = Stratix()

bias_scorer = client.scorers.create_code(
    name="fair-lending-disparity",
    code="""
# group_by_segment / false_positive_rate are helpers available to code scorers
groups = group_by_segment(input, output, segments=['zip_tier', 'surname_demographic'])
ratios = {g: false_positive_rate(g) / false_positive_rate('baseline') for g in groups}
result = {'passed': max(ratios.values()) <= 1.25, 'ratios': ratios}
""",
)

# Each product team consumes the shared scorer in their own evaluation;
# primary_recall_scorer_id is the team's existing task-metric scorer.
evaluation = client.evaluations.create(
    name="fraud-model-with-bias-check",
    model_id=os.environ["MODEL_ID"],
    dataset_id="fraud-labeled-v3",
    scorers=[bias_scorer.id, primary_recall_scorer_id],
)
result = client.evaluations.wait_for_completion(evaluation.id)
```
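
For the CI gate promised above, a minimal sketch that fails the build when the shared scorer fails. The per-scorer fields on `result` (`scores`, `passed`, `ratios`) are assumed shapes for illustration:

```python
# Sketch: fail the CI job on a fair-lending violation. The result's per-scorer
# fields (scores, passed, ratios) are assumed shapes for illustration.
import sys

bias = result.scores["fair-lending-disparity"]
if not bias.passed:
    worst = max(bias.ratios, key=bias.ratios.get)
    print(f"fair-lending gate failed: {worst} at {bias.ratios[worst]:.2f}x (limit 1.25x)")
    sys.exit(1)
```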

```typescript
// TypeScript (REST) — same eval from a CI runner
const r = await fetch("https://stratix.layerlens.ai/api/v1/evaluations", {
  method: "POST",
  headers: {
    "X-API-Key": process.env.LAYERLENS_STRATIX_API_KEY!,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    name: "fraud-model-with-bias-check",
    model_id: process.env.MODEL_ID,
    dataset_id: "fraud-labeled-v3",
    scorers: [biasScorerId, primaryRecallScorerId],
  }),
});
if (!r.ok) throw new Error(`evaluation create failed: ${r.status}`);
```

## What you get

* Bias becomes a measurement, not a discovery. Every product team sees the same scoreboard.
* Cross-team conversations stop being "your team's eval said it was fine" and start being "the org's quality bar didn't move on this PR."
* Regulator-readable evaluation history quantifies disparity and tracks remediation per release.
* Platform team curates \~5 cross-cutting judges; product teams add their own task-specific judges as needed.

## Stratix capabilities used

* [Custom code graders](https://github.com/LayerLens/gitbook-full/blob/main/08-evaluate/cookbook/custom-code-scorer.md) — bias-disparity computations
* [Compare models](/5.-select-pick-the-model/compare-models.md) — variant selection with bias-aware criteria
* [Trace evaluations](/8.-evaluate-score-the-outputs/trace-evaluations.md) — continuous sampled production scoring
* [Multi-tenancy + shared org library](/7.-observe-see-whats-happening/multi-tenancy.md) — judges and scorers reusable across teams
* [Standardize judges across teams](https://github.com/LayerLens/gitbook-full/blob/main/08-evaluate/guides/standardize-judges.md)

## Replicate this

* [Industry → Financial services](/4.2-industry-use-cases/financial-services.md)
* [Workflow: Govern](/9.-improve-tune-the-system/workflow.md)
* [Cookbook: GitHub Actions integration](/6.-build-wire-your-code/integration-github-actions.md)
* [Use case: AI quality gates in CI/CD](/4.1-general-use-cases/ai-quality-gates-cicd.md)

