# Pattern: visual quality inspection

A manufacturer deploys AI computer vision on assembly lines to detect product defects (scratches, dents, misalignment, color variation, contamination). The model runs at line speed, triggering quality holds when defects are detected. False negatives let defective product reach customers (recall risk); false positives stop the line ($10K–$50K per hour of lost throughput).

This pattern shows how to evaluate inspection accuracy across the conditions actually present on the factory floor.

## What's at stake

| Risk dimension                               | Magnitude                                      | Framework                           |
| -------------------------------------------- | ---------------------------------------------- | ----------------------------------- |
| Per-recall cost                              | $500K–$5M (industry-dependent)                 | Industry recall-cost benchmarks     |
| Per-hour line-stop cost from false positives | $10K–$50K                                      | Manufacturing throughput economics  |
| Safety-incident exposure for missed defects  | Potential workers'-comp + product-liability    | OSHA + product-liability frameworks |
| ISO 9001 / IATF 16949 audit findings         | Conditional certification, customer audit risk | ISO / IATF audit standards          |
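The asymmetry between these two error directions can be made concrete with a back-of-envelope comparison. A minimal sketch, using midpoints from the table above; every figure here (costs, rates, the per-escape recall probability) is an illustrative assumption, not a measurement:

```python
# Illustrative assumptions: midpoints of the cost ranges in the table above.
RECALL_COST = 2_750_000           # midpoint of $500K-$5M per recall
LINE_STOP_COST_PER_HOUR = 30_000  # midpoint of $10K-$50K per hour

def expected_hourly_cost(fn_per_hour, recall_prob_per_escape, fp_stop_hours):
    """Expected hourly cost of inspection errors.

    fn_per_hour: defective units escaping per hour (false negatives)
    recall_prob_per_escape: assumed probability one escape triggers a recall
    fp_stop_hours: hours of line stoppage per hour caused by false positives
    """
    escape_cost = fn_per_hour * recall_prob_per_escape * RECALL_COST
    stop_cost = fp_stop_hours * LINE_STOP_COST_PER_HOUR
    return escape_cost + stop_cost
```

Even a tiny per-escape recall probability can dominate modest stoppage time, e.g. `expected_hourly_cost(2, 1e-3, 0.05)` weighs two escapes per hour against three minutes of stoppage. This is why the pattern below tracks recall per defect class rather than a single aggregate accuracy.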

## The evaluation pattern

A **stratified evaluation** treats lighting condition, line speed, and operator shift as first-class dimensions.

1. **Per-defect-class scorer** — precision and recall by defect category. Trace tags include `defect_type`, `lighting_condition`, `line_speed_bucket`, and `shift`; results break out per-tag.
2. **Custom code grader (condition disparity)** — per-class accuracy across `lighting_condition` and `line_speed_bucket` must stay within 5 percentage points of the baseline. Disparities flag as regressions.
3. **Latency scorer** — inspection decisions must complete within the line-cycle-time SLO (commonly 100–500 ms at p99).
4. **Compare models** — a proposed model variant must close any condition disparity without degrading per-class accuracy.
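The step-2 disparity check can be sketched outside the platform in plain Python. Everything here is illustrative: the trace dict shape (`tags`, `is_defect`, `predicted_defect`) and helper names are assumptions for the sketch, not the Stratix trace schema or grader runtime:

```python
from collections import defaultdict

def recall_by_bucket(traces, keys=("lighting_condition", "line_speed_bucket")):
    """Group inspection traces by condition tags and compute defect recall
    (true positives / actual defects) per bucket."""
    tp = defaultdict(int)
    actual = defaultdict(int)
    for t in traces:
        bucket = tuple(t["tags"][k] for k in keys)
        if t["is_defect"]:
            actual[bucket] += 1
            if t["predicted_defect"]:
                tp[bucket] += 1
    return {b: tp[b] / actual[b] for b in actual}

def disparity_gap(recalls, max_gap=0.05):
    """Pass when best- and worst-bucket recall differ by at most 5 points."""
    gap = max(recalls.values()) - min(recalls.values())
    return {"passed": gap <= max_gap, "gap": gap, "per_bucket_recall": recalls}
```

The key design choice is grouping before scoring: a model that is 99% accurate overall but 80% accurate under dim lighting fails this check, whereas it would sail through an aggregate-recall gate.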

**Continuous trace evaluation:** sampled at 1% of inspection decisions, hourly. Threshold alerts route to plant operations and the quality-engineering team.
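Because the latency SLO in step 3 is stated at p99, the sampled traces can be spot-checked offline as well as per-trace. A minimal sketch, using the nearest-rank percentile definition (the 500 ms threshold and the duration field are assumptions carried over from the pattern above):

```python
import math

def p99(durations_ms):
    """Nearest-rank 99th percentile of inspection-decision latencies."""
    ordered = sorted(durations_ms)
    rank = max(0, math.ceil(0.99 * len(ordered)) - 1)
    return ordered[rank]

def latency_slo_check(durations_ms, slo_ms=500):
    """Pass when the p99 latency of a sample is within the line-cycle SLO."""
    value = p99(durations_ms)
    return {"passed": value <= slo_ms, "p99_ms": value}
```

A percentile check over the hourly sample catches tail-latency drift that a per-trace threshold alone can miss until it is already stopping the line.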

## Configuration in code

```python
# Python (SDK)
from layerlens import Stratix

client = Stratix()

# Custom code grader: per-bucket recall must stay within 5 percentage points
# across lighting and line-speed conditions.
condition_disparity = client.scorers.create_code(
    name="lighting-line-speed-disparity",
    code="""
buckets = group_by(traces, ['lighting_condition', 'line_speed_bucket'])
acc = {b: per_class_recall(buckets[b]) for b in buckets}
gap = max(acc.values()) - min(acc.values())
result = {'passed': gap <= 0.05, 'per_bucket_recall': acc}
""",
)

# Latency scorer: each inspection decision must finish within the 500 ms SLO.
latency = client.scorers.create_code(
    name="line-cycle-latency",
    code="result = {'passed': trace.duration_ms <= 500}",
)

# Continuous trace evaluation: 1% sample of quality-inspection traces, hourly.
# defect_classifier_id refers to the per-defect-class scorer from step 1.
trace_eval = client.trace_evaluations.create(
    trace_set={"tags": {"feature": "quality-inspection"}, "sample_rate": 0.01},
    scorers=[condition_disparity.id, latency.id, defect_classifier_id],
    schedule="hourly",
)
```

```typescript
// TypeScript (REST)
const r = await fetch("https://stratix.layerlens.ai/api/v1/trace-evaluations", {
  method: "POST",
  headers: {
    "X-API-Key": process.env.LAYERLENS_STRATIX_API_KEY!,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    trace_set: { tags: { feature: "quality-inspection" }, sample_rate: 0.01 },
    scorers: [conditionDisparityId, latencyId, defectClassifierId],
    schedule: "hourly",
  }),
});
```

## What you get

* Lighting and line-speed disparities measured, not absorbed into an aggregate metric.
* A pre- and post-deployment gate that blocks any model variant that worsens accuracy under any condition.
* Per-shift accuracy visibility supports root-cause analysis when defect-escape rates rise.
* Auditor-ready evidence for ISO 9001 / IATF 16949 reviews.

## Stratix capabilities used

* [Custom code graders](https://github.com/LayerLens/gitbook-full/blob/main/08-evaluate/cookbook/custom-code-scorer.md) — condition-disparity computations
* [Compare models](/5.-select-pick-the-model/compare-models.md) — variant selection
* [Trace evaluations](/8.-evaluate-score-the-outputs/trace-evaluations.md) — continuous sampled scoring
* [Notifications](https://github.com/LayerLens/gitbook-full/blob/main/13-reference/sdk-python/notifications.md) — plant-ops routing

## Replicate this

**Get started:** [Concept: Traces and spans](/6.-build-wire-your-code/traces-and-spans.md) — capturing inspection-line traces with the right tags is the foundation for the stratified evaluation here.

* [Industry → Manufacturing](/4.2-industry-use-cases/manufacturing.md)
* [Workflow: Evaluate](/9.-improve-tune-the-system/workflow.md)
* [Concept: Continuous evaluation](/7.-observe-see-whats-happening/continuous-evaluation.md)

