# Streaming

Async evaluations and trace evaluations queue and run in the background. The default consumption pattern is **polling** (`wait_for_completion()` in the SDK). For responsive UIs, Stratix supports **server-sent events (SSE)**.

## When to use polling

* CI / batch jobs — runs to completion, then exits
* Single-shot scripts
* Anywhere the caller is comfortable blocking for seconds-to-minutes
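Under the hood, `wait_for_completion()` amounts to a poll loop with backoff. If you need to roll your own (e.g. around a raw HTTP client), a generic sketch — the helper name, intervals, and fetch callable are illustrative, not SDK API; only the terminal statuses come from this page:

```python
import time

def poll_until_done(fetch_status, interval=2.0, max_interval=30.0, timeout=3600.0):
    """Poll `fetch_status` until it returns a terminal status, with capped backoff."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("completed", "failed"):  # terminal statuses
            return status
        time.sleep(interval)
        interval = min(interval * 2, max_interval)  # exponential backoff, capped
    raise TimeoutError("evaluation did not finish in time")
```

The backoff cap keeps request volume reasonable on long runs without making the final wait more than ~30 seconds stale.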

## When to use streaming

* Dashboards showing live progress to a user
* Long-running evaluations (large datasets, GEPA optimizations)
* Multi-eval runs where you want incremental results as each completes

## SSE endpoint

```
GET /api/v1/evaluations/{id}/stream
X-API-Key: ll_...
Accept: text/event-stream
```

Events emitted:

| Event          | Payload                                                     |
| -------------- | ----------------------------------------------------------- |
| `status`       | Current status (`queued`, `running`, `completed`, `failed`) |
| `progress`     | `{rows_complete: N, rows_total: M}`                         |
| `row_complete` | Row-level result (one per scored example)                   |
| `final`        | Final result object on completion                           |
| `error`        | Error object on failure                                     |
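On the wire, these events use standard `text/event-stream` framing: `event:` and `data:` fields, with a blank line terminating each event. A minimal parser sketch, independent of any SDK (the payload shapes are assumptions based on the table above):

```python
import json

def parse_sse(text):
    """Parse a text/event-stream body into (event_type, payload) pairs."""
    events = []
    event_type, data_lines = "message", []
    for line in text.splitlines():
        if line.startswith("event:"):
            event_type = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "":  # blank line terminates one event
            if data_lines:
                events.append((event_type, json.loads("\n".join(data_lines))))
            event_type, data_lines = "message", []
    return events

sample = (
    "event: progress\n"
    'data: {"rows_complete": 5, "rows_total": 100}\n'
    "\n"
)
print(parse_sse(sample))  # [('progress', {'rows_complete': 5, 'rows_total': 100})]
```

In practice you would feed this incrementally from a streaming HTTP response rather than a complete string, but the framing rules are the same.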

## Streaming from your backend to the browser

Your backend consumes the SSE from Stratix; pipe events to the browser via your own SSE or WebSocket connection. See [calling Stratix from a backend](https://github.com/LayerLens/gitbook-full/blob/main/06-build/guides/calling-stratix-from-a-backend.md) for the proxy shape.
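The proxy's job is mostly re-framing: read events from Stratix, then write them back out to the browser in the same SSE format. A helper sketch for the outbound side — the function name is ours, and the inbound consumption would use any HTTP client with streaming support:

```python
import json

def format_sse(event_type, data, event_id=None):
    """Serialize one event in text/event-stream framing for the browser."""
    lines = []
    if event_id is not None:
        lines.append(f"id: {event_id}")  # lets the browser resume via Last-Event-ID
    lines.append(f"event: {event_type}")
    lines.append(f"data: {json.dumps(data)}")
    return "\n".join(lines) + "\n\n"  # blank line ends the event
```

Emitting an `id:` field on each forwarded event means the browser's `EventSource` can reconnect to your proxy the same way your proxy reconnects to Stratix.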

## SDK helpers

The Python SDK exposes async iteration:

```python
async for event in client.evaluations.stream(eval_id):
    if event.type == "row_complete":
        print(event.row.score)
    elif event.type == "final":
        print("done:", event.result.accuracy)
        break
```

## Limits

* One concurrent stream per eval per consumer
* Streams close after 60 minutes of inactivity (use polling for very long jobs)
* Server-side reconnect is supported; clients should resume by sending the `Last-Event-ID` header
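Resuming after a dropped connection means replaying the last event id you saw. A sketch of the header handling — the header names come from this page, everything else is illustrative:

```python
def resume_headers(api_key, last_event_id=None):
    """Build request headers for opening (or resuming) the SSE stream."""
    headers = {"X-API-Key": api_key, "Accept": "text/event-stream"}
    if last_event_id is not None:
        # Ask the server to resume delivery after this event.
        headers["Last-Event-ID"] = last_event_id
    return headers
```

Track the `id` of each event as it arrives; on a connection error, reopen the stream with these headers instead of starting from scratch.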

## See also

* [Async vs sync workflow](/6.-build-wire-your-code/async-vs-sync-workflow.md)
* [Calling Stratix from a backend](https://github.com/LayerLens/gitbook-full/blob/main/06-build/guides/calling-stratix-from-a-backend.md)
* [SDK: async patterns](https://github.com/LayerLens/gitbook-full/blob/main/13-reference/sdk-python/async-patterns.md)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.layerlens.ai/6.-build-wire-your-code/streaming.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
