Text chat SSE
Text chat uses a normal HTTP POST and streams assistant output through Server-Sent Events.
Create a text session
```bash
curl -X POST "https://api.hyponema.ai/sessions" \
  -H "Authorization: Bearer $HYPONEMA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "agent_id": "agent_...",
    "user_id": "user_123",
    "modality": "text"
  }'
```

Send a message

```bash
curl -N -X POST "https://api.hyponema.ai/sessions/$SESSION_ID/messages" \
  -H "Authorization: Bearer $HYPONEMA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "text": "Can you summarize our last conversation?" }'
```

The `text` field is required and must be between 1 and 8000 characters.
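The same request can be built from any HTTP client. A minimal Python sketch, enforcing the documented 1–8000 character limit client-side before the request is sent (the helper name and the stdlib `urllib` approach are illustrative, not part of the API):

```python
import json
import urllib.request

API_BASE = "https://api.hyponema.ai"

def build_message_request(session_id: str, api_key: str, text: str) -> urllib.request.Request:
    """Build the POST for a chat message, rejecting `text` outside the
    documented 1-8000 character range before it reaches the API."""
    if not 1 <= len(text) <= 8000:
        raise ValueError("`text` must be between 1 and 8000 characters")
    return urllib.request.Request(
        f"{API_BASE}/sessions/{session_id}/messages",
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Send with urllib.request.urlopen(req) and read the response body
# incrementally; it is an SSE stream, not a single JSON document.
```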
Stream events
The response emits Server-Sent Events. Full payload shapes live in the Chat API reference; the events you’ll see are:
| Event | When it fires |
|---|---|
| `turn_start` | Assistant turn started. Carries `turn_id` and the resolved model. |
| `token` | Streamed text chunk. |
| `tool_call` | The model invoked a tool. Includes the streaming-JSON `arguments_partial`. |
| `tool_result` | Tool returned. Includes the final result the assistant sees. |
| `guard` | A guard fired (no-go zone, escalation, redirect). |
| `turn_end` | Assistant turn completed. Includes latency and token counts. |
| `error` | Turn failed. Carries `detail` and `error_kind`. |
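Consuming the stream means splitting it into `event:`/`data:` line pairs and dispatching on the event name. A minimal Python parser sketch follows; the sample payload fields (e.g. `text` inside `token` data) are assumptions for illustration — the Chat API reference documents the real shapes:

```python
import json

def parse_sse(lines):
    """Minimal SSE parser: yields (event, data) pairs from an iterable of
    text lines (e.g. the decoded response body of the message POST).
    Data payloads are decoded as JSON."""
    event, data_parts = "message", []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_parts.append(line[len("data:"):].strip())
        elif line == "" and data_parts:
            # A blank line terminates the event.
            yield event, json.loads("\n".join(data_parts))
            event, data_parts = "message", []

# Example: accumulate the assistant's reply from `token` events.
sample = [
    "event: turn_start",
    'data: {"turn_id": "turn_1"}',
    "",
    "event: token",
    'data: {"text": "Hello"}',
    "",
    "event: turn_end",
    'data: {"turn_id": "turn_1"}',
    "",
]
reply = "".join(d["text"] for e, d in parse_sse(sample) if e == "token")
```

A production client should also handle `guard` and `error` events explicitly rather than treating the stream as tokens only.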
Runtime behavior
The chat runner streams tokens, executes tool calls inline, applies guards, and persists the assistant and tool turns through a fresh unit of work.
Do not build a second chat loop in your application. Treat the SSE stream as the source of assistant output and use traces for debugging.