An Agent is the core component that powers conversational AI in the Atoms SDK. It’s the “brain” that listens to what users say, thinks about how to respond, and speaks back—all in real-time.

What is an Agent?

In Atoms, an agent is implemented as an OutputAgentNode—a specialized node that handles the complete conversation loop:
  1. Listen — Receives transcribed speech from the user
  2. Think — Processes the input with an LLM to generate a response
  3. Speak — Streams the response as audio back to the user
This happens continuously, creating a natural back-and-forth conversation.
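The loop above can be sketched in plain Python. This is an illustrative skeleton only, not the Atoms SDK API: transcribe_user_turn, stream_llm_tokens, and synthesize_and_play are hypothetical stand-ins for the speech-to-text, LLM, and text-to-speech stages that an OutputAgentNode wires together.

```python
import asyncio

# Minimal sketch of the Listen -> Think -> Speak loop described above.
# All function names here are illustrative stand-ins, not Atoms SDK calls.

async def transcribe_user_turn() -> str:
    """Stand-in for the speech-to-text stage (Listen)."""
    await asyncio.sleep(0.1)          # pretend we waited for the user to finish
    return "What are your opening hours?"

async def stream_llm_tokens(prompt: str):
    """Stand-in for the LLM stage (Think): yields tokens as they are generated."""
    for token in ["We're ", "open ", "9am ", "to ", "5pm."]:
        await asyncio.sleep(0.05)     # simulate generation latency
        yield token

async def synthesize_and_play(token: str) -> None:
    """Stand-in for the text-to-speech stage (Speak)."""
    print(token, end="", flush=True)

async def conversation_loop() -> None:
    history: list[dict[str, str]] = []            # conversation context across turns
    for _ in range(1):                            # a single turn, for the demo
        user_text = await transcribe_user_turn()              # 1. Listen
        history.append({"role": "user", "content": user_text})

        reply = ""
        async for token in stream_llm_tokens(user_text):      # 2. Think
            reply += token
            await synthesize_and_play(token)                   # 3. Speak, streamed
        history.append({"role": "assistant", "content": reply})
        print()

asyncio.run(conversation_loop())
```

Because speaking happens inside the token loop rather than after it, audio can start playing before the model has finished generating, which is the real-time streaming behavior described below.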

Why Atoms Agents?

Real-Time Streaming: Responses start playing while the LLM is still generating. No waiting.
Interruption Handling: When users speak mid-response, the agent stops and listens.
Context Management: Conversation history is maintained automatically.
Tool Calling: Execute functions mid-conversation to check databases or call APIs.
Multi-Provider LLM: Use OpenAI, Anthropic, or bring your own model.
Production Ready: Deploy with one command. Handle thousands of concurrent calls.
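To make the Tool Calling item above concrete, here is a generic sketch of the pattern: the model requests a function by name, the agent executes it, and the result is fed back into the conversation. The names check_order_status and handle_tool_call are hypothetical and do not come from the Atoms SDK.

```python
import json

# Illustrative-only sketch of tool calling: the LLM asks for a function,
# the agent runs it, and the JSON result returns to the conversation.

def check_order_status(order_id: str) -> dict:
    """Hypothetical business function the agent can call mid-conversation."""
    return {"order_id": order_id, "status": "shipped"}

TOOLS = {"check_order_status": check_order_status}

def handle_tool_call(tool_call: dict) -> str:
    """Dispatch a tool call emitted by the LLM and return its result as JSON."""
    fn = TOOLS[tool_call["name"]]
    result = fn(**tool_call["arguments"])
    return json.dumps(result)

# Example: the model decided it needs order data before it can answer.
print(handle_tool_call({"name": "check_order_status",
                        "arguments": {"order_id": "A1234"}}))
```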

What’s Next