A Node is the basic unit of computation in the Atoms graph. Every “agent” or functional component you build is ultimately a Node.

What is a Node?

In the conceptual graph, a Node is a vertex that performs three key actions:
  • Receive: Accept incoming events like user audio, text, or system triggers.
  • Process: Execute custom Python code, business logic, or AI inference.
  • Send: Emit new events to pass control to the rest of the graph.
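The receive/process/send cycle above can be sketched in plain Python. This is a toy stand-in, not the actual smallestai implementation; the method names simply mirror the ones described in this page:

```python
import asyncio

class SketchNode:
    """Toy stand-in for a graph Node; illustrative only."""

    def __init__(self, name):
        self.name = name
        self.children = []       # downstream nodes in the graph

    async def process_event(self, event):
        # Receive + Process: default behavior is a pass-through
        await self.send_event(event)

    async def send_event(self, event):
        # Send: emit the event to every child node
        for child in self.children:
            await child.process_event(event)

class SinkNode(SketchNode):
    """Terminal node that records what it receives."""

    def __init__(self, name):
        super().__init__(name)
        self.received = []

    async def process_event(self, event):
        self.received.append(event)   # terminal node: no propagation

head, tail = SketchNode("head"), SinkNode("tail")
head.children.append(tail)
asyncio.run(head.process_event("hello"))
print(tail.received)   # -> ['hello']
```

Every node pattern in the framework is some specialization of this loop: what varies is the processing step in the middle.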

Abstracted Nodes

To help you get started quickly, we have abstracted two common node patterns for you. You can use these out of the box or build your own custom nodes from scratch.

1. The Base Node (Node)

The Node class is the raw primitive. It gives you full control but assumes nothing. It is perfect for deterministic logic, API calls, or routing decisions. Key Features:
  • Raw Event Access: You get the raw event and decide exactly what to do with it.
  • No Overhead: No LLM context or streaming logic unless you build it.
Use Case: Router, API Fetcher, Database Logger, Analytics Tracker. Override process_event() to handle incoming events.
from smallestai.atoms.agent.nodes import Node

class RouterNode(Node):
    async def process_event(self, event):
        # Deterministic routing logic
        if "sales" in event.content:
            # Forward to downstream nodes (routing logic handles filtering)
            await self.send_event(event)
        else:
            # Drop anything that isn't sales traffic: the chain ends here
            pass

2. The Output Agent (OutputAgentNode)

This is the most common node type. It is a full-featured conversational agent designed to interact with Large Language Models (LLMs). Key Features:
  • Auto-Interruption: Automatically interrupts playback when the user starts speaking.
  • Streaming: Manages the complexity of streaming LLM tokens to the user in real-time.
  • Context Management: Maintains conversation history automatically.
Use Case: The “brain” of your agent—Sales Agent, Support Agent, Triage Agent. Implement generate_response() as an async generator that yields text chunks.
from smallestai.atoms.agent.nodes import OutputAgentNode
from smallestai.atoms.agent.clients.openai import OpenAIClient

class MyAgent(OutputAgentNode):
    def __init__(self):
        super().__init__(name="my_agent")
        # Initialize your own LLM client
        self.llm = OpenAIClient(model="gpt-4o-mini")

    async def generate_response(self):
        # 1. Call your LLM
        # 2. Yield text chunks (the framework handles buffering and events)
        response = await self.llm.chat(
            messages=self.context.messages,
            stream=True
        )
        async for chunk in response:
            if chunk.content:
                yield chunk.content
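Under the hood, generate_response() is just an async generator: the framework iterates it with async for and forwards each yielded chunk. The consumption pattern can be shown self-contained, with a fake token stream standing in for a real LLM call (all names below are illustrative):

```python
import asyncio

async def fake_llm_stream():
    # Stand-in for a streaming LLM response
    for token in ["Hel", "lo ", "there"]:
        yield token

async def generate_response():
    # Same shape as the method above: yield text chunks as they arrive
    async for chunk in fake_llm_stream():
        yield chunk

async def consume():
    # This loop is roughly what the framework does with your generator
    parts = []
    async for chunk in generate_response():
        parts.append(chunk)      # buffered and emitted to the user
    return "".join(parts)

print(asyncio.run(consume()))    # -> Hello there
```

Because chunks are yielded as soon as they arrive, the user hears or sees the response while the LLM is still generating it.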

How to Write a Custom Node

1. Inherit from Node

Create a new class that inherits from Node (or OutputAgentNode).
class LoggerNode(Node):

2. Override process_event

Implement the process_event async method. This is your logic handler.
async def process_event(self, event):
    print(f"LOG: Received event type {event.type}")

3. Propagate Events

Crucial: you must manually send events if you want the flow to continue.
await self.send_event(event)

Manual Event Propagation: in a custom Node, the chain of events stops with you unless you explicitly move it forward. You MUST call await self.send_event(...) if you want the event to continue through the graph.

Custom Node Examples

"""Logs every event for debugging."""
from loguru import logger
from smallestai.atoms.agent.nodes import Node

class LoggerNode(Node):
    async def process_event(self, event):
        # Log the event
        logger.info(f"[{event.type}] {event}")
        
        # Pass it on
        await self.send_event(event)

Best Practices

Use clear, unique names for debugging. This name shows up in your logs.
# Good
super().__init__(name="sales-router")

# Bad
super().__init__(name="node1")
One node, one responsibility. If you need to filter AND log AND route, chain three small nodes together instead of building one complex node. This makes testing much easier.
Unless you are intentionally building a filter that drops events, always remember to call await self.send_event(event) at the end of your logic.
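For the intentional-filter case, here is a sketch of a node that deliberately drops some events. A stub base class stands in for the framework's Node so the snippet is self-contained; the class and attribute names are illustrative, not part of the real API:

```python
import asyncio

class StubNode:
    """Stand-in for the framework's Node, for illustration only."""

    def __init__(self, name):
        self.name = name
        self.forwarded = []           # events this node sent onward

    async def send_event(self, event):
        self.forwarded.append(event)

class KeywordFilterNode(StubNode):
    """Drops events containing blocked keywords."""

    BLOCKED = {"spam"}

    async def process_event(self, event):
        # Deliberately drop blocked events: returning without calling
        # send_event() ends the chain here, by design.
        if any(word in event for word in self.BLOCKED):
            return
        await self.send_event(event)

f = KeywordFilterNode("filter")
asyncio.run(f.process_event("hello"))
asyncio.run(f.process_event("spam offer"))
print(f.forwarded)                    # -> ['hello']
```

The key point is that dropping an event is just "return without send_event": there is no separate cancellation API to learn.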
Don’t let exceptions break the event chain.
async def process_event(self, event):
    try:
        await self.risky_operation()
    except Exception as e:
        logger.error(f"Failed: {e}")
    
    # Still propagate so the call continues
    await self.send_event(event)