To help you get started quickly, we have abstracted three common node patterns for you. You can use these out of the box or build your own custom nodes from scratch.
The Node class is the raw primitive. It gives you full control but assumes nothing. It is perfect for deterministic logic, API calls, or routing decisions.
Key Features:
Raw Event Access: You get the raw event and decide exactly what to do with it.
No Overhead: No LLM context or streaming logic unless you build it.
Use Case: Router, API Fetcher, Database Logger, Analytics Tracker.
Override process_event() to handle incoming events.
from smallestai.atoms.agent.nodes import Node

class RouterNode(Node):
    async def process_event(self, event):
        # Deterministic logic
        if "sales" in event.content:
            # Broadcast to children (routing logic handles filtering)
            await self.send_event(event)
        else:
            await self.send_event(event)
The OutputAgentNode is the most common node type. It is a full-featured conversational agent designed to interact with Large Language Models (LLMs).
Key Features:
Auto-Interruption: Automatically handles user interruptions during playback, triggered only when the user is actually speaking.
Streaming: Manages the complexity of streaming LLM tokens to the user in real-time.
Context Management: Maintains conversation history automatically.
Use Case: The “brain” of your agent, such as a Sales Agent, Support Agent, or Triage Agent.
Implement generate_response() as an async generator that yields text chunks.
from smallestai.atoms.agent.nodes import OutputAgentNode
from smallestai.atoms.agent.clients.openai import OpenAIClient

class MyAgent(OutputAgentNode):
    def __init__(self):
        super().__init__(name="my_agent")
        # Initialize your own LLM client
        self.llm = OpenAIClient(model="gpt-4o-mini")

    async def generate_response(self):
        # 1. Call your LLM
        # 2. Yield text chunks (the framework handles buffering and events)
        response = await self.llm.chat(
            messages=self.context.messages,
            stream=True
        )
        async for chunk in response:
            if chunk.content:
                yield chunk.content
1. Subclass Node
Create a new class that inherits from Node (or OutputAgentNode).
from smallestai.atoms.agent.nodes import Node

class LoggerNode(Node):
    ...
2. Override process_event
Implement the process_event async method. This is your logic handler.
async def process_event(self, event):
    print(f"LOG: Received event type {event.type}")
3. Propagate Events
Crucial: You must manually send events if you want the flow to continue.
await self.send_event(event)
Manual Event Propagation
In a custom Node, the chain of events stops with you unless you explicitly move it forward. You MUST call await self.send_event(...) if you want the event to continue flowing through the graph.
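Putting the three steps together, a minimal pass-through logger might look like the sketch below (the logging setup is illustrative):

import logging

from smallestai.atoms.agent.nodes import Node

logger = logging.getLogger(__name__)

class LoggerNode(Node):
    async def process_event(self, event):
        # Step 2: your logic handler
        logger.info(f"LOG: Received event type {event.type}")
        # Step 3: explicitly forward the event so downstream nodes see it
        await self.send_event(event)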
One node, one responsibility. If you need to filter AND log AND route, chain three small nodes together instead of building one complex node. This makes testing much easier.
Always propagate events
Unless you are intentionally building a filter that drops events, always remember to call await self.send_event(event) at the end of your logic.
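When dropping events is the whole point of the node, simply return without sending. A minimal sketch (the class name and keyword check are illustrative):

from smallestai.atoms.agent.nodes import Node

class SalesOnlyFilterNode(Node):
    async def process_event(self, event):
        # Intentional filter: anything without "sales" is dropped here
        if "sales" not in event.content:
            return
        # Everything else keeps flowing through the graph
        await self.send_event(event)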
Handle errors gracefully
Don’t let exceptions break the event chain.
async def process_event(self, event):
    try:
        await self.risky_operation()
    except Exception as e:
        logger.error(f"Failed: {e}")
    # Still propagate so the call continues
    await self.send_event(event)