This guide walks you through installing the SDK, writing your first intelligent agent, and running it.

Prerequisites

An OpenAI API key is required. Set it as an environment variable before running your agent:
export OPENAI_API_KEY="your-key-here"
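If you want the agent to fail fast when the key is missing, a minimal startup check (standard library only) looks like this:

import os
import sys

# exit early with a clear message instead of failing mid-request
if not os.getenv("OPENAI_API_KEY"):
    sys.exit("OPENAI_API_KEY is not set; export it before running your agent.")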

Installation

pip install smallestai
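To verify the install:
pip show smallestai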

Write Your First Agent

Create two files: one for the agent logic, and one to run the application.
1. Create my_agent.py

Subclass OutputAgentNode and implement generate_response() to stream LLM output. (An optional standalone check of the streaming client appears after step 2.)
my_agent.py
import os
from smallestai.atoms.agent.nodes import OutputAgentNode
from smallestai.atoms.agent.clients.openai import OpenAIClient

class MyAgent(OutputAgentNode):
    def __init__(self):
        super().__init__(name="my-agent")
        self.llm = OpenAIClient(
            model="gpt-4o-mini",
            api_key=os.getenv("OPENAI_API_KEY")
        )

    async def generate_response(self):
        # chat() returns an awaitable that yields streamed chunks; emit text as it arrives
        async for chunk in await self.llm.chat(self.context.messages, stream=True):
            if chunk.content:
                yield chunk.content
2. Create main.py

Wire up AtomsApp with a setup_handler that adds your agent to the session.
main.py
from smallestai.atoms.agent.server import AtomsApp
from smallestai.atoms.agent.session import AgentSession
from my_agent import MyAgent

async def on_start(session: AgentSession):
    session.add_node(MyAgent())          # attach your agent to the new session
    await session.start()
    await session.wait_until_complete()  # keep serving until the session ends

if __name__ == "__main__":
    app = AtomsApp(setup_handler=on_start)
    app.run()
Your entry point can be named anything (app.py, run.py, etc.). When deploying, specify it with --entry-point your_file.py.
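Before running the full app, you can sanity-check the streaming client on its own. The sketch below is optional and reuses chat() exactly as generate_response() does above; the role/content dictionary message format is an assumption, as is the check_llm.py filename.
check_llm.py
import asyncio
import os
from smallestai.atoms.agent.clients.openai import OpenAIClient

async def main():
    llm = OpenAIClient(
        model="gpt-4o-mini",
        api_key=os.getenv("OPENAI_API_KEY")
    )
    # assumption: chat() accepts role/content dicts, mirroring the agent's context messages
    async for chunk in await llm.chat([{"role": "user", "content": "Say hello."}], stream=True):
        if chunk.content:
            print(chunk.content, end="", flush=True)
    print()

if __name__ == "__main__":
    asyncio.run(main())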

Run Your Agent

Once your files are ready, run the agent locally for development and testing:
python main.py
This starts a WebSocket server on localhost:8080. In a separate terminal, connect to it:
smallestai agent chat
No account or deployment needed.
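If you want to confirm the server is listening without the CLI, a bare connection check works. This sketch uses the third-party websockets package and only verifies that the socket opens; it does not speak the agent's message protocol, which smallestai agent chat handles for you. The ws_check.py filename is illustrative.
ws_check.py
import asyncio
import websockets  # pip install websockets

async def main():
    # only checks that the agent's WebSocket server accepts a connection
    async with websockets.connect("ws://localhost:8080") as ws:
        print("connected to", ws.remote_address)

if __name__ == "__main__":
    asyncio.run(main())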

What’s Next?