Prerequisites
An OpenAI API key is required. Set it as an environment variable before running your agent:
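For example, in a POSIX shell (the key value below is a placeholder):

```bash
export OPENAI_API_KEY="sk-..."  # replace with your actual OpenAI API key
```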
Installation
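Assuming the SDK is distributed on PyPI; the package name below is a placeholder, so substitute the name given in the project's documentation:

```bash
pip install atoms-agents  # placeholder package name; use the project's actual package name
```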
Write Your First Agent
Create two files: one for the agent logic, and one to run the application.

1. Create my_agent.py

Subclass OutputAgentNode and implement generate_response() to stream LLM output.
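A minimal sketch of my_agent.py. Only OutputAgentNode and generate_response() are named by this guide; the import path, the method signature, and the chunk-yielding style below are assumptions, and the LLM call uses the standard openai streaming client:

```python
# my_agent.py
from openai import AsyncOpenAI

# Assumed import path for the SDK's base class; adjust to match the installed package.
from atoms_agents import OutputAgentNode

client = AsyncOpenAI()  # picks up OPENAI_API_KEY from the environment


class MyAgent(OutputAgentNode):
    async def generate_response(self, messages):
        # Assumed signature: receives the conversation so far as chat messages
        # and yields text chunks as they arrive from the LLM.
        stream = await client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            stream=True,
        )
        async for chunk in stream:
            delta = chunk.choices[0].delta.content
            if delta:
                yield delta
```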
2. Create main.py

Wire up AtomsApp with a setup_handler that adds your agent to the session.
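A minimal sketch of main.py. AtomsApp and setup_handler are named by this guide; how the handler is attached, how the agent is registered on the session, and the run() entry point are assumptions:

```python
# main.py
from atoms_agents import AtomsApp  # assumed import path

from my_agent import MyAgent


async def setup(session):
    # Register the agent on the session; the exact method name is an assumption.
    session.add_agent(MyAgent())


# Passing the handler as a constructor argument is an assumption;
# the SDK may use a decorator or another registration style instead.
app = AtomsApp(setup_handler=setup)

if __name__ == "__main__":
    app.run()  # assumed entry point; starts the local WebSocket server described below
```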
Run Your Agent
Once your files are ready, you have two options:

- Run Locally
- Deploy to Platform

Run Locally

No account or deployment needed. For development and testing, run the file directly:
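Assuming the entry point is the main.py from step 2:

```bash
python main.py
```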
This starts a WebSocket server on localhost:8080. In a separate terminal, connect to it:
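One generic way to connect, assuming the server accepts plain WebSocket connections (the SDK may also ship its own client command, which would be the better choice); the URL path and message format below are assumptions:

```python
# connect_test.py -- generic smoke test, not the SDK's own client
import asyncio

import websockets


async def main():
    # Assumes the agent's server accepts a plain WebSocket connection at this URL.
    async with websockets.connect("ws://localhost:8080") as ws:
        await ws.send("Hello, agent!")  # the expected message format is an assumption
        print(await ws.recv())          # print whatever the agent sends back


asyncio.run(main())
```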
