This page covers common issues and how to fix them.

Agent Not Responding

Symptoms: Agent connects but does not respond to messages. Possible causes and fixes:
  • generate_response not implemented → add the generate_response method to your agent class
  • Not yielding content → make sure you yield text chunks in generate_response
  • LLM call failing silently → add error logging around LLM calls
  • Event not reaching the agent → check graph connections with add_edge
Check your implementation:
async def generate_response(self):
    # This must yield strings
    yield "Hello!"  # Not return, yield
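The yield-versus-return mistake can be caught programmatically with `inspect.isasyncgenfunction` from the standard library (independent of the SDK), which distinguishes an async generator from a plain coroutine. A minimal sketch:

```python
import inspect

async def returns_text():
    # Wrong: `return` makes this a plain coroutine; the framework
    # receives a single value it never iterates, so nothing streams.
    return "Hello!"

async def yields_text():
    # Right: `yield` makes this an async generator the framework
    # can consume chunk by chunk.
    yield "Hello!"

# Quick sanity check you can run against your own agent method:
assert not inspect.isasyncgenfunction(returns_text)
assert inspect.isasyncgenfunction(yields_text)
```

Run the same check on `YourAgent.generate_response` to confirm it is an async generator before debugging anything else.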

Tool Not Being Called

Symptoms: LLM never calls your tool even when it should. Possible causes and fixes:
  • Tool not discovered → call tool_registry.discover(self)
  • Schemas not passed to the LLM → add tools=self.tool_schemas to the chat call
  • Poor docstring → write a clear, descriptive docstring
  • Tool name too vague → use a specific, descriptive name
Check your setup:
def __init__(self):
    super().__init__(name="my-agent")
    
    self.tool_registry = ToolRegistry()
    self.tool_registry.discover(self)  # Must call this
    self.tool_schemas = self.tool_registry.get_schemas()  # Must get schemas

async def generate_response(self):
    response = await self.llm.chat(
        messages=self.context.messages,
        tools=self.tool_schemas,  # Must pass schemas
        stream=True
    )
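Assuming the registry derives tool schemas from method names and docstrings (as most tool-calling frameworks do), the docstring is effectively your prompt to the LLM. A sketch contrasting a vague tool with a descriptive one; `get_order_status` is a hypothetical example method, not an SDK API:

```python
# Vague: the LLM has almost nothing to decide with.
def lookup(self, q: str) -> str:
    """Look something up."""
    ...

# Specific: a clear name, a docstring that says when to use the
# tool, and a described parameter.
def get_order_status(self, order_id: str) -> str:
    """Return the current shipping status for a customer's order.

    Use this whenever the user asks where their order is or
    whether it has shipped.

    Args:
        order_id: The alphanumeric order ID, e.g. "ORD-12345".
    """
    ...
```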

Audio Not Playing

Symptoms: Agent responds in logs but user hears nothing. Possible causes and fixes:
  • Server not running → ensure python agent.py is running
  • Wrong port → check the server is on the expected port (default 8080)
  • TTS configuration issue → check voice settings in the dashboard
  • Empty responses → ensure generate_response yields non-empty strings

Connection Errors

Symptoms: CLI cannot connect to agent. Fixes:
  1. Check server is running:
    python agent.py
    
  2. Verify port:
    # Should show your agent process
    lsof -i :8080
    
  3. Check for port conflicts:
    app = AtomsApp(setup_handler=setup, port=8081)  # Use different port
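As a cross-platform alternative to lsof, a short standard-library probe can check whether anything is listening on the agent's port. A sketch; adjust host and port for your setup:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: is anything listening where the agent should be?
# port_open("127.0.0.1", 8080)
```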
    

LLM Errors

Symptoms: LLM calls fail with errors. Common errors and fixes:
  • 401 Unauthorized (invalid API key) → check the OPENAI_API_KEY environment variable
  • 429 Rate Limited (too many requests) → add retry logic or reduce call frequency
  • 500 Server Error (provider issue) → implement a fallback to another provider
  • Timeout (slow response) → increase the timeout or use a faster model
Add error handling:
async def generate_response(self):
    try:
        response = await self.llm.chat(
            messages=self.context.messages,
            stream=True
        )
        async for chunk in response:
            if chunk.content:
                yield chunk.content
                
    except Exception as e:
        logger.error(f"LLM error: {e}")
        yield "I'm having trouble connecting. One moment."
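For the 429 and timeout cases, a generic retry wrapper with exponential backoff can sit around any LLM call. This is a standard-library sketch; `with_retries` is a hypothetical helper, not an SDK API:

```python
import asyncio
import logging
import random

logger = logging.getLogger(__name__)

async def with_retries(call, max_attempts: int = 3, base_delay: float = 0.5):
    """Retry a zero-argument async callable with exponential
    backoff plus a little jitter; re-raise after the last attempt."""
    for attempt in range(1, max_attempts + 1):
        try:
            return await call()
        except Exception as exc:
            if attempt == max_attempts:
                raise
            delay = base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1)
            logger.warning(
                "LLM call failed (attempt %d/%d): %s; retrying in %.2fs",
                attempt, max_attempts, exc, delay,
            )
            await asyncio.sleep(delay)
```

Usage would look like `response = await with_retries(lambda: self.llm.chat(messages=..., stream=True))`.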

Session Ending Early

Symptoms: Conversation ends unexpectedly. Possible causes:
  • Exception in generate_response → add try/except around your code
  • Missing wait_until_complete → ensure setup awaits session.wait_until_complete()
  • Tool raising an exception → wrap tool logic in try/except
Proper setup pattern:
async def setup(session: AgentSession):
    agent = MyAgent()
    session.add_node(agent)
    
    await session.start()
    await session.wait_until_complete()  # Do not forget this
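To keep one failing tool from ending the whole session, one option is a decorator that converts exceptions into an error string the LLM can relay to the user. A sketch; `safe_tool` is a hypothetical helper, not part of the SDK:

```python
import functools
import logging

logger = logging.getLogger(__name__)

def safe_tool(fn):
    """Wrap a tool so exceptions are logged and returned as an
    error string instead of propagating and killing the session."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except Exception as exc:
            logger.exception("Tool %s failed", fn.__name__)
            return f"Error: {fn.__name__} failed ({exc})"
    return wrapper
```

Returning an error string lets the LLM apologize and continue rather than leaving the user in silence.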

Slow Response Times

Symptoms: Agent takes too long to respond. Optimization strategies:
  • Use a faster model → switch to gpt-4o-mini or claude-3-haiku
  • Enable streaming → always use stream=True
  • Reduce context → limit conversation history length
  • Parallel tool calls → use parallel=True in registry.execute
  • Shorter prompts → trim the system prompt to essentials
Limit context length:
async def generate_response(self):
    # Keep only last 10 messages
    messages = self.context.messages[-10:]
    
    response = await self.llm.chat(
        messages=messages,
        stream=True
    )
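If the registry's parallel option is not available in your setup, independent async tool calls can also run concurrently with plain `asyncio.gather`. A self-contained sketch with stand-in calls (the fetch functions are hypothetical):

```python
import asyncio

async def fetch_weather(city: str) -> str:
    await asyncio.sleep(0.05)  # stand-in for a real network call
    return f"Sunny in {city}"

async def fetch_news(topic: str) -> str:
    await asyncio.sleep(0.05)  # stand-in for a real network call
    return f"Headlines about {topic}"

async def answer():
    # Independent calls run concurrently: total latency is roughly
    # the slowest single call, not the sum of all calls.
    weather, news = await asyncio.gather(
        fetch_weather("Paris"),
        fetch_news("AI"),
    )
    return weather, news
```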

Memory Issues

Symptoms: Agent uses too much memory or crashes. Fixes:
  1. Clear old context periodically:
    if len(self.context.messages) > 50:
        self.context.messages = self.context.messages[-20:]
    
  2. Do not store large data in instance variables
  3. Clean up resources in stop:
    async def stop(self):
        self.cached_data = None
        await super().stop()
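Note that naive slicing like messages[-20:] can silently drop the system prompt. A small helper that preserves it, assuming OpenAI-style message dicts with a "role" key (an assumption; adapt to your message type):

```python
def trim_messages(messages: list, keep: int = 20) -> list:
    """Keep any system messages plus the `keep` most recent
    non-system messages. Assumes dicts with a "role" key."""
    system = [m for m in messages if m.get("role") == "system"]
    rest = [m for m in messages if m.get("role") != "system"]
    return system + rest[-keep:]
```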
    

Import Errors

Symptoms: Module not found or import errors. Fixes:
  1. Install the package:
    pip install smallestai
    
  2. Check Python version (requires 3.10+):
    python --version
    
  3. Check virtual environment is active:
    source venv/bin/activate
    

Getting Help

If you cannot resolve an issue:
  1. Check the Discord for community help
  2. Search existing GitHub issues
  3. Open a new issue with:
    • Python version
    • SDK version (pip show smallestai)
    • Minimal code to reproduce
    • Full error traceback

Next Steps