The ToolRegistry discovers your tools, generates schemas for the LLM, and executes tool calls when the LLM requests them.

Setup

In __init__, create a ToolRegistry and call its discover(self) method to register every @function_tool method on the agent.
from smallestai.atoms.agent.tools import ToolRegistry, function_tool

class MyAgent(OutputAgentNode):
    def __init__(self):
        super().__init__(name="my-agent")
        self.llm = OpenAIClient(model="gpt-4o-mini")
        
        # Create registry and discover tools
        self.tool_registry = ToolRegistry()
        self.tool_registry.discover(self)
    
    @function_tool()
    def get_weather(self, location: str):
        """Get weather for a location."""
        return {"temp": 72, "conditions": "Sunny"}
discover(self) scans the agent for methods decorated with @function_tool().
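
Tools can take multiple typed parameters and defaults. Here is a minimal sketch of a richer tool on the same agent, assuming the decorator derives the parameter schema from the type hints and docstring; search_orders and its fields are hypothetical, not part of the SDK:

@function_tool()
def search_orders(self, customer_id: str, limit: int = 10):
    """Search a customer's recent orders."""
    # Illustrative stub; a real implementation would query your data store.
    return {"customer_id": customer_id, "orders": [], "limit": limit}

Because it is decorated, discover(self) registers it automatically alongside get_weather.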

Calling the LLM with Tools

Pass tool_registry.get_schemas() to give the LLM your tool definitions.
response = await self.llm.chat(
    messages=self.context.messages,
    stream=True,
    tools=self.tool_registry.get_schemas()
)
The LLM may respond with text, tool calls, or both.

Handling Tool Calls

Collect chunk.tool_calls during streaming, then run tool_registry.execute().
from typing import List
from smallestai.atoms.agent.clients.types import ToolCall

async def generate_response(self):
    response = await self.llm.chat(
        messages=self.context.messages,
        stream=True,
        tools=self.tool_registry.get_schemas()
    )
    
    tool_calls: List[ToolCall] = []
    
    # Stream response and collect tool calls
    async for chunk in response:
        if chunk.content:
            yield chunk.content
        if chunk.tool_calls:
            tool_calls.extend(chunk.tool_calls)
    
    # Execute tools if any were called
    if tool_calls:
        results = await self.tool_registry.execute(
            tool_calls=tool_calls,
            parallel=True
        )
        
        # Add to context and get final response
        # (see below)

Feeding Results Back

After executing tools, add the results to context and call the LLM again:
import json  # used below to serialize tool-call arguments

if tool_calls:
    results = await self.tool_registry.execute(tool_calls=tool_calls, parallel=True)
    
    # Add tool calls and results to context
    self.context.add_messages([
        {
            "role": "assistant",
            "content": "",
            "tool_calls": [
                {
                    "id": tc.id,
                    "type": "function",
                    "function": {
                        "name": tc.name,
                        "arguments": str(tc.arguments),
                    },
                }
                for tc in tool_calls
            ],
        },
        *[
            {"role": "tool", "tool_call_id": tc.id, "content": str(result)}
            for tc, result in zip(tool_calls, results)
        ],
    ])
    
    # Call LLM again with tool results
    final_response = await self.llm.chat(
        messages=self.context.messages,
        stream=True
    )
    
    async for chunk in final_response:
        if chunk.content:
            yield chunk.content
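
The final call above omits tools, so the model cannot request another round of calls. If you want multi-hop tool use, where one tool's result leads the LLM to make another call, you can loop instead. A sketch using the same APIs shown above, with a hypothetical MAX_TOOL_ROUNDS cap (not an SDK constant) to avoid infinite tool loops:

import json
from typing import List

from smallestai.atoms.agent.clients.types import ToolCall

MAX_TOOL_ROUNDS = 5  # hypothetical safety cap

async def generate_response(self):
    for _ in range(MAX_TOOL_ROUNDS):
        response = await self.llm.chat(
            messages=self.context.messages,
            stream=True,
            tools=self.tool_registry.get_schemas(),
        )

        tool_calls: List[ToolCall] = []
        async for chunk in response:
            if chunk.content:
                yield chunk.content
            if chunk.tool_calls:
                tool_calls.extend(chunk.tool_calls)

        if not tool_calls:
            return  # plain text answer; nothing left to execute

        results = await self.tool_registry.execute(tool_calls=tool_calls, parallel=True)

        # Same context bookkeeping as above
        self.context.add_messages([
            {
                "role": "assistant",
                "content": "",
                "tool_calls": [
                    {
                        "id": tc.id,
                        "type": "function",
                        # assumes tc.arguments is a dict
                        "function": {"name": tc.name, "arguments": json.dumps(tc.arguments)},
                    }
                    for tc in tool_calls
                ],
            },
            *[
                {"role": "tool", "tool_call_id": tc.id, "content": str(result)}
                for tc, result in zip(tool_calls, results)
            ],
        ])

Each round passes tools again, so the model can keep chaining calls until it has enough information to answer in plain text.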

ToolRegistry API

| Method | Description |
| --- | --- |
| discover(obj) | Scan an object for @function_tool methods |
| get_schemas() | Return OpenAI-format tool definitions |
| execute(tool_calls, parallel=True) | Run tool calls and return results |
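
The schemas are plain dicts, so you can inspect them directly. The shape below is the standard OpenAI function-calling format the table refers to; the exact fields generated for get_weather are an assumption, not captured SDK output:

schemas = self.tool_registry.get_schemas()
print(schemas)
# Expected shape (standard OpenAI function-calling format):
# [{
#     "type": "function",
#     "function": {
#         "name": "get_weather",
#         "description": "Get weather for a location.",
#         "parameters": {
#             "type": "object",
#             "properties": {"location": {"type": "string"}},
#             "required": ["location"],
#         },
#     },
# }]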

Parallel Execution

By default, tools run in parallel when the LLM requests more than one:
# Parallel (faster)
results = await self.tool_registry.execute(tool_calls=tool_calls, parallel=True)

# Sequential (if dependencies exist)
results = await self.tool_registry.execute(tool_calls=tool_calls, parallel=False)
> [!WARNING]
> If your tools depend on one another (for example, get_user_id() returns a value that get_user_orders(user_id) needs), parallel=True breaks because both tools run simultaneously. Use parallel=False for dependent tools.
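
A sketch of the dependent case from the warning, with stub bodies for the two tools (the implementations are illustrative):

@function_tool()
def get_user_id(self, email: str):
    """Look up a user's ID by email."""
    return {"user_id": "user-123"}  # illustrative stub

@function_tool()
def get_user_orders(self, user_id: str):
    """List orders for a user."""
    return {"orders": []}  # illustrative stub

# parallel=False runs the calls one at a time, in the order the
# LLM requested them, instead of all at once.
results = await self.tool_registry.execute(tool_calls=tool_calls, parallel=False)

In practice the LLM usually requests dependent calls across separate rounds, as in the loop sketched earlier, since it needs the first result before it can supply user_id to the second call.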

Tips

- Unless your tools depend on each other, parallel execution is faster.
- The LLM doesn't always call tools. Only run the execution code when tool_calls is non-empty.
- Print tc.name and tc.arguments before execution to debug unexpected behavior (see the snippet below).
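
For the last tip, a minimal logging pass before execution; id, name, and arguments are the ToolCall fields already used throughout this page:

# Log each requested call before running it
for tc in tool_calls:
    print(f"[tool] id={tc.id} name={tc.name} args={tc.arguments}")

results = await self.tool_registry.execute(tool_calls=tool_calls, parallel=True)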