`ToolRegistry` discovers your tools, generates schemas for the LLM, and executes tool calls when the LLM requests them.
## Setup
Call `ToolRegistry().discover(self)` in `__init__` to register all `@function_tool` methods.

`discover(self)` scans the agent for methods decorated with `@function_tool()`.
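A minimal setup sketch; the import path and the agent class are illustrative assumptions, not part of this API:

```python
# Import path is an assumption -- use your framework's actual package
from myframework.tools import ToolRegistry, function_tool

class WeatherAgent:
    def __init__(self):
        self.tool_registry = ToolRegistry()
        # Registers every @function_tool method defined on this agent
        self.tool_registry.discover(self)

    @function_tool()
    def get_weather(self, city: str) -> str:
        """Return the current weather for a city."""
        return f"Sunny in {city}"
```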
## Calling the LLM with Tools
Pass `tool_registry.get_schemas()` to give the LLM your tool definitions.
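For example, with an OpenAI-style chat client (the client, model name, and messages here are assumptions; only `get_schemas()` comes from this API):

```python
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "What's the weather in Oslo?"}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=agent.tool_registry.get_schemas(),  # OpenAI-format tool definitions
)
```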
## Handling Tool Calls
Collect `chunk.tool_calls` during streaming, then run `tool_registry.execute()`.
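A sketch of the streaming loop. `llm.stream(...)` is a stand-in for whatever streaming call your client exposes; only `chunk.tool_calls` and `execute()` come from this API:

```python
tool_calls = []
# llm.stream(...) is a placeholder for your client's streaming call
for chunk in llm.stream(messages, tools=tool_registry.get_schemas()):
    if chunk.tool_calls:
        tool_calls.extend(chunk.tool_calls)

if tool_calls:
    results = tool_registry.execute(tool_calls)
```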
## Feeding Results Back
After executing tools, add the results to the conversation context and call the LLM again.
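A sketch of the round trip. The `tool`-role message shape follows the OpenAI convention; `tc.id` and `str(result)` are assumptions about this API's objects:

```python
results = tool_registry.execute(tool_calls)

# (With the OpenAI API you would also append the assistant message
# that contained the tool calls before the tool results.)
for tc, result in zip(tool_calls, results):
    messages.append({
        "role": "tool",
        "tool_call_id": tc.id,    # assumes each tool call carries an id
        "content": str(result),   # assumes results stringify cleanly
    })

# Second call: the LLM now sees the tool outputs and can answer
final = client.chat.completions.create(model="gpt-4o", messages=messages)
```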
## ToolRegistry API

| Method | Description |
|---|---|
| `discover(obj)` | Scan an object for `@function_tool` methods |
| `get_schemas()` | Return OpenAI-format tool definitions |
| `execute(tool_calls, parallel=True)` | Run tool calls and return their results |
## Parallel Execution
By default, `execute()` runs tools in parallel when the LLM requests multiple.

> [!WARNING]
> If your tools have dependencies, e.g. `get_user_id()` returns a value needed by `get_user_orders(user_id)`, using `parallel=True` will break because both tools run simultaneously. Use `parallel=False` for dependent tools.
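A sketch of both modes:

```python
# Independent tools: the default parallel mode is safe and faster
results = tool_registry.execute(tool_calls, parallel=True)

# Dependent tools (get_user_orders needs get_user_id's result):
# run sequentially so each call finishes before the next starts
results = tool_registry.execute(tool_calls, parallel=False)
```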
## Tips
### Always use `parallel=True`

Unless your tools have dependencies on each other, parallel execution is faster.
### Check for `tool_calls` before executing

The LLM doesn't always call tools. Only run the execution code if `tool_calls` is non-empty.
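For example:

```python
if tool_calls:  # the LLM may answer directly, with no tool calls at all
    results = tool_registry.execute(tool_calls)
```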
### Log tool calls for debugging

Print `tc.name` and `tc.arguments` before execution to debug unexpected behavior.
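For example:

```python
for tc in tool_calls:
    print(f"[tool] {tc.name}({tc.arguments})")  # inspect before executing
results = tool_registry.execute(tool_calls)
```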
