# OpenAI Client
The `OpenAIClient` is the primary way to connect to LLMs.
## Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `model` | `str` | required | Model name (e.g., `"gpt-4o"`, `"gpt-4o-mini"`) |
| `temperature` | `float` | `0.7` | Response randomness (0–2) |
| `api_key` | `str` | env var | OpenAI API key |
| `base_url` | `str` | OpenAI | Custom endpoint for BYOM |
| `max_tokens` | `int` | `None` | Maximum response length in tokens |
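The exact constructor signature comes from the SDK; as a minimal sketch, assuming the parameters above map directly to keyword arguments, instantiation might look like the following (the class below is only a stand-in mirroring the table, not the real import):

```python
# Stand-in mirroring the parameter table above; in real code the class
# is imported from the SDK rather than defined locally.
class OpenAIClient:
    def __init__(self, model, temperature=0.7, api_key=None,
                 base_url=None, max_tokens=None):
        self.model = model              # e.g. "gpt-4o" or "gpt-4o-mini"
        self.temperature = temperature  # 0-2; higher = more random
        self.api_key = api_key          # the real SDK falls back to an env var
        self.base_url = base_url        # custom endpoint for BYOM
        self.max_tokens = max_tokens    # None = model default

client = OpenAIClient(model="gpt-4o-mini", temperature=0.5, max_tokens=512)
```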
## Using the Client
In your agent's `generate_response` method, call the LLM:
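A hedged sketch of the pattern, with a stub standing in for the real client (the `chat` method name and the agent base shape are assumptions, not confirmed SDK API):

```python
# Stub client: a real OpenAIClient would call the LLM here.
class StubOpenAIClient:
    def chat(self, messages):
        # The stub just echoes the last user message.
        return "echo: " + messages[-1]["content"]

class MyAgent:
    def __init__(self):
        self.client = StubOpenAIClient()
        # Conversation history, including the system prompt.
        self.context = [
            {"role": "system", "content": "You are a helpful voice agent."}
        ]

    def generate_response(self, user_text):
        # Append the user turn, call the LLM, record the reply in context.
        self.context.append({"role": "user", "content": user_text})
        reply = self.client.chat(self.context)
        self.context.append({"role": "assistant", "content": reply})
        return reply
```

Keeping the full history in `self.context` means every call sees the system prompt plus all prior turns.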
## Streaming vs Non-Streaming
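To make the difference concrete, here is a hedged sketch with stub functions (the real method names belong to the SDK): a non-streaming call blocks and returns the whole reply at once, while a streaming call yields chunks that can be forwarded to TTS as they arrive.

```python
def complete(prompt):
    # Non-streaming: one blocking call, one full string back.
    return "Hello, world"

def stream_complete(prompt):
    # Streaming: a generator yielding partial chunks as they arrive.
    for chunk in ["Hello", ", ", "world"]:
        yield chunk

full = complete("greet")                       # whole reply at once
streamed = "".join(stream_complete("greet"))   # assembled from chunks
```

For voice, streaming matters because TTS can start speaking the first chunk while the rest is still being generated.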
For real-time voice agents, always use streaming to minimize response latency.

## With Tool Calling
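The tool schema below follows OpenAI's function-calling format; how this SDK's client forwards it is sketched with a stub, since the exact method is not shown here.

```python
# Tool schema in OpenAI's function-calling format.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def chat_with_tools(messages, tools):
    # Stub: a real client would forward `tools` to the model, which may
    # respond with a tool call instead of plain text.
    assert tools, "expected at least one tool schema"
    return {
        "tool_call": {
            "name": tools[0]["function"]["name"],
            "arguments": {"city": "Paris"},
        }
    }

result = chat_with_tools(
    [{"role": "user", "content": "Weather in Paris?"}],
    [weather_tool],
)
```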
Pass tool schemas to the client when using function calling.

## Tips
### Use gpt-4o-mini for cost efficiency
For most conversational use cases, gpt-4o-mini provides excellent quality at a fraction of the cost.
### Lower temperature for tool-heavy agents
If your agent uses many tools, a lower temperature (0.3–0.5) improves tool-selection reliability.
### Set system prompt in context
Add a system message to `self.context` in `__init__` to define the agent's personality.

Voice and STT settings are configured at the platform level when you create or configure your agent in the Atoms dashboard, not in SDK code.
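A minimal sketch of this tip (the agent class name and the context message shape are assumptions):

```python
class SupportAgent:
    def __init__(self):
        # The system message defines the agent's personality and is sent
        # with every LLM call because it stays at the front of the history.
        self.context = [
            {
                "role": "system",
                "content": "You are a friendly support agent. Keep answers brief.",
            }
        ]

agent = SupportAgent()
```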

