Model selection determines which Large Language Model (LLM) processes conversations. Different models offer different capabilities, speeds, and costs.
[IMAGE: Model dropdown expanded showing options]

Location

Prompt Section (top bar) → First dropdown

Available Model Types

Type                  Description
Emotive Models        Speech-to-speech with emotional tone and expression (Beta)
Traditional Models    Standard LLM models for text processing

Available Models

Model      Provider   Best For
GPT-4o     OpenAI     Complex conversations, nuanced understanding
Electron   Atoms      Speed and efficiency
NEEDS PLATFORM INFO: Complete list of available models

How to Select

  1. Click the model dropdown in the Prompt Section
  2. Browse available options
  3. Click to select your choice
The change applies immediately.
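Selection happens entirely in the UI, but the effect amounts to a lookup against the list of offered models. A minimal sketch of that idea (the model entries mirror the table above; `select_model` and `AVAILABLE_MODELS` are purely illustrative names, not a platform API):

```python
# Illustrative only: simulates validating a dropdown selection.
# The entries mirror the "Available Models" table; this is not a live API.
AVAILABLE_MODELS = {
    "GPT-4o": {"provider": "OpenAI", "type": "traditional"},
    "Electron": {"provider": "Atoms", "type": "traditional"},
}

def select_model(name: str) -> dict:
    """Return the model's metadata, or raise if it is not offered."""
    try:
        return AVAILABLE_MODELS[name]
    except KeyError:
        raise ValueError(f"Unknown model: {name!r}") from None

print(select_model("GPT-4o")["provider"])  # OpenAI
```

The point of the lookup is the failure mode: a selection outside the offered list is rejected immediately rather than silently falling back to a default.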

Choosing the Right Model

GPT-4o

  • Best understanding and reasoning
  • Handles complex queries well
  • Slightly higher latency than smaller models
  • Recommended for support, sales, complex workflows

Emotive Models (Beta)

  • Natural emotional expression
  • More human-like responses
  • Good for conversations requiring empathy
  • Beta — may have quirks

Electron Models

  • Optimized for speed
  • Lower latency
  • Good for simple, high-volume use cases
  • Cost-effective

Considerations

Latency

Larger models take longer to respond. For voice AI, every millisecond matters:
  • Simple queries → smaller models may be sufficient
  • Complex reasoning → larger models worth the tradeoff
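When weighing that tradeoff, compare tail latency rather than the average, since a few slow responses dominate the caller's experience. A small helper for that, using hypothetical sample timings (measure your own from real calls):

```python
import math

def p95(samples: list[float]) -> float:
    """95th-percentile latency (seconds) via the nearest-rank method."""
    ordered = sorted(samples)
    rank = max(0, math.ceil(0.95 * len(ordered)) - 1)
    return ordered[rank]

# Hypothetical response times from test calls, in seconds.
larger_model = [0.8, 0.9, 1.1, 1.2, 2.5]
print(p95(larger_model))  # 2.5
```

A model whose mean latency looks acceptable can still have a p95 that feels sluggish on a live call, which is why the "Best Practices" below recommend comparing latency in real calls.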

Cost

Different models have different per-minute costs:
  • Check your plan details for pricing
  • Higher-capability models typically cost more
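Per-minute pricing makes cost projection simple arithmetic. A sketch with hypothetical rates (check your plan for the real figures):

```python
def monthly_cost(rate_per_minute: float, avg_call_minutes: float,
                 calls_per_month: int) -> float:
    """Estimated monthly spend for one model at a per-minute rate."""
    return rate_per_minute * avg_call_minutes * calls_per_month

# Hypothetical rates -- not actual plan pricing.
cheap = monthly_cost(0.05, 3.0, 1000)
premium = monthly_cost(0.15, 3.0, 1000)
print(f"premium costs {premium - cheap:.2f} more per month")
```

Running the comparison against your own volume is usually enough to decide whether a higher-capability model's accuracy gains justify its rate.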

Accuracy

More capable models generally:
  • Follow instructions better
  • Handle edge cases more gracefully
  • Provide more nuanced responses

Best Practices

  • Start with GPT-4o for most use cases
  • Test thoroughly when changing models
  • Compare latency in real calls
  • Review conversation logs to verify quality

What’s Next