A Single Prompt agent runs on one set of instructions. You write a prompt that defines who the agent is, what it knows, and how it should behave — and that prompt governs the entire conversation. The AI interprets your instructions and applies them dynamically, adapting to whatever direction the caller takes.

How It Works

Think of your prompt as a briefing. You’re telling the agent: here’s your role, here’s what you know, here’s how to handle situations. The AI internalizes this once and then uses that understanding for every exchange. When a caller speaks, the agent doesn’t follow a script. It reasons through the conversation based on your instructions. This is why Single Prompt agents feel natural — they’re not jumping between pre-written responses, they’re thinking through each moment.
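
To ground this, here is a minimal sketch of that loop, assuming a generic chat-style completion API. `call_model` is a placeholder for whatever model endpoint powers the agent, not this platform's SDK, and the example prompt is invented. The same system prompt is sent on every turn along with the growing conversation history, which is what lets the agent reason from your instructions instead of following a script.

```python
# Minimal sketch of a single-prompt turn loop (illustrative only).
SYSTEM_PROMPT = """You are Ava, a support agent for Acme Plumbing.
You know our service areas, pricing, and booking policy.
Be warm and concise. Transfer billing disputes to a human."""

def call_model(messages: list[dict]) -> str:
    """Placeholder for a chat-completion call; returns the agent's next reply."""
    raise NotImplementedError

def run_turn(history: list[dict], caller_utterance: str) -> str:
    """Handle one caller turn: record it, reason over prompt + history, reply."""
    history.append({"role": "user", "content": caller_utterance})
    # The full prompt rides along on every turn; nothing is scripted per step.
    messages = [{"role": "system", "content": SYSTEM_PROMPT}] + history
    reply = call_model(messages)
    history.append({"role": "assistant", "content": reply})
    return reply
```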

Capabilities

• Dynamic tool usage. You can connect your agent to APIs, databases, and external services. The agent decides when to use them based on the conversation. If a caller asks about their order, the agent can look it up. If they want to book something, it can check availability.
• Conversation memory. Everything said in the call stays in context. The agent remembers details from earlier in the conversation and can reference them naturally.
• Handling the unexpected. Without a rigid flow, the agent adapts to topic changes, follow-up questions, and tangents. Real conversations rarely follow a straight line — Single Prompt agents are designed for that reality.
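
For a sense of what connecting a tool can look like, here is a hedged sketch of a tool description in the JSON-schema style many function-calling models use. The `lookup_order` name, fields, and schema shape are hypothetical, not this platform's actual format. You declare what the tool does and when to reach for it, and the model decides mid-conversation whether to call it.

```python
# Hypothetical tool definition (illustrative; check your platform's actual schema).
lookup_order_tool = {
    "name": "lookup_order",
    "description": (
        "Fetch the status of a customer's order. "
        "Use this when the caller asks where their order is or what it contains."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {
                "type": "string",
                "description": "The caller's order number.",
            }
        },
        "required": ["order_id"],
    },
}
```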

Building a Single Prompt Agent

You’ll create three things:
1. The Prompt. This is the core. Your prompt should cover (see the sketch after this list):
  • Identity — Who is this agent? What’s their name, role, personality?
  • Knowledge — What do they know? Products, policies, FAQs, context.
  • Behavior — How should they sound? What’s off-limits? How do they handle edge cases?
  • Endings — When should the call wrap up? When should they transfer?
2. Tools (optional). If you want the agent to take actions — look up records, check calendars, create tickets — you’ll configure the tools it can access and describe when to use them.
3. Voice and Model. Pick the voice your agent speaks with and the AI model that powers its reasoning.
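
Here is the prompt sketch referenced in item 1: a hypothetical skeleton that covers Identity, Knowledge, Behavior, and Endings. The business, agent name, and wording are invented for illustration; organize your own prompt however reads best for your use case.

```python
# Hypothetical prompt skeleton covering the four areas (wording is illustrative).
PROMPT = """\
Identity:
You are Maya, the scheduling assistant for Brightsmile Dental. Friendly, efficient, never pushy.

Knowledge:
We offer cleanings, whitening, and emergency visits. Office hours are Mon-Fri, 8am-5pm.
New patients should arrive 15 minutes early for paperwork.

Behavior:
Keep answers short; this is a phone call. Never give medical advice.
If you don't know something, say so and offer a follow-up from the office.

Endings:
Once a booking is confirmed, recap the date and time, then end the call.
Transfer to the front desk for billing questions or complaints.
"""
```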

The Editor

Once you create a Single Prompt agent, you land in the editor — your workspace for everything.
[Screenshot: Single Prompt editor]
| Area | Location | What It Does |
| --- | --- | --- |
| Prompt Section | Top bar | Model, voice, and language selection |
| Prompt Editor | Center | Where you write your agent’s instructions |
| Config Panel | Right sidebar | End call, transfer, knowledge base, variables, APIs |
| Navigation | Left sidebar | Switch between Prompt, Settings, Widget, Integrations |
For more detail, see the full editor guide.

After You Launch

Once your agent is live, refinement happens in a few places:
• Prompt updates. You’ll review call logs, find where the agent struggled, and tighten your instructions. Most improvements come from prompt iteration.
• Voice tuning. Adjust speech speed, add pronunciation rules for tricky words, tweak turn-taking behavior.
• Tool adjustments. Add new capabilities, modify API connections, or change when tools get triggered.
• Configuration. Fine-tune end call conditions, transfer settings, timeout behavior, and more.
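
As a rough illustration of the kinds of knobs this covers, here is a hypothetical settings sketch. Every field name and value below is invented for illustration and will differ from the platform's actual configuration options.

```python
# Hypothetical post-launch tuning values (all field names are illustrative).
agent_settings = {
    "voice": {
        "speed": 1.05,  # speak slightly faster than the default
        "pronunciations": {"Brightsmile": "BRIGHT-smile"},  # fix a tricky word
    },
    "turn_taking": {
        "end_of_speech_silence_ms": 700,  # wait a bit longer before replying
    },
    "call": {
        "max_idle_seconds": 20,  # end the call after prolonged silence
        "transfer_number": "+1-555-0100",  # hypothetical front-desk fallback
    },
}
```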

Get Started

| Method | Description |
| --- | --- |
| Start from scratch | Blank canvas. Full control over every setting. |
| Start with Template | Pre-built prompts for common use cases. Customize from there. |
| Create with AI | Describe what you want, and AI generates the prompt for you. |