What is the Playground?
The Playground is a testing environment in the Advanced Editor where you can interact with your agent in real time. It allows you to simulate conversations, test different scenarios, and validate your agent’s behavior before deploying to production.

Testing in the Playground saves time and money by catching issues before they affect real customer calls.
Accessing the Playground
The Playground is located in the right panel of the Advanced Editor. Click the “Playground” tab to access it.

Call Types
You can simulate two types of calls:

| Type | Description | Use Case |
|---|---|---|
| 📞 Inbound | Customer calling your agent | Test welcome flow, support scenarios |
| 📤 Outbound | Agent calling a customer | Test introductions, sales scripts |
Sending Messages
Text Input
Type messages in the input field and press Enter or click Send. The agent will respond as it would in a real call.

Voice Input
Click the microphone icon to speak your message. The agent will:
- Transcribe your speech
- Process the message
- Respond with synthesized voice
Voice testing helps verify speech recognition accuracy and voice synthesis quality.
Conversation View
The Playground displays messages chronologically:

| Element | Description |
|---|---|
| 👤 User Message | Your inputs (text or transcribed voice) |
| 🤖 Agent Message | Agent responses |
| 🔧 Tool Call | When agent triggers a webhook or action |
| ✅ Tool Result | Response from webhook or action |
Audio Playback
For each agent response, you can:
- ▶️ Play the audio response
- ⏸️ Pause playback
- 🔁 Replay messages
Context Variables
Test how your agent handles different input parameters.

Example Context
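The exact variable names depend on your agent's configuration; the keys below are hypothetical, but a context payload might look like this:

```json
{
  "customer_name": "Jane Doe",
  "account_status": "premium",
  "callback_number": "+1-555-0100"
}
```

With variables like these set, you can confirm that greetings and other personalized responses pick up the right values.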
Reliability Test
Run multiple iterations of the same conversation to test consistency.

Reliability tests help identify edge cases where the agent might behave inconsistently.
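The idea behind a reliability test can be sketched in a few lines of Python: re-run an identical scripted conversation, record each outcome, and measure how often the majority behavior occurs. This is an illustration of the concept, not a built-in API; `summarize_reliability` is a hypothetical helper.

```python
from collections import Counter

def summarize_reliability(outcomes):
    """Summarize outcomes from repeated runs of the same conversation.

    `outcomes` is a list of labels (e.g. "pass"/"fail"), one per
    iteration of the same scripted conversation. Returns the most
    common outcome and the fraction of runs that produced it.
    """
    counts = Counter(outcomes)
    outcome, hits = counts.most_common(1)[0]
    return outcome, hits / len(outcomes)

# Example: 10 iterations of the same scripted call
runs = ["pass"] * 9 + ["fail"]
majority, rate = summarize_reliability(runs)
print(majority, rate)  # pass 0.9
```

A consistency rate well below 1.0 is a signal that the agent's behavior depends on something unstable, such as ambiguous prompt instructions.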
Testing Features
Test Tool Calls
When your agent triggers a webhook:
- The tool call appears in the conversation
- You can see the parameters being sent
- The response (or mock response) is displayed
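The exact shape depends on your webhook, but a tool call and its result displayed in the conversation might look something like this (the tool name, parameters, and fields here are made up for illustration):

```json
{
  "tool": "lookup_order",
  "parameters": {
    "order_id": "A-1042"
  },
  "result": {
    "status": "shipped",
    "eta": "2024-06-12"
  }
}
```

Inspecting the parameters lets you confirm the agent extracted the right values from the conversation before the webhook fired.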
Test Evaluations
After a conversation, you can run evaluations to verify that they work correctly:
- End the conversation naturally
- Click “Run Evaluations”
- Review which evaluations pass or fail
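As a hypothetical illustration (not the product's actual output format), an evaluation review might surface results along these lines:

```json
[
  { "evaluation": "Collected callback number", "result": "pass" },
  { "evaluation": "Stayed on script", "result": "fail", "reason": "Agent skipped the disclosure step" }
]
```

Failing evaluations in the Playground point to either an agent issue or an evaluation criterion that needs rewording.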
Test Knowledge Base
Verify your agent can find and use knowledge base content:
- Ask questions that require KB lookup
- Observe if the agent retrieves correct information
- Check if responses are accurate and relevant
Best Practices
Test Edge Cases
Try unusual inputs, interruptions, and unexpected responses to see how your agent handles them.
Vary Context
Test with different input parameter combinations to ensure personalization works correctly.
Test Both Call Types
Always test both inbound and outbound scenarios.
Check Tool Integrations
Verify webhooks are called with correct parameters.
Run Reliability Tests
Periodically run multi-iteration tests to ensure consistency.
Common Testing Scenarios
Happy Path
Test the ideal conversation flow where everything goes as expected.

Error Handling
Test what happens when:
- User provides invalid information
- Webhooks fail
- User doesn’t respond
Objection Handling
Test how the agent responds to:
- “I’m not interested”
- “Call me back later”
- “I need to think about it”
Edge Cases
Test unusual scenarios:
- Very short responses
- Long, rambling inputs
- Off-topic questions
- Multiple questions at once
Troubleshooting
| Issue | Solution |
|---|---|
| Agent not responding | Check if LLM model is configured correctly |
| Wrong greeting | Verify call type (inbound/outbound) is set correctly |
| Missing personalization | Ensure context variables are set |
| Webhook not triggering | Check tool descriptions and conditions |
| Slow responses | Try a faster LLM model |

