How to Build Custom Voice Agents with ElevenLabs and ChatBotKit
Voice agents are one of the most compelling applications of conversational AI, but building a truly capable one usually means stitching together speech-to-text, language models, text-to-speech, and custom tool integrations by hand. With ChatBotKit and ElevenLabs you can skip the plumbing and focus on what your agent actually does.
In this tutorial you will connect ChatBotKit's skill system to ElevenLabs Conversational AI through the MCP Server integration. ChatBotKit handles the tool logic - API calls, data lookups, multi-step workflows - and ElevenLabs handles the voice layer with low-latency speech synthesis across thousands of voices and dozens of languages. The result is a voice agent that can do far more than answer questions: it can take actions, query live data, and orchestrate complex tasks, all through natural spoken conversation.
What You'll Learn
- How to design a ChatBotKit blueprint with custom abilities for a voice agent
- How to expose your skillset as an MCP server that any external client can connect to
- How to register that MCP server in ElevenLabs and attach it to a voice agent
- How to use advanced tool call patterns including parameterized abilities, chained actions, and conditional logic
Prerequisites
- A ChatBotKit account
- An ElevenLabs account with access to Conversational AI (ElevenAgents)
- Basic familiarity with the ChatBotKit Blueprint Designer
- Approximately 30 minutes
Architecture Overview
The integration works in two layers:
- ChatBotKit - you build a skillset containing abilities (tool definitions). You then create an MCPServer integration that exposes that skillset as a standards-compliant MCP endpoint. This gives you a URL and access token.
- ElevenLabs - you register that URL as a Custom MCP Server under Integrations. You then add the server to your voice agent's tool configuration. When a user speaks to the agent and the LLM decides a tool call is needed, ElevenLabs calls your ChatBotKit MCP endpoint, executes the ability, and speaks the result back to the user.
This architecture means you can iterate on your tool logic entirely within ChatBotKit - adding abilities, updating parameters, connecting data sources - without touching the ElevenLabs configuration. The MCP server reflects changes in real time.
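Under the hood, every tool invocation is a JSON-RPC 2.0 `tools/call` request over MCP. A minimal sketch of the request ElevenLabs sends to your ChatBotKit endpoint (the tool and argument names here are illustrative, not the actual catalogue output):

```python
import json

def build_tool_call(request_id: int, tool_name: str, arguments: dict) -> dict:
    """Construct a JSON-RPC 2.0 MCP tools/call request body."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Example: the LLM decided a ticket search is needed for this turn.
request = build_tool_call(1, "zendesk_ticket_search", {"query": "login failure"})
print(json.dumps(request, indent=2))
```

ChatBotKit executes the named ability and returns a JSON-RPC response, which ElevenLabs feeds back to the LLM.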
Step 1: Create the Blueprint in ChatBotKit
Start by building a blueprint with a skillset that contains the abilities your voice agent will use. For this tutorial we will create a customer support agent that can manage Zendesk tickets, look up Stripe billing details, schedule follow-ups in Google Calendar, and send confirmation emails through SendGrid.
- Navigate to Blueprints in your ChatBotKit dashboard
- Click Create and select Blueprint
- Name it "Voice Support Agent"
- Click Design to open the Blueprint Designer
Add a Bot, a Skillset, and the abilities shown below:
This blueprint uses five real ability templates from the ChatBotKit catalogue:
- zendesk/ticket/search and zendesk/ticket/create for support ticket management
- stripe/customer/fetch for billing lookups
- google/calendar/event/create for scheduling follow-up calls
- sendgrid/email/send for confirmation emails
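When the skillset is exposed over MCP, each ability surfaces to clients as a tool definition with a name, a description, and a JSON Schema for its parameters. A rough sketch of how one of these abilities might appear during tool discovery (the field values are assumptions for illustration; ChatBotKit generates the actual names and schemas):

```python
# Illustrative MCP tool definition as a client might receive it from tools/list.
# The exact name and schema ChatBotKit generates for this ability may differ.
ticket_search_tool = {
    "name": "zendesk_ticket_search",
    "description": "Search existing Zendesk support tickets by keyword or requester.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search terms"},
            "status": {"type": "string", "enum": ["open", "pending", "solved"]},
        },
        "required": ["query"],
    },
}

print(ticket_search_tool["name"])
```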
The mcpserverIntegration resource at the bottom creates the MCP server endpoint as part of the blueprint. When you build this blueprint, ChatBotKit will generate the MCP server URL and access token automatically.
(1) The Bot resource can also be used independently of ElevenLabs. Open the bot in the ChatBotKit Collabo chat interface to test your abilities, debug tool responses, and iterate on the backstory before connecting it to a voice agent. This is the fastest way to verify that your tools work correctly.
(2) The MCPServer Integration is what ElevenLabs connects to. After building the blueprint, open this integration in your dashboard and copy the Server URL and Access Token - you will need both in the next steps to register the server in ElevenLabs.
Save and build your blueprint.
Step 2: Copy the MCP Server URL and Access Token
Because the blueprint includes an mcpserverIntegration resource, the MCP server was created automatically when you built the blueprint. Now you just need to grab the connection details.
- Go to Integrations in your ChatBotKit dashboard
- Find the ElevenLabs Voice Agent MCP server integration that was created by the blueprint
- Click on it to open the details page
You will see two critical pieces of information:
- Server URL - the MCP endpoint URL (e.g. https://api.chatbotkit.com/v1/integration/mcpserver/<id>/mcp)
- Access Token - a bearer token for authenticating requests
Copy both values and store them securely. You will need them in the next step.
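Before switching to ElevenLabs, you can sanity-check the endpoint yourself. MCP's streamable HTTP transport takes JSON-RPC over POST with a bearer token, so a request for the tool list looks roughly like this (the URL and token below are placeholders, and a real session also begins with an `initialize` handshake):

```python
import json
import urllib.request

SERVER_URL = "https://api.chatbotkit.com/v1/integration/mcpserver/<id>/mcp"  # placeholder
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder

def build_tools_list_request(url: str, token: str) -> urllib.request.Request:
    """Assemble (but do not send) an authenticated MCP tools/list request."""
    body = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json, text/event-stream",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

req = build_tools_list_request(SERVER_URL, ACCESS_TOKEN)
# urllib.request.urlopen(req) would send it; skipped here since the URL is a placeholder.
print(req.get_header("Authorization"))
```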
Tip: You can also create MCPServer integrations manually from the Integrations page if you prefer not to include them in the blueprint. You can create multiple integrations pointing to different skillsets, letting you expose different sets of tools to different voice agents or platforms.
Step 3: Register the MCP Server in ElevenLabs
Now switch to ElevenLabs to register your ChatBotKit MCP server.
- Log in to your ElevenLabs dashboard
- Navigate to Conversational AI in the sidebar
- Go to Integrations and click Add Custom MCP Server
- Fill in the MCP server details:
- Name: "ChatBotKit Support Tools"
- Description: "Customer support tools powered by ChatBotKit abilities"
- Server URL: Paste the MCP endpoint URL you copied from ChatBotKit
- Secret Token: Paste the access token from your ChatBotKit MCPServer integration in the Authorization header field
- Click Add Integration
ElevenLabs will connect to your MCP server and discover the available tools. You should see your five abilities listed: "Search Support Tickets", "Create Support Ticket", "Look Up Customer Billing", "Schedule Follow-Up Call", and "Send Confirmation Email".
Note: ElevenLabs supports both SSE (Server-Sent Events) and streamable HTTP transport. ChatBotKit's MCP server is compatible with both.
Step 4: Create the Voice Agent in ElevenLabs
With the MCP server registered, create a voice agent that uses your ChatBotKit tools.
- In ElevenLabs, go to Conversational AI and click Create Agent
- Choose a blank template or start from scratch
- Configure the agent basics:
- Name: "Support Voice Agent"
- Voice: Pick a voice that fits your brand (ElevenLabs offers 5,000+ voices across 31 languages)
- Language Model: Select the model you want ElevenLabs to use for the conversation
- Write a system prompt that guides the voice agent. For example:
You are a helpful customer support agent. You can search and create support tickets, look up customer billing details, schedule follow-up calls, and send confirmation emails. Keep your responses short and natural since this is a voice conversation. Always confirm details before taking actions.
- Under Tools, click Add Server and select the ChatBotKit Support Tools MCP server you registered in the previous step
- Configure the Tool Approval Mode. ElevenLabs offers three options:
- Always Ask - the agent asks the user for permission before each tool call (most secure)
- Fine-Grained Tool Approval - you control which tools run automatically and which need approval
- No Approval - tools run automatically (fastest experience)
For a voice agent, Fine-Grained Tool Approval is often the best choice. You might allow "Search Support Tickets" and "Look Up Customer Billing" to run automatically while requiring approval for "Create Support Ticket" or "Send Confirmation Email" since those modify data.
- Save your agent
Step 5: Test the Voice Agent
- Open your agent in ElevenLabs and click Test to start a voice conversation
- Try speaking these prompts:
- "Can you look up any open tickets for ?"
- "What's the billing status for customer cus_abc123?"
- "Schedule a follow-up call for tomorrow at 2pm"
- "Send Jane a confirmation email summarizing what we discussed"
- Watch how the agent calls your ChatBotKit abilities through MCP and speaks the results back to you
The conversation flow looks like this:
- You speak to the agent
- ElevenLabs transcribes your speech and sends it to the LLM
- The LLM decides a tool call is needed and selects the appropriate ChatBotKit ability
- ElevenLabs calls the ChatBotKit MCP server with the tool parameters
- ChatBotKit executes the ability and returns the result
- The LLM incorporates the result into a response
- ElevenLabs synthesizes the response as speech
All of this happens in real time with low latency, creating a natural conversational experience.
Going Further: Advanced Patterns
The real power of this integration comes from ChatBotKit's flexible ability system. Here are some advanced patterns you can implement.
Multi-Step Tool Chains
Design abilities that feed into each other. The blueprint we built already demonstrates this - the LLM can search for an existing ticket, then create a new one if nothing is found, look up the customer's billing status, schedule a follow-up, and send a confirmation email. The LLM chains these naturally without any orchestration logic on your part.
You can extend this further by adding more abilities. For example, add ticket update and availability check abilities so the agent can modify existing tickets and find open calendar slots:
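Sketched as MCP tool definitions, the two additions might surface like this (the names and schemas below are assumptions for illustration; the actual catalogue templates define their own):

```python
# Illustrative tool definitions for the two new abilities as a client might
# discover them via tools/list. Exact ChatBotKit-generated shapes may differ.
extra_tools = [
    {
        "name": "zendesk_ticket_update",
        "description": "Update the status of, or add a comment to, an existing ticket.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "ticket_id": {"type": "string"},
                "comment": {"type": "string"},
            },
            "required": ["ticket_id"],
        },
    },
    {
        "name": "google_calendar_availability",
        "description": "List open time slots on the support calendar.",
        "inputSchema": {
            "type": "object",
            "properties": {"date": {"type": "string", "description": "ISO date"}},
            "required": ["date"],
        },
    },
]

print([t["name"] for t in extra_tools])
```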
With these additions, a single voice conversation might flow like this: the agent searches for an existing ticket, updates it with the new issue, checks available time slots, schedules a follow-up at the customer's preferred time, and sends a confirmation email - all through natural spoken dialogue.
Expanding with More Catalogue Templates
ChatBotKit's catalogue includes 100+ pre-built ability templates you can add to your skillset. Some particularly useful ones for voice agents:
- hubspot/contact/fetch and hubspot/contact/update - pull up and update CRM records during a call
- slack/message/send - notify your team in Slack when a high-priority issue is raised
- twilio/lookup/phone - validate and enrich phone numbers the customer provides
- google/calendar/availability/book - book a time slot directly from the availability list
- cbk.search/web - search the web for product information or documentation
- notion/search - query your internal knowledge base for answers
Add any of these to your skillset in the Blueprint Designer, and they automatically appear as tools in ElevenLabs through the MCP connection. No changes needed on the ElevenLabs side.
Notifying Your Team in Real Time
A powerful pattern for voice support is to notify your team when something important happens during a call. Add a Slack notification ability so the agent can alert a channel when it escalates an issue:
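When the agent escalates, the resulting tool call might carry a payload like the following (the tool name mirrors the slack/message/send template; the channel, ticket number, and parameter names are made up for the example):

```python
# Illustrative tools/call payload for a Slack escalation; all values are examples.
escalation_call = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "slack_message_send",
        "arguments": {
            "channel": "#support-escalations",
            "text": "High-priority issue raised on a live call: ticket #4821 (login outage).",
        },
    },
}

print(escalation_call["params"]["name"])
```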
Conversation-Aware Responses
ChatBotKit abilities return structured data that the LLM naturally converts into spoken responses. For instance, the stripe/customer/fetch ability might return:
The LLM will naturally convert this into a spoken response like: "I can see your account, Jane. You're on the Pro Monthly plan and your subscription is active with no outstanding balance."
Deploying Your Voice Agent
Once testing is complete, you can deploy your ElevenLabs voice agent through several channels:
- Website widget - embed the ElevenLabs voice widget on your site
- Phone (SIP trunk) - connect to your existing telephony system
- Phone (Twilio) - use ElevenLabs' native Twilio integration for inbound/outbound calls
- Mobile apps - use the iOS (Swift), Android (Kotlin), or React Native SDKs
- Custom WebSocket - build a fully custom integration using the WebSocket API
All these deployment options use the same underlying agent configuration and ChatBotKit MCP tools. Update your abilities in ChatBotKit and every deployment channel reflects the changes immediately.
Troubleshooting
Tools are not appearing in ElevenLabs
- Verify the MCP Server URL is correct in the ElevenLabs integration settings
- Check that the access token is pasted into the Secret Token (Authorization header) field
- Ensure the ChatBotKit MCPServer integration is pointing to the correct skillset
- Try removing and re-adding the MCP server in ElevenLabs to force a re-discovery
Tool calls fail during voice conversation
- Test the abilities directly in ChatBotKit first using the Collabo chat interface
- Check that ability parameter names match what the LLM is passing
- Verify any external API endpoints referenced in your abilities are online and responding
- Review the ChatBotKit integration event log for error details
Voice agent responds slowly during tool calls
- Ensure your external APIs respond quickly (under 2-3 seconds is ideal for voice)
- Add a conversational filler in the ElevenLabs system prompt (e.g. "Say 'one moment' before making tool calls")
- Consider using ChatBotKit's caching capabilities for frequently accessed data
- Simplify ability responses to return only the data the LLM needs
Authentication errors
- Regenerate the access token in ChatBotKit and update it in ElevenLabs
- Make sure the token is entered as the Secret Token, not in the Server URL field
- Check that your ChatBotKit account and MCPServer integration are both active
Next Steps
- Explore the full MCP Server Integration feature page for advanced configuration options
- Learn how to connect any MCP server to your AI agent for the reverse pattern - bringing external tools into ChatBotKit
- Read the ElevenLabs MCP documentation for more on tool approval modes and security
- Try adding shell execution abilities to give your voice agent the power to run code
- Build a multi-agent system where your voice agent delegates to specialized sub-agents