About us
Mixpanel for Voice AI agents. We map caller journeys, show developers where and why calls are not succeeding, and help them improve their Voice AI agents.
- Website
- https://voice.canonical.chat
- Industry
- Technology, Information and Internet
- Company size
- 2-10 employees
- Type
- Privately Held
Updates
-
A Voice AI developer just said to me, "I don't listen to music anymore. I just listen to call recordings with my Voice AI." Sound familiar? We can help you improve your agents without listening to calls manually, so you can get back to listening to the Generationals. See the link below on how to get started.
-
We're building Mixpanel for Voice AI. I created a Voice AI agent so I could use our product. Here is what I learned and how I improved the agent.

First, some context on the Voice AI agent: it interviewed founders about their product, their company, or their industry, then created LinkedIn and X posts based on the interview. Ok, so here are the top three results.

1. Someone tried to jailbreak my Voice AI. There were only 16 calls, and one of them was an attempted jailbreak! The attempt failed. Good thing I was using Vapi!
2. My voice clone API broke, and my agent just stopped speaking. I found the issue because my calls were dropping off at the first stage of the call. I could see it in the call map on our analytics platform. I switched voice clone providers, and the issue was fixed.
3. I initially designed the AI to ask for the caller's contact info at the beginning of the call. Callers dropped off at that point. I could see it in our call map. I changed the system prompt so it doesn't ask for contact details upfront, which led to more successful interview completions.

If you're building a Voice AI agent and you're not analyzing calls, these same issues may be happening to your agent. Check out what we're building (link in the comments). We'd love to learn more about what you're building and how we can help!
-
Exciting news! We just launched our integration with Vapi! We’ve made it even easier for Vapi Voice AI developers to debug and improve their agents.

We’re building Mixpanel for Voice AI agents. We map caller journeys, show developers where and why calls are not succeeding, provide audio and conversational metrics, and help them improve their Voice AI agents.

Voice AI is an exciting space! It’s been fun meeting Voice AI builders, learning about the interesting and fast-growing use cases for modern Voice AI, and being a part of the Voice AI community! If you’re building on Vapi, we’d love it if you sent calls to our platform so we get the chance to learn more about what you’re building! See the link below.
-
Do you want to improve your Voice AI agent, but have too many calls to listen to them all? DM Tom Shapland and we'll map your calls for you!
-
It's thrilling to see all the inbound interest in our new Mixpanel for Interactive AI agents platform! I'm seeing an epiphany moment when developers realize they can see the journeys their callers are taking. Suddenly they have visibility into where and why their AI agents are not achieving the call objective. DM me if you'd like to run some calls through our pipeline.
-
We're thrilled to launch our new product! Voice AI developers listen to call samples to improve their AI agents. It's like drinking from a fire hose. With our new product, you see what’s going on in all your calls so you can improve your agent's performance.

Here's more background. We were building a semantic cache to reduce latency and cost for LLM apps. Our first target market was Voice AI developers because latency is critical for the user experience. We talked to hundreds of people building voice or multimodal interactive AI agents. Again and again, we heard, “Sure, it’d be great to lower latency and cost, but we’re not there yet. We just need to get the AI agent to do what it’s supposed to. And we don’t know where it’s failing.”

So we decided to pivot and build Mixpanel for voice and multimodal interactive AI agents. We give developers a map of user journeys. We provide insights into why calls are not reaching their objectives. And we give developers both conversational metrics (e.g., interruption count) and technical metrics (e.g., latency) on individual calls. Check it out here: https://lnkd.in/g-sQBZJQ
Voice AI Call Quality Analysis
https://www.youtube.com/