Voice AI · Customer Service

LegacyCore + ElevenLabs Integration

The problem

Final-expense insurance is a phone-call business. The buyer is typically 50 to 85 years old and decides on the call, not in an email follow-up. That means voice quality is not a nice-to-have: a robotic tone collapses trust in the first ten seconds and the call ends. Traditional IVR menus and the synthesized text-to-speech tools of a decade ago do not survive the first objection. We needed a voice that sounds like a person on a Tuesday afternoon, with realistic pacing, breath, and emotional warmth.

How the integration works

ElevenLabs powers the voice layer for LegacyCore’s customer-service surface and the Conservation Agent that calls clients on missed payments. When a CS lead comes in, our AI Customer Service flow loads the client’s policy context from Supabase, opens an ElevenLabs streaming session, and synthesizes the agent’s response in real time while the speech-to-text pipeline transcribes the caller’s side. The transcript feeds the LLM, the LLM picks the next reply, and ElevenLabs streams it back; the round trip is fast enough that callers do not notice the gap.
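The per-turn loop described above can be sketched as a small pipeline. This is an illustrative TypeScript sketch, not our production code: the function names (`transcribe`, `nextReply`, `synthesize`) are hypothetical placeholders for the STT provider, the LLM call, and the ElevenLabs streaming session.

```typescript
// Hypothetical per-turn pipeline: STT -> LLM -> TTS. The interface members
// stand in for real provider calls and are assumptions, not a real SDK.
type Turn = { caller: string; agent: string };

interface Pipeline {
  transcribe(audio: Buffer): Promise<string>;                    // speech-to-text
  nextReply(context: string[], caller: string): Promise<string>; // LLM picks the reply
  synthesize(text: string): Promise<Buffer>;                     // ElevenLabs streams audio
}

// One conversational turn: transcribe the caller's audio, ask the LLM for
// the next reply, synthesize it, and append both sides to the transcript.
async function handleTurn(
  p: Pipeline,
  context: string[],
  callerAudio: Buffer,
): Promise<Turn> {
  const caller = await p.transcribe(callerAudio);
  const agent = await p.nextReply(context, caller);
  await p.synthesize(agent); // in production this audio streams into the live call
  context.push(caller, agent);
  return { caller, agent };
}
```

Keeping the turn logic behind an interface like this also makes the flow testable with stubbed providers, which matters when the real round trip involves three external services.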

Every call is recorded, transcribed, and stored against the application record so human agents and managers can replay any moment. The webhook secret on the inbound ElevenLabs callback is verified per request via crypto.timingSafeEqual to prevent unsigned payloads from poisoning the agent thread. We use the canonical voice IDs documented in the LegacyCore vault, with one voice per agent persona (Lexi the pre-qualifier, the Closer, the Conservation specialist) so brand identity is consistent across every touchpoint.
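A constant-time webhook check like the one mentioned above might look like the following. This is a minimal sketch assuming an HMAC-SHA256 hex signature over the raw request body; the actual header name and signature format are defined in the ElevenLabs webhook docs, so treat those details here as assumptions.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Hypothetical verifier: recompute the HMAC of the raw body with the shared
// secret and compare it to the received signature in constant time.
// The hex-encoded-signature format is an assumption for illustration.
function verifyWebhook(
  secret: string,
  rawBody: string,
  signatureHex: string,
): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const received = Buffer.from(signatureHex, "hex");
  // timingSafeEqual throws on length mismatch, so reject short/long input first.
  if (received.length !== expected.length) return false;
  return timingSafeEqual(expected, received);
}
```

The length check before timingSafeEqual matters: Node's timingSafeEqual throws on unequal buffer lengths rather than returning false, so a malformed signature would otherwise crash the handler instead of being rejected.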

On the production traffic we have measured, transcription accuracy sits above 95% across landline and mobile callers, the mix typical of the final-expense buyer. First-token latency stays under 500 ms on a warm session, the threshold at which a synthetic voice starts to feel like a real one.

Why ElevenLabs specifically

We benchmarked the major voice synthesis providers on three things buyers in this demographic actually react to: tonal warmth, pacing variability, and graceful handling of interruptions. ElevenLabs cleared every benchmark by a wide margin and keeps shipping improvements to the underlying model at a pace we have not seen elsewhere. The streaming API also matched the latency budget the AI Closer needs to stay in flow during the actual close moment, where a half-second of dead air ends the deal.

Read the docs

Full ElevenLabs API documentation lives at elevenlabs.io/docs. The streaming endpoint and webhook signature scheme we use are both documented there.

Ready to host a submission node?

The voice AI is one of five agents running in production. Apply to operate a submission node and earn flat fees per issued policy.

Apply Now