We’re excited to announce our partnership with LiveKit, the open-source framework for building real-time, multimodal AI agents that can see, hear, and speak.
Voice agents live or die by the quality of their ears. If the input fails, everything downstream does too. With this integration, LiveKit’s 100,000-strong developer community now has access to Speechmatics’ industry-leading speech recognition - real-time, accurate, and inclusive by design.
At the core: real-time speaker diarization
For the first time, Speechmatics brings speaker diarization to LiveKit agents - enabling them to understand not just what was said, but who said it.
That means developers can build voice AI that doesn’t just hear conversations - it understands them.
Why “who said what” matters
Most voice agents today are built for one-to-one interactions: one person, one device. But real conversations are rarely that simple. People interrupt. Voices overlap. Different speakers require different responses.
With Speechmatics speaker diarization built directly into LiveKit, agents gain speaker-level awareness in real time:
Personalized responses: tailor interactions to the right individual.
Smart workflows: escalate when a supervisor joins, unlock when an authorized user speaks, or adapt tone for VIPs.
Interruption handling: respond to the right person, even when voices overlap.
From access-controlled devices to drive-thrus and kiosks to multi-party meetings and customer support, diarization ensures agents always respond to the right person. And unlike with other providers, real-time diarization comes built in, not as an add-on.
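To make "speaker-level awareness" concrete: diarized real-time transcripts arrive as words tagged with a speaker label (Speechmatics labels speakers S1, S2, and so on). The helper below is our own illustrative sketch, not part of any SDK; it shows how word-level labels collapse into the "who said what" turns an agent can act on:

```python
# Illustrative sketch only: assumes diarized words arrive as dicts with
# "content" and "speaker" fields (speakers labelled "S1", "S2", ...).
from typing import Iterable


def group_by_speaker(words: Iterable[dict]) -> list[tuple[str, str]]:
    """Collapse a word-level diarized transcript into (speaker, utterance) turns."""
    turns: list[tuple[str, str]] = []
    for word in words:
        speaker = word["speaker"]
        if turns and turns[-1][0] == speaker:
            # Same speaker is still talking: extend the current turn.
            turns[-1] = (speaker, turns[-1][1] + " " + word["content"])
        else:
            # A new speaker took over: start a new turn.
            turns.append((speaker, word["content"]))
    return turns


words = [
    {"content": "Can", "speaker": "S1"},
    {"content": "you", "speaker": "S1"},
    {"content": "help?", "speaker": "S1"},
    {"content": "Of", "speaker": "S2"},
    {"content": "course.", "speaker": "S2"},
]
print(group_by_speaker(words))
# [('S1', 'Can you help?'), ('S2', 'Of course.')]
```

With turns attributed per speaker, the downstream logic above (escalate on a supervisor, unlock for an authorized voice, answer the right person during overlap) becomes a simple check on the speaker label.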
Smarter conversations with a custom dictionary
Every business has its own language - product names, acronyms, customer names, or technical jargon. If your agent doesn't recognize these terms, conversations fall flat.
Speechmatics solves this with a custom dictionary (also known as key-term prompting), included as standard at no extra cost:
Add up to 1,000 custom words (10x more than most providers).
Guide the model with “sounds-like” context for tricky names or terms.
Ensure your agent speaks your users’ language, every time.
The result: more personalized, impactful conversations, whether you're handling customer calls or domain-specific consultations and workflows.
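In the Speechmatics transcription config, custom dictionary entries are supplied as an `additional_vocab` list, each entry optionally carrying `sounds_like` hints. The field names below follow the Speechmatics real-time API; the helper function itself is a hypothetical convenience written for this example:

```python
MAX_VOCAB = 1_000  # Speechmatics accepts up to 1,000 custom dictionary entries.


def build_transcription_config(terms: list[dict]) -> dict:
    """Sketch: build a transcription config carrying custom-dictionary entries."""
    if len(terms) > MAX_VOCAB:
        raise ValueError(f"custom dictionary is limited to {MAX_VOCAB} entries")
    return {
        "language": "en",
        "additional_vocab": terms,
    }


config = build_transcription_config([
    # "sounds_like" guides the model on names or terms it might otherwise miss.
    {"content": "Speechmatics", "sounds_like": ["speech matics"]},
    {"content": "LiveKit", "sounds_like": ["live kit"]},
])
print(config["additional_vocab"][0]["content"])  # Speechmatics
```

The same config dict is passed when opening the recognition session, so the vocabulary applies from the first word of the conversation.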
Inclusive AI that speaks the world’s languages
Voice AI won’t scale on English alone. The biggest opportunities lie in non-English markets across Asia, the Middle East, Europe, and Latin America - regions where most ASR systems still stumble. Because labeled training data is scarce in these languages, competing systems often mishear accents, skip words, or fail outright.
Speechmatics has tackled this head-on, developing high-quality language models even in low-data conditions, all in service of our mission to understand every voice. Today, we deliver consistently high accuracy across 55+ languages, redefining what’s possible for global voice AI.
Our pioneering bilingual models, including Spanish–English, Mandarin–English, Malay–English, and Tamil–English, are uniquely designed for fluid multilingual conversations. They capture and transcribe code-switching naturally, without sacrificing accuracy. This empowers global businesses to communicate effortlessly across diverse linguistic environments, breaking down barriers and enabling truly inclusive AI.
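From a developer's perspective, using a bilingual model is a configuration choice rather than a pipeline change: one session handles both languages, so code-switching never forces a mid-conversation model swap. The sketch below is illustrative only - the language identifier is a placeholder, not a confirmed Speechmatics code, so consult the Speechmatics language documentation for the exact value:

```python
def bilingual_config(language_pair: str) -> dict:
    """Sketch: select a bilingual model so code-switched speech
    (e.g. Spanish-English) stays within a single recognition session."""
    return {
        # Placeholder identifier for illustration; the real code for a
        # Spanish-English bilingual model is documented by Speechmatics.
        "language": language_pair,
        "enable_partials": True,  # stream partial results for low latency
    }


config = bilingual_config("es-en")  # placeholder language identifier
print(config["language"])  # es-en
```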
Built for real-time from day one
Most speech engines were originally designed for offline transcription, with real-time support bolted on later. Speechmatics took the opposite path. We built for real-time from the start.
That means you don’t have to trade speed for accuracy: