Sep 2, 2025 | Read time 3 min

Introducing real-time, speaker-aware Voice Agents with LiveKit + Speechmatics

Anthony Perera, Product Marketing Manager

We’re excited to announce our partnership with LiveKit, the open-source framework for building real-time, multimodal AI agents that can see, hear, and speak.

Voice agents live or die by the quality of their ears. If the input fails, everything downstream does too. With this integration, LiveKit’s 100,000-strong developer community now has access to Speechmatics’ industry-leading speech recognition - real-time, accurate, and inclusive by design. 

At the core: real-time speaker diarization

For the first time, Speechmatics brings speaker diarization to LiveKit agents - enabling them to understand not just what was said, but who said it.

That means developers can build voice AI that doesn’t just hear conversations - it understands them.

Why “who said what” matters

Most voice agents today are built for one-to-one interactions: one person, one device. But real conversations are rarely that simple. People interrupt. Voices overlap. Different speakers require different responses.

With Speechmatics speaker diarization built directly into LiveKit, agents gain speaker-level awareness in real time:

  • Personalized responses: tailor interactions to the right individual.

  • Smart workflows: escalate when a supervisor joins, unlock when an authorized user speaks, or adapt tone for VIPs.

  • Interruption handling: respond to the right person, even when voices overlap.

From access-controlled devices to drive-thrus and kiosks to multi-party meetings and customer support, diarization ensures agents always respond to the right person. And unlike other providers, real-time diarization comes built-in, not as an add-on.  
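As a minimal sketch (assuming the field names of the public Speechmatics real-time API; your SDK or the LiveKit plugin may wrap these differently), enabling speaker diarization means setting `diarization` to `"speaker"` in the transcription config. Each recognized word then carries a speaker label such as `S1` or `S2`, which an agent can group by speaker:

```python
# Sketch of a Speechmatics real-time StartRecognition message with
# speaker diarization enabled (field names per the public real-time API).
start_recognition = {
    "message": "StartRecognition",
    "audio_format": {"type": "raw", "encoding": "pcm_s16le", "sample_rate": 16000},
    "transcription_config": {
        "language": "en",
        "diarization": "speaker",  # label each word with a speaker ID
    },
}

def words_by_speaker(transcript_msg: dict) -> dict:
    """Group recognized words by their speaker label (e.g. 'S1', 'S2')."""
    grouped: dict = {}
    for result in transcript_msg.get("results", []):
        best = result["alternatives"][0]
        grouped.setdefault(best.get("speaker", "UU"), []).append(best["content"])
    return grouped

# Example transcript payload with two speakers
msg = {
    "results": [
        {"alternatives": [{"content": "Hello", "speaker": "S1"}]},
        {"alternatives": [{"content": "Hi", "speaker": "S2"}]},
        {"alternatives": [{"content": "there", "speaker": "S2"}]},
    ]
}
print(words_by_speaker(msg))  # {'S1': ['Hello'], 'S2': ['Hi', 'there']}
```

With speaker-labelled words in hand, the agent logic above the transcription layer can route replies to the right participant.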

Smarter conversations with custom dictionary

Every business has its own language - product names, acronyms, customer names, or technical jargon. If your agent can’t recognize them, conversations fall flat.

Speechmatics solves this with a custom dictionary (also known as key-term prompting), included as standard at no extra cost:

  • Add up to 1,000 custom words (10x more than most providers).

  • Guide the model with “sounds-like” context for tricky names or terms.

  • Ensure your agent speaks your users’ language, every time.

The result: more personalized, impactful conversations, whether you’re handling customer calls or domain-specific consultations and workflows.
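A custom dictionary is supplied as `additional_vocab` in the transcription config, with optional `sounds_like` pronunciation hints. The sketch below uses the public API field names and a hypothetical product name, `Xylara`, for illustration; the 1,000-entry cap mirrors the limit cited above:

```python
# Sketch of a custom dictionary in a Speechmatics transcription config.
# "sounds_like" gives pronunciation hints for tricky names or terms.
transcription_config = {
    "language": "en",
    "additional_vocab": [
        {"content": "Speechmatics"},
        # Hypothetical product name with pronunciation hints:
        {"content": "Xylara", "sounds_like": ["zy lah rah", "zillara"]},
    ],
}

MAX_VOCAB = 1000  # up to 1,000 custom words, per the limit above

def validate_vocab(config: dict) -> None:
    """Basic sanity checks before sending the config to the API."""
    vocab = config.get("additional_vocab", [])
    if len(vocab) > MAX_VOCAB:
        raise ValueError(f"custom dictionary exceeds {MAX_VOCAB} entries")
    for entry in vocab:
        if "content" not in entry:
            raise ValueError("each entry needs a 'content' field")

validate_vocab(transcription_config)  # passes for the config above
```

Because the dictionary is just part of the config, it can be swapped per customer or per session without retraining anything.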

Inclusive AI that speaks the world’s languages

Voice AI won’t scale on English alone. The biggest opportunities lie in non-English markets across Asia, the Middle East, Europe, and Latin America - regions where most ASR systems still stumble. With limited labeled training data, competitors often mishear accents, skip words, or fail outright.

Speechmatics has tackled this head-on, developing high-quality language models even in low-data conditions, all in service of our mission to understand every voice. Today, we deliver consistently high accuracy across 55+ languages, redefining what’s possible for global voice AI.

Our pioneering bilingual models, including Spanish–English, Mandarin–English, Malay–English, and Tamil–English, are uniquely designed for fluid multilingual conversations. They capture and transcribe code-switching naturally, without sacrificing accuracy. This empowers global businesses to communicate effortlessly across diverse linguistic environments, breaking down barriers and enabling truly inclusive AI.

Built for real-time from day one

Most speech engines were originally designed for offline transcription, with real-time support bolted on later. Speechmatics took the opposite path. We built for real-time from the start.

That means you don’t have to trade speed for accuracy:

  • Low latency keeps conversations flowing naturally.

  • Higher accuracy (25% better than the next provider in real-world tests) means fewer errors reach your downstream systems.

  • Scalable performance works wherever you need it - cloud, on-prem, or even on-device.

Put simply: Speechmatics gives your agents the best possible input layer - the “ears” they rely on to make sense of the world.

And when you pair that with LiveKit’s flexible agent framework, developers can:

  • Build speaker-aware agents with diarization baked in.

  • Deliver personalized conversations with a custom dictionary.

  • Reach global audiences with multilingual, inclusive speech AI.

Start building today with Speechmatics + LiveKit

Explore the Speechmatics x LiveKit integration and start building world-class AI Agents.
