Mar 6, 2026 | Read time 4 min

Speechmatics achieves a world first in bilingual Voice AI with new Arabic–English model

Sets a new accuracy bar for real-world code-switching: 35% fewer errors than the closest competitor.
Speechmatics Editorial Team

Powered by NVIDIA, the new industry-leading bilingual model includes the world's first Arabic–English bilingual medical model, achieving 6.3% WER on mixed-speech benchmarks and 35% fewer errors than the nearest competitor.

Speechmatics today launched its new Arabic–English bilingual model, a single production-ready model that handles Arabic dialects and English simultaneously. It can be deployed on-premises and on-device, supports speaker diarization and speaker focus, and runs across real-time and batch workflows. 

As part of the rollout, Speechmatics introduces the world's first Arabic–English bilingual medical model: a specialized clinical variant trained on twice the vocabulary of its English Medical Model, built to ensure that patient records are always accurate and up to date.

Code-switching: going beyond monolingual AI

A doctor names a drug in English then switches back to Arabic. A Gulf contact center agent shifts registers without thinking. A finance officer moves across both languages in a single sentence. Across MENA, this is Monday morning.

Monolingual models weren't built for this. When a speaker shifts between Arabic and English mid-sentence, the model loses the thread - misattributing words, dropping terminology, or simply getting it wrong. In a contact center or a clinical setting, that's not an edge case. It's the norm.

Our model, trained to handle both languages in a single pass, resolves this, with speaker diarization and speaker focus ensuring every word is attributed to the right person throughout. In new benchmarking, Speechmatics achieves a 35% lower Word Error Rate than Google on Arabic–English code-switching tasks (6.3% vs 9.7%), making it the most accurate code-switching model available.
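For readers who want to check the headline numbers, the "35% fewer errors" figure is a relative reduction in Word Error Rate, not an absolute difference. The sketch below (our own illustration, not Speechmatics' benchmarking code) shows the standard WER definition via word-level edit distance and verifies both relative-reduction claims from the reported figures:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

def relative_reduction(ours: float, theirs: float) -> float:
    """Relative error reduction: how much lower our WER is than theirs."""
    return (theirs - ours) / theirs

# One substitution in a three-word reference gives WER = 1/3
print(wer("take two tablets", "take too tablets"))  # 0.333...

# Reported code-switching figures: 6.3% vs Google's 9.7%
print(round(relative_reduction(6.3, 9.7) * 100))  # 35 (% fewer errors)

# Reported Arabic-only figures: 4.5% vs Google's 5.9%
print(round(relative_reduction(4.5, 5.9) * 100))  # 24 (% lower WER)
```

The same arithmetic applies to the Arabic-only comparison later in the article: (5.9 − 4.5) / 5.9 ≈ 24%.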

Dialect coverage that clears the field

Arabic carries distinct vocabulary, phonology, and rhythm across the region, including Gulf, Egyptian and Levantine dialects. Models trained on broadcast Modern Standard Arabic struggle the moment a real conversation starts.

Speechmatics leads major providers on Arabic-only transcription, delivering 24% lower Word Error Rate than Google (4.5% vs 5.9%) and outperforming OpenAI Whisper, AssemblyAI, Deepgram, Amazon, and Microsoft.

[Figure: Word Error Rate comparison across providers]

Built for enterprise deployment

Data sovereignty is a hard requirement across MENA, with regional data protection legislation in Saudi Arabia, the UAE, and beyond placing strict obligations on where voice data is processed and stored. Speechmatics meets this directly.

"This was critical to achieving meaningful outcomes for customers across the region who kept describing the same challenge. In a Cairo hospital or a Riyadh contact center, Arabic and English flow concurrently - the drug name arrives in English, the rest of the sentence is Arabic. Delivering significant impact meant removing that friction from voice interactions. We trained on real voices, real dialects and real clinical vocabulary - because that’s the only way to build something that truly works where it’s used.”

Katy Wigdahl, CEO, Speechmatics

The model deploys across cloud SaaS, on-premises, and on-device, powered by NVIDIA AI infrastructure and optimized through NVIDIA Dynamo-Triton for high-throughput, low-latency processing at scale. Sub-second latency is maintained across all deployment modes.

Real-time streaming and batch transcription run on the same model, removing the accuracy trade-off that typically comes with switching between the two. Speaker diarization, speaker focus, punctuated transcripts, and timestamped outputs are included as standard.

The world's first bilingual medical model

In clinical environments across MENA, English drug names, procedures, and dosages appear constantly inside Arabic speech. Generic models mishandle them, and those errors can land in the patient record.

The world's first Arabic–English bilingual medical model is trained on twice the vocabulary of Speechmatics' English Medical Model, incorporating English and Arabic clinical terminology, real dialect variation, and speech from actual clinical settings. It accurately transcribes ICD-10-CM codes, drug names, dosages, and clinical shorthand regardless of which language carries them. On-premises and on-device deployment make it viable for the regulated environments where clinical AI is increasingly being built across the region.

"We ran extensive evaluations on complex code-switching and dialect-heavy consultations across MENA. Speechmatics' bilingual medical model was the only one that met performance thresholds we require to maintain high-quality clinical documentation as we scale regionally. That alignment made the partnership a strong fit for our expansion."

Patrick Nguyen, Head of Engineering, MENA, Sully.ai

Both models are available now in our portal to test for yourself.
