Jun 28, 2022

Speechmatics Raises $62m to Understand Every Voice Globally

World’s most accurate and inclusive speech-to-text engine available secures Series B funding of $62m led by Susquehanna Growth Equity.
Speechmatics Editorial Team

Speechmatics, the leading speech recognition technology scaleup, has raised $62m in Series B funding. Led by Susquehanna Growth Equity with participation from existing investors AlbionVC and IQ Capital, the funding will be used to support Speechmatics’ vision to be a world-leading speech platform, understanding every voice with human-level accuracy.

A team of global experts in deep learning and speech recognition, Speechmatics has built the most accurate and inclusive speech-to-text engine available. Its aim is to understand every voice regardless of the speaker’s demographics, age, gender, accent, dialect, or location. Historically, training data had to be manually tagged, classified, or ‘labelled’, so acceptable accuracy was only viable for a narrow set of the most commercially valuable speakers. Following a major breakthrough in 2021, the Speechmatics speech-to-text engine is now trained on millions of hours of unlabelled, more representative voice data spanning hundreds of thousands of individual voices, with no human labelling required. This has enabled a paradigm shift in accuracy, particularly for non-English languages, dramatically reducing both AI bias and errors in speech recognition.*

The Speechmatics engine already understands 34 languages for live and pre-recorded media. Industry-leading features include advanced punctuation and entity formatting (formatting of numbers, currencies, and addresses). Speechmatics works with customers and partners in any industry and with any use case – on-premises, via the cloud, and hybrid. Organisations ranging from 3Play Media, Veritone, Deloitte UK, and Vonage to government departments across the world already take advantage of its accuracy.

Speechmatics is headquartered in Cambridge, UK, with offices in London; Boston, USA; Chennai, India; and Brno, Czech Republic. The investment will be used to support global expansion across the United States and Asia-Pacific, improve infrastructure (particularly data centre capacity), and add further firepower to its research and development to deliver even more accurate understanding of even more voices.

Katy Wigdahl, CEO, Speechmatics, said, “Our progress in the last few years left us inundated with interest from investors for our Series B fundraise. We chose to work with Susquehanna Growth Equity because they have a fantastic global footprint and a history of supporting significant growth in software businesses. The team is smart and ambitious but also pragmatic which is critical in this environment.

“The patient capital will enable us to double down on our vision to close the gap between humanity and machines, which is incredibly exciting. The Speechmatics team is hugely ambitious. We have a real heritage in speech technology combined with some of the world’s most talented speech and machine learning experts. We cannot wait to accelerate our growth and unlock the understanding of more and more voices.”

Jonathan Klahr, Managing Director of Susquehanna Growth Equity, said, “The Speechmatics team are undoubtedly a different pedigree of technologists. We started tracking Speechmatics when our portfolio companies told us again and again that Speechmatics wins on accuracy against all the other options, including those coming from ‘Big Tech’ players. We are primed to work with the team to ensure that more companies can get exposed to and adopt this superior technology.”

Robert Whitby-Smith, Partner at AlbionVC, said, “Our view is that voice will become the increasingly dominant human-machine interface, and Speechmatics are the category leaders in applying deep learning to speech, with category-defining accuracy and understanding across industry use cases and requirements. We have witnessed the impressive growth of the team and product since our Series A investment in 2019, and as responsible investors we are delighted to support the company’s inclusive mission to understand every voice globally.”

Jonathan Klahr, Managing Director of Susquehanna Growth Equity will join the Speechmatics board.

*Based on datasets used in Stanford’s ‘Racial Disparities in Speech Recognition’ study, Speechmatics recorded an overall accuracy of 82.8% for African American voices, compared with Google (68.6%) and Amazon (68.6%). This level of accuracy equates to a 45% reduction in speech recognition errors, the equivalent of three words in an average sentence.


Ed Stacey, Managing Partner at IQ Capital, said, “Since investing in the seed round, we’ve supported Speechmatics to become a world-leading company that is revolutionising how speech technology is used, continuously driving AI and machine learning breakthroughs whilst rolling out a product that simply works in every industry use case in which it’s been tried. We’re delighted to continue backing this very talented team as part of the new investment round.”
