Sep 6, 2018 | Read time 2 min

Speechmatics Extends Transcription Offering With Sounds Feature

Sounds extends the Custom Dictionary feature, which lets users add context-specific words in real time, such as footballer names or breaking-news locations, instantly delivering consistent, reliable output for those words and ultimately increasing transcription accuracy.

Speechmatics has launched the Sounds feature, a new addition to the company’s current speech-to-text offering. Sounds supports broadcasters by delivering highly accurate transcripts: it allows the speech engine to understand the difference between how words are pronounced and how they are written.

By refining pronunciations within Custom Dictionary, Sounds can help with the spelling of names, products, acronyms, abbreviations, trademarked and copyrighted terms, and alternate word forms.

Ian Firth, VP Products at Speechmatics, explained:

“In the broadcast industry, subtitling names and words that don’t sound the way they are written is an ongoing bugbear and can cause significant embarrassment for the broadcaster: knowing how to spell Condoleezza Rice’s name flawlessly every time, for example. With Sounds, our engine simply needs pronunciation hints that describe the sounds of the words, like ‘AI’ pronounced as ‘ay eye’. It is not necessary to use phonetic definitions; you just need something that sounds like the word you want to define, written the way you want it to appear. It can even be used to change things completely: for example, you could easily configure it so that ‘Dr’ is written every time ‘Doctor’ is said.”
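For illustration, the block below is a minimal sketch of what such pronunciation hints can look like in a Custom Dictionary configuration. The additional_vocab and sounds_like field names follow Speechmatics’ published transcription config; the entries themselves are hypothetical and may differ from the product as shipped at launch.

```json
{
  "transcription_config": {
    "language": "en",
    "additional_vocab": [
      { "content": "AI", "sounds_like": ["ay eye"] },
      { "content": "Condoleezza Rice", "sounds_like": ["condoleeza rice"] },
      { "content": "Dr", "sounds_like": ["doctor"] }
    ]
  }
}
```

Each entry pairs the written form the broadcaster wants in the transcript (content) with one or more spoken renderings (sounds_like), which is how ‘Dr’ can be output whenever ‘Doctor’ is said.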

With other vendors’ offerings, a pronunciation pack is required to use a hints-style feature. With the Speechmatics solution, Sounds enhances the speech engine only when required and supports multiple pronunciations of the same word. Because the solution can be deployed in private environments, it is highly secure and accessible only to the broadcaster.
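As a sketch of how a broadcaster might supply those hints at request time, the Python snippet below submits a transcription job with a config like the one above. The endpoint URL, headers, form fields and filename here are assumptions for illustration, not a definitive reference to the documented API.

```python
import json
import requests

API_URL = "https://asr.api.speechmatics.com/v2/jobs"  # assumed endpoint
API_KEY = "YOUR_API_KEY"

# Transcription config carrying Custom Dictionary "sounds_like" hints.
config = {
    "type": "transcription",
    "transcription_config": {
        "language": "en",
        "additional_vocab": [
            # Two sounds_like variants cover alternate pronunciations
            # of the same name.
            {"content": "Condoleezza Rice",
             "sounds_like": ["condoleeza rice", "kon doll ee za rice"]},
            {"content": "AI", "sounds_like": ["ay eye"]},
        ],
    },
}

# Post the audio and the config together as a multipart request.
with open("broadcast_audio.wav", "rb") as audio:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"data_file": audio},
        data={"config": json.dumps(config)},
    )
response.raise_for_status()
print(response.json())  # job details, used later to fetch the transcript
```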

Firth continued:

“Accuracy is still the key metric for speech recognition systems and remains top of mind throughout the development of our solutions. At Speechmatics, we pride ourselves on unparalleled accuracy rates, and Sounds enables us to keep improving our speech-to-text accuracy by building technology that understands pronunciations personalised to the user and use case when needed.”

The Sounds feature is available now through Custom Dictionary. To try a demo at IBC in Amsterdam, 14-18 September, visit Speechmatics at stand 8E17.
