May 2, 2016 | Read time 2 min

Speechmatics unveils Universal Time Alignment system


Speechmatics releases Universal Time Alignment, our language-independent forced-alignment service that accurately and automatically matches words in text files to their counterparts in audio files, improving content discoverability in any language!

The R&D team at Speechmatics have used their deep learning expertise to create a highly accurate and automated system for aligning audio to text.

By synchronising audio to text, Universal Time Alignment can be used to create closed captions and subtitles, index archives and enrich human-generated transcripts with extra metadata – tasks that would usually be carried out laboriously by hand. In an industry where metadata and searchability are becoming increasingly crucial, time alignment offers a simple and very cost-effective way of making audio, video and text searchable across any language.

To create Universal Time Alignment we extracted elements from our modular speech recognition technology, re-engineered them for the purpose and added alignment-specific technology based on our machine learning expertise and experience. As a result we have created a system that is not only robust and accurate, but crucially able to cope with any language in the world.
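To make the caption use case concrete, here is a minimal sketch of how word-level alignment output could drive subtitle generation. The `(word, start, end)` timings below are invented for illustration; a real alignment service would supply them, and the grouping rule is deliberately naive.

```python
# Turn hypothetical word-level timings from an aligner into SRT cues.

def to_srt_time(seconds):
    """Format seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def words_to_srt(words, max_words_per_cue=4):
    """Group (word, start_sec, end_sec) triples into numbered SRT cues."""
    cues = []
    for i in range(0, len(words), max_words_per_cue):
        chunk = words[i:i + max_words_per_cue]
        start, end = chunk[0][1], chunk[-1][2]
        text = " ".join(w for w, _, _ in chunk)
        cues.append(
            f"{len(cues) + 1}\n{to_srt_time(start)} --> {to_srt_time(end)}\n{text}\n"
        )
    return "\n".join(cues)

# Invented example timings, standing in for real aligner output.
aligned = [("Hello", 0.00, 0.42), ("and", 0.42, 0.55),
           ("welcome", 0.55, 1.10), ("to", 1.10, 1.20),
           ("the", 1.20, 1.31), ("show", 1.31, 1.80)]

print(words_to_srt(aligned))
```

A production system would also break cues at sentence boundaries and cap characters per line, but the essential input is the same: one timestamp pair per word.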

“Any language in the world” was not quite specific enough, as Dr Tom Ash (Director of Speech Recognition at Speechmatics) found out: “when we told the commercial team it would work with foreign languages, we were confident that we would cope with the French and German broadcast and telephony they were intending to throw at the system. However, when they discovered that we had successfully time-aligned 14th century Italian epics and the works of Chinese poet Xu Zhimo, even they were surprised.”

Speechmatics’ Universal Time Alignment system is a game changer in an industry that has long suffered from over-promise and under-delivery. This technology is a big step towards bringing the efficiency savings of speech technology to more difficult audio.

There are still many cases where audio quality is too low for traditional ASR to add value or save time in the workflow. However, in the cases where a human transcript has to be created, time alignment can now be applied to that transcript to add further value, reduce the cost of human time-stamping and aid discoverability. We encourage everyone to visit our site at www.speechmatics.com/register to see for themselves how we can help content owners and transcribers extract the most value from their audio and video inventory.
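One common way to time-stamp a human transcript is to align its words against time-stamped recogniser output and copy timestamps across wherever the two word sequences match. Speechmatics' actual method is not public; the sketch below only illustrates the general idea, using Python's standard-library `difflib.SequenceMatcher` for the sequence alignment and invented example data.

```python
# Hedged sketch: transfer timestamps from ASR output to a human
# transcript via a standard edit-distance-style sequence alignment.
from difflib import SequenceMatcher

def transfer_timestamps(transcript_words, asr_words):
    """asr_words: (word, start_sec, end_sec) triples with timings.
    Returns the transcript words paired with timings where the two
    word sequences match, and (None, None) where they diverge."""
    asr_tokens = [w.lower() for w, _, _ in asr_words]
    sm = SequenceMatcher(a=[w.lower() for w in transcript_words],
                         b=asr_tokens, autojunk=False)
    timed = [(w, None, None) for w in transcript_words]
    for block in sm.get_matching_blocks():
        for k in range(block.size):
            _, start, end = asr_words[block.b + k]
            timed[block.a + k] = (transcript_words[block.a + k], start, end)
    return timed

# Invented example: the ASR output has a disfluency ("uh") and
# misses the transcript word "and", but matching words still get times.
transcript = ["Hello", "and", "welcome", "to", "the", "show"]
asr = [("hello", 0.0, 0.4), ("uh", 0.4, 0.5), ("welcome", 0.6, 1.1),
       ("to", 1.1, 1.2), ("the", 1.2, 1.3), ("show", 1.3, 1.8)]
for word, start, end in transfer_timestamps(transcript, asr):
    print(word, start, end)
```

Unmatched words can then be interpolated from their timed neighbours; robust production systems typically use acoustic forced alignment rather than text-only matching, which is precisely the gap a service like this fills.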
