Jun 22, 2022 | Read time 4 min

Accurate Speech-to-Text Is AI’s Solution to Vehicle Safety

In modern society, vehicles need to let drivers operate a smartphone or tablet safely, hands-free. Accurate speech-to-text, such as our ASR, is vital to road safety.
Benedetta Cevoli, Senior Machine Learning Engineer

In 2003, the UK government banned the use of hand-held mobile phones and other hand-held devices while driving a vehicle. In March 2022, they went a step further, banning drivers from taking photos or videos, and even scrolling through playlists or playing games. To use your phone, you need to be stationary with the keys removed from the ignition. Either that or an accurate speech-to-text system.

In our increasingly plugged-in society, dangerous distractions are available at your fingertips. 1 in every 4 car accidents in the US is caused by texting while driving, so manufacturers are adapting to the times by building the latest speech-to-text technology into their vehicles.

Accurate voice recognition is necessary in any modern vehicle. A competent system should enable the driver to change a song, navigate to a destination, book a service appointment, send emails and messages, and even add calendar entries. In the future, we might be using our voice to direct self-driving cars, for example, by simply saying “drive me to London Paddington”.

Car environments are notoriously noisy. The level of noise can vary drastically depending on a multitude of driving conditions, such as the vehicle speed, road surface, and engine revs, as well as atmospheric conditions such as wind and rain. Such noisy environments are a perfect test for our aim to understand every voice in every situation. That’s why we set out to learn how our Autonomous Speech Recognition (ASR) fares while being used in vehicles.

Speechmatics Overtakes the Opposition

To do so, we used the Audio-Visual Speech Recognition in a Car (AVICAR) corpus, which includes speech from 86 English speakers recorded in in-car environments under five noise conditions: engine idling (IDL), 35mph with the windows up (35U), 35mph with the windows down (35D), 55mph with the windows up (55U), and 55mph with the windows down (55D).

Using these five conditions, we measured accuracy as a weighted average word error rate (WER) against our competitors and found that Speechmatics’ ASR is significantly more accurate in vehicles than some of the industry’s biggest names, whose error rates ranged from 22% (the second most accurate provider) to 60% (the least accurate).

This pattern is consistent across the five noise conditions: Speechmatics is the most accurate in every single condition. Critically, the data also shows that the absolute difference in accuracy between Speechmatics and the next closest provider doubles in the condition with the highest level of noise (the highest speed with windows down, 55D: 8% difference in accuracy) compared to the one with the lowest level of noise (the lowest speed with windows up, 35U: 4% difference in accuracy). In other words, as the level of noise increases, so does the accuracy gap between Speechmatics and its competitors.
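For readers curious how a weighted average WER across noise conditions is computed, here is a minimal sketch. The per-condition error rates and word counts below are hypothetical placeholders for illustration only, not the benchmark figures from this test.

```python
# Illustrative sketch: word error rate (WER) and a weighted average
# across in-car noise conditions. All numbers below are hypothetical.

def wer(substitutions: int, deletions: int, insertions: int, ref_words: int) -> float:
    """WER = (S + D + I) / number of words in the reference transcript."""
    return (substitutions + deletions + insertions) / ref_words

# Hypothetical per-condition results: (WER, number of reference words)
conditions = {
    "IDL": (0.05, 4000),  # engine idling
    "35U": (0.07, 4000),  # 35 mph, windows up
    "35D": (0.10, 4000),  # 35 mph, windows down
    "55U": (0.09, 4000),  # 55 mph, windows up
    "55D": (0.14, 4000),  # 55 mph, windows down
}

# Weight each condition's WER by its share of the total reference words,
# so larger test sets contribute proportionally more to the overall score.
total_words = sum(n for _, n in conditions.values())
weighted_wer = sum(w * n for w, n in conditions.values()) / total_words
print(f"Weighted average WER: {weighted_wer:.2%}")
```

Weighting by reference-word count (rather than averaging the five WERs directly) prevents a small, easy condition from skewing the overall figure.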

What the Future of Handsfree Driving Looks Like

Human speech is extremely complex and varied – there isn’t one single way to talk. Vehicular voice recognition is particularly challenging because it combines voice differences in humans with the loud, disruptive nature of cars or any other kind of vehicle. So, voice recognition that is actually useful in the real world needs to prioritize accuracy in a noisy environment more than anything else.

The motor industry is a huge use case for speech-to-text technology, and manufacturers are taking notice. For example, Microsoft has partnered with BMW to create vehicle software based on Microsoft’s Azure platform, aiming for a more natural-sounding system that accounts for imperfect speech. Amazon is going a step further, allowing manufacturers to build digital assistants using Alexa, meaning their tech can direct drivers to Amazon stores and even order groceries while driving. So, while speech-to-text technology certainly has safety benefits, the commercial advantages are also massive.

At Speechmatics, our sole focus is to understand every voice, whether that is through the 7100+ languages on Earth or simply being able to find a fuel station near you. Accuracy matters and, in this case, it directly impacts the quality and safety of hands-free driving.

Recognizing this, in-vehicle AI is becoming more competent with each passing year – most modern cars have some form of cruise control and voice assistant. Consumers are beginning to expect this as the bare minimum – 73% of drivers anticipate using built-in voice assistants for one or more purposes by 2022. Furthermore, Automotive World says that by 2028, 90% of new vehicles worldwide will have in-car voice tech embedded.

It’s clear, then, that voice recognition has a mobile future. Manufacturers must continue to innovate.

Benedetta Cevoli, Data Science Engineer, Speechmatics
