Jun 22, 2022 | Read time 4 min

Accurate Speech-to-Text Is AI’s Solution to Vehicle Safety

In modern society, vehicles need to let drivers operate a smartphone or tablet safely, hands-free. Accurate speech-to-text, such as our ASR, is vital to road safety.
Benedetta Cevoli, Senior Machine Learning Engineer

In 2003, the UK government banned the use of hand-held mobile phones and other hand-held devices while driving a vehicle. In March 2022, they went a step further, banning drivers from taking photos or videos, and even scrolling through playlists or playing games. To use your phone, you need to be stationary with the keys removed from the ignition. Either that or an accurate speech-to-text system.

In our increasingly plugged-in society, dangerous distractions are available at your fingertips. In the US, 1 in every 4 car accidents is caused by texting while driving, so manufacturers are adapting to the times by building the latest speech-to-text technology into their vehicles.

Accurate voice recognition is necessary in any modern vehicle. A competent system should enable the driver to change a song, find a route/destination, set a service appointment, send emails/messages, and even set calendar appointments. In the future, we might be using our voice to direct self-driving cars, for example, by simply saying “drive me to London Paddington”.

Car environments are notoriously noisy. The level of noise can vary drastically depending on a multitude of driving conditions, such as vehicle speed, road surface, and engine revs, as well as atmospheric conditions such as wind and rain. Such noisy environments are a perfect test of our aim to understand every voice in every situation. That’s why we set out to learn how our Autonomous Speech Recognition (ASR) fares while being used in vehicles.

Speechmatics Overtakes the Opposition

To do so, we used the Audio-Visual Speech Recognition in a Car (AVICAR) corpus, which includes different types of speech from 86 English speakers recorded in in-car environments under five noise conditions: engine idling (IDL), 35mph with the windows up (35U), 35mph with the windows down (35D), 55mph with the windows up (55U), and 55mph with the windows down (55D).

Using these five conditions, we measured accuracy as weighted average word error rate (WER) against our competitors and found that Speechmatics’ ASR is significantly more accurate in vehicles than some of the industry’s biggest names, with the difference in errors ranging from 60% against the least accurate provider to 22% against the second most accurate.
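For readers curious about the metric itself, here is a minimal sketch of how a weighted average word error rate might be computed across the five AVICAR noise conditions. The per-condition error rates and word counts below are illustrative placeholders, not our benchmark figures:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words
    # (substitutions, insertions, deletions all cost 1).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[len(ref)][len(hyp)] / len(ref)

# Weighted average WER across conditions, weighted by reference word count.
# Condition names follow the AVICAR setup; the numbers are made up.
conditions = {
    "IDL": (0.04, 1200),  # (per-condition WER, reference word count)
    "35U": (0.06, 1100),
    "35D": (0.09, 1000),
    "55U": (0.07, 1150),
    "55D": (0.12, 950),
}
total_words = sum(n for _, n in conditions.values())
weighted_wer = sum(w * n for w, n in conditions.values()) / total_words
```

Weighting by word count (rather than averaging the five rates directly) keeps a small, easy condition from skewing the overall figure.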

This pattern is consistent across the five noise conditions: Speechmatics is the most accurate in every single condition. Critically, the data also shows that the absolute difference in accuracy between Speechmatics and the next closest provider doubles in the condition with the highest level of noise (the highest speed with windows down, 55D: 8% difference in accuracy) compared to the one with the lowest level of noise (the lowest speed with windows up, 35U: 4% difference in accuracy). In other words, as the level of noise increases, the accuracy gap between Speechmatics and its competitors widens.

What the Future of Handsfree Driving Looks Like

Human speech is extremely complex and varied – there isn’t one single way to talk. Vehicular voice recognition is particularly challenging because it combines voice differences in humans with the loud, disruptive nature of cars or any other kind of vehicle. So, voice recognition that is actually useful in the real world needs to prioritize accuracy in a noisy environment more than anything else.

The motor industry is a huge use case for speech-to-text technology, and manufacturers are taking notice. For example, Microsoft has partnered with BMW to create vehicle software based on Microsoft’s Azure, which aims to deliver a more natural-sounding system that accounts for imperfect speech. Amazon is going a step further, allowing manufacturers to build digital assistants using Alexa. This means their tech can direct drivers to Amazon stores and even let them order groceries while driving. So, while speech-to-text technology certainly has safety benefits, the commercial advantages are massive.

At Speechmatics, our sole focus is to understand every voice, whether that is through the 7100+ languages on Earth or simply being able to find a fuel station near you. Accuracy matters and, in this case, it directly impacts the quality and safety of hands-free driving.

Recognizing this, in-vehicle AI is becoming more competent with each passing year – most modern cars have some form of cruise control and voice assistant. Consumers are beginning to expect this as the bare minimum: 73% of drivers anticipate using built-in voice assistants for one or more purposes by 2022. Furthermore, Automotive World says that by 2028, 90% of new vehicles worldwide will have in-car voice tech embedded.

It’s clear, then, that voice recognition has a mobile future. Manufacturers must continue to innovate.

Benedetta Cevoli, Data Science Engineer, Speechmatics
